Yesterday, the sentiment on Google's early proposal was "company breakups start to make a lot of sense", "Go f yourself, Google", "It's maddening and saddening", "[the people involved] reputations are fully gone from this".
Today it turns out Apple not only proposed but implemented and shipped the actual feature last year. "It could be an interesting opportunity to reboot a few long-lost dreams". "I kind of get both sides here". "I guess I personally come down to leaving this turned on in Safari for now, and seeing what happens". Granted, the overall sentiment is still negative but the difference in tone is stark. The reality distortion field is alive and well, folks.
Probably not the RDF but rather that Google is famously bottom-up driven, they're asking for feedback in an open forum, and the proposal is by individuals who are responding as individuals. One of them even posted to Hacker News. That unfortunately incentivizes bullying behavior by people who don't like it and hope that if they're nasty enough, the individuals in question will give up.
Apple has no time for any of that. They consider, they plan, they act. You never learn the identities of anyone involved, they don't generally ask for feedback, they often don't even give the justifications for their plans, and squishy tech sentimentalities are considered irrelevant compared to consumer UX. Getting mad at what Apple does on some web forum is no more useful than getting mad at a brick wall.
There are reasons why the "faceless corporation" is a cliché, after all. It's a deliberate policy designed to protect employees.
It does in these cases protect employees, but that's not the design. It's designed to avoid accountability. If a company decides to illegally dump pollution into the ocean or bribe a foreign regime, they by no means want the executives who made those decisions to be easily identifiable. Companies don't go to jail.
They do if large shareholders or other executives need a scapegoat. Shareholders will look the other way to protect their profits, so if employees are doing something bad but profitable, the company protects them in times of outrage, IMO.
This is not true. Remember iCloud scanning for CSAM? Even though Apple was simply creating a process to do what everyone else (GDrive, OneDrive) was already doing, only with MORE privacy protections, they scrapped the entire thing after significant backlash.
Consumer voice is powerful. It shouldn't be underestimated.
Please correct me if I'm wrong, but didn't they scan after the photo was taken, not just before upload to the cloud? If so, that is very significant to my mind.
"I recommend finding everyone responsible for this and exercising your right to free speech on them. It works for politicians, and it should work on this other flavour of bastard too."
"I believe both of these users are acting in very-bad-faith, and not correctly observing any ethical codes of conduct in Engineering."
"As far as I am concerned the reputation of this Ben Wiser guy is so far down the toilet that there’s practically nothing he can do or say to recover it. Like the old joke goes “you screw a goat once…”"
"The people involved in this concept/idea/proposal should be shamed into retirement. They should never work in the tech sector again. They should be afraid to use their names before first knowing their audience (an agricultural audience would likely be OK)."
"sometimes I don't think constructive replies are appropriate or possible. "
"Magnitude of the malfeasance is so great they deserve to be held to account for it"
And lots more.
I'm pretty sure that beyond the personalization of the issue, 90% of the difference here can be explained by ad blockers. There's no deep technical or philosophical principle at work in most of those comments; what's clearly shining through is that tech people block ads a lot, feel they have a right to do so, and will get furious at any attempt to stop them. Apple doesn't care about click fraud, ad blocking, or spam on the web because those are other people's problems, so they limit their remote attestation to the CAPTCHA-reduction use case. This use case has the advantage that it improves the browsing experience for Apple users only. HN posters dislike CAPTCHAs as much as the next guy, so nobody cares. But Google wants there to be lots of web content that's free to access, so it also concerns itself with the publisher side of the web, not just the consumer side. It lists more use cases and asks for feedback; there are more consumers than creators, so surprise surprise, it gets a lot of hate.
Personally, I don't find most of those comments to be bullying. Harsh and uncouth, maybe, but in my opinion bullying requires there to be no cause for the criticism. In this case, there is every cause for criticism.
None of the comments you quote stand out as more than harsh criticism either. There's no bullying going on. The people pushing this proposal should be held to account for their actions, and it's moronic to argue otherwise.
It's the strangest thing. On any other platform, people want freedom. They don't want MS to force them to use Edge. They don't want websites to force them to use Chrome, etc.
With Apple, it's the opposite. Freedom to install a third-party app? That would be dangerous! Freedom to use iMessage in the browser? That just doesn't make sense! Freedom to use third-party browsers on iOS? I guess most people just don't care about that one.
It's just striking that for every other company, lock-in is bad. But for Apple, lock-in is actively evangelized by the user base.
I think most of their users are just happy with the way things are and don’t want any changes that might screw things up.
Overall they trust Apple to take care of things - that’s why they bought Apple stuff in the first place - and feel that anything that takes control from Apple and could prevent Apple from doing its job, would be bad for them.
Apple's incentives align more closely with mine than any other megacorp's.
And Apple is the only company in the world big enough to take on those other megacorps.
So, Apple certainly isn't perfect. But they're a hell of a lot better than any other megacorp. And I choose them to help defend me against those other megacorps.
Most importantly, I don't want Apple being crippled in their ability to fight the other megacorps.
It’s really not the same intention or implementation.
We should also consider that Apple's solution is a way to distinguish between human and non-human users on an Apple device. It doesn't allow a service to arbitrarily lock out browsers and/or OSes (which Google's proposal does); it just means that if you're already on your Apple device, you don't have to do a "verify I'm a human" CAPTCHA.
It's in the nature of the device you're using: an Apple device. This implies there are non-Apple devices out there that will inevitably fail the check.
It's also in the way Apple allows its use; however, that's not as strong. Apple has positioned this as a way to prevent CAPTCHAs from reaching the customer. Apple is interceding with the provider and saying: hey, trust us, we can prove it's a human, because they're on an iPhone. They're not positioning this as a way to deny service, only to speed up access.
Totally agreed. Apple's marketing is simply the best. Google on the other hand, repeatedly let its reputation erode by a loud minority of Google haters without doing any PR to control the narrative. As if they still believe that one echo of "don't be evil" could still reverberate after twenty years.
Yup -- you see this same effect on articles about Apple's draconian OS policies.
When Microsoft bundled IE with Windows that was terrible. But Apple bundling Safari and locking out competing browsers? That's just what's best for the customer.
Not going to argue against your gist (Apple is a megacorp and we should be wary of them as with all such nearly-unaccountable entities) but which definition of monopoly includes profit share?
The difference can probably be explained by nobody having any idea that Apple already implemented a version of this.
And on top of that, Google's reputation of brutal power grabs on the web may make a difference in tone.
Importantly though, we shouldn't frame this as Apple vs Google. I can assure you that both companies absolutely hate the open web and open computing in general.
Neither seems trustworthy. Google because, I mean, Google, they’ve repeatedly demonstrated that they can’t be trusted, whether regarding privacy or LTS, and Apple because it’s just “trust the device”, which can obviously be gamed by developer accounts or other hacks like simply walking past a rack of devices confirming touch or face recognition.
There’s no stopping fraudulent behaviour, only defensive barriers to dissuade the non-nation-state actors.
Apple is big but insular. I never bought any Apple device so I never had Safari (*) and any attestation affecting Apple's customers didn't affect me. Web sites could harass them but not everybody else. Google is a different beast. Their browser engine can't run on iOS but it runs on Macs and on every other major OS.
(*) there was a Safari for Windows in the early days of the iPhone. It had a Mac UI which was horrible to look at inside Windows. Maybe it was from the time Jobs thought web sites were the way to go for the iPhone. Then he realized that an app store would make a lot of money. Nobody gets everything right all the time.
Apple released Safari on Windows when a majority of Windows users were still using IE7. It was a genuine play for a slice of the Windows browser market and its search revenue. It had nothing to do with web apps. When it happened, Jobs positioned its release as a way to make Safari even better, since more people would use it and report issues and bugs.
“We think Windows users are going to be really impressed when they see how fast and intuitive web browsing can be with Safari”, said Steve Jobs, Apple's CEO. “Hundreds of millions of Windows users already use iTunes, and we look forward to turning them on to Safari's superior browsing experience too”.
History demonstrates that, actually, they weren't, and Apple gave up quickly.
Interestingly, they also published some benchmarks:
> [Safari] now it's the fastest browser on Windows, loading and drawing web pages up to twice as fast as Microsoft Internet Explorer 7 and up to 1.6 times faster than Mozilla Firefox 2 (*)
but reading on, we learn that they benchmarked Safari on a Mac and the other two browsers on a Windows machine.
As the "kind of get both sides" person, I come to defend my honour. Honestly, I see both (the Apple implementation and the Google about-to-be) as pretty much the same thing, just going about it in Apple and Google style.
It's not a fan boy thing (and I would hate to be the guy whose github was filled with anger yesterday - it's not his problem, and he should be left alone).
It's just a marker on the journey - we know the rough destination.
(Also, I suspect the second time you hear bad news the community has had time to adjust - FU is usually a first-time emotional reaction. Wow, I really am into "see the good in both sides".)
Apple makes money by selling goods and services. Google is in the business of targeted advertising. So they have diametrically opposed incentives wrt privacy.
I’m always amazed when people don’t get this. It’s like they’ve bought into Google as part of their identity so much that they are in denial with regards to how Google make their money.
point taken, but as the blog post here and many other comments have pointed out, there is a very sharp qualitative difference between just-Apple and Apple+Google doing this. Apple alone has a minority market share but together they cover enough of the market that many websites would be tempted to only allow connections from trusted clients.
My impression is that Google is, once again, just slow at execution. They probably do want to integrate PATs together with their Google One VPN product.
Since when have the standards been finalized before the early implementation? Much of the web has been browsers trying things out and seeing if they can convince others to join them (or, in olden times, following MS's lead as with AJAX).
This might be where the internet really gets forked, as it's been predicted over and over since the '90s.
On one side, we'll have a "clean", authority-sanctioned "corpweb", where everyone is ID'ed to the wazoo; on the other, a more casual "greynet" galaxy of porn and decentralized communities will likely emerge, once all tinkerers get pushed out of corpnet. It could be an interesting opportunity to reboot a few long-lost dreams.
IMO we've had a version of this fork for several years now, though it was more at the social layer. I've imagined it as a social super-structure of the internet, basically a bubble that represents "society". Microversions of it have existed since the 90's but really came to fruition with the rise of social media (myspace, facebook, twitter, etc). I don't think it's a coincidence that "cancel culture" came soon after, because once you have a virtualized public sphere, it's now a matter of deciding who/what belongs.
It reminds me of the "cozy web" concept, which is defined as walled gardens with community gatekeepers, such as group chats. Small bubbles where you feel safe and not exposed to outside trolls, corporate or advertisers.
Internet anarchists getting excited about the prospect of forking the Internet feels a lot like when a lot of preppers got excited about the potential breakdown of society when Covid hit.
“Finally I can put all my skills to the test, which people have been teasing me about for so long.”
In both cases, this attitude has the problem that they ignore the vast majority of people who would suffer under the new order. Very few people would find their way out of the corporate walled gardens and into the free information superhighway.
> Internet anarchists getting excited about the prospect of forking the Internet feels a lot like when a lot of preppies got excited about the potential breakdown of society when Covid hit.
> “Finally I can put all my skills to the test, which people have been teasing me about for so long.”
Veering offtopic a little, but your comment reminded me, hilariously, that after Stay-At-Home was mandated, my older, "prepper" friends and acquaintances were generally the first to crack and start complaining on Facebook about how unfair it was that they were expected to just stay home in their bunkers and not go to bars and shop for their khakis. So much for the rugged self-reliance they loved to crow about!
I can imagine the Internet Anarchists behaving the same way. They'll be, in reality, the first to sign up for the AmazoGoogoMetaAppleInternet so they can keep posting to Social Media and doing their online shopping.
I'm absolutely willing to believe that it wasn't intentional, but you essentially said that being anti-lockdown means that you're weak, right? Regardless of whether that's a true statement, it definitely reads like a culture war sneer.
The US militia that everyone talks about in relation to the 2nd Amendment wasn't meant to protect the US from Canada. It was in response to a feared slave uprising from within.
The constitution was a set of principles meant to extend beyond the times they were written in.
As flawed as the founding fathers were, they were probably smart enough to understand that the nature of the threats the country would face were likely to evolve over time.
It is 'funny' how much the supposedly anti-government freedum fighters in the US are 100% behind 'back the blue' and the government agency that has the monopoly on the 'legal' use of physical force/violence in society.
Like the Native Americans who were actively raiding settlements and engaging in pretty brutal (by both parties) conflict with the settlers, who often responded by raising local militias from normal folk who were then tasked with attacking the Native Americans.
But no, instead we should pretend it had something to do with shooting a president you don't like. Because that's what "freedom from tyranny" is.
It's a bit sad that a "daytime hacker, night time musician" from Sweden sees "internet anarchists" as something of a slur. I guess punk really is dead.
Besides, it's not about being excited as much as trying to find silver linings in a rapidly deteriorating environment.
Maybe your last sentence was more about some form of nostalgia for a long-lost decentralized Internet than excitedness?
I can certainly sympathize, but I think the best path forward for any anarchist would be to fight the attestation initiatives fiercely, rather than to resign and say “maybe we could have a good web again if we start over fresh”.
That aside, I’m not sure what you are saying with that comment about myself. I don’t think it serves the discussion.
I didn't define myself as an anarchist, you labelled me as such and drew some unflattering connotations to the term. Generally speaking, I don't deal in false dichotomies anyway. I was just saying that this could be the long-predicted inflection point.
Absolutely. Smarter people than me have predicted it at various points over the last 30 years, and it has yet to fully come to pass. We are seeing pieces slowly coming together, though.
> More likely is a bifurcation of the internet between West and BRICS
You are using BRICS very liberally here - I don't think Brazil is particularly internet-hostile, and South Africans have more important issues to think about.
Is there a movement towards a more balkanized network? Absolutely - most European countries now have individual DNS blacklists (the UK one is basically at the full discretion of an opaque paralegal entity that answers to no-one); Turkey, Iran, and every other Middle-Eastern or South-Asian country (including Israel, India, Pakistan) can and do shut down their networks whenever they see fit; China have had their Great Firewall since Day 1; and Russia, well, they do what Putin likes to do on any given day.
None of that is particularly new though, it's just the usual autocratic crap. Corpweb will be much more cyberpunk.
I read "preppies" as something very different from what you intended, at first, and was really confused, but also got some really funny mental images out of it.
Popped-collar-lacoste-polo madras-shorts-wearing dudes whose only survival skills are knot-tying, trying to get by in the apocalypse. LOL.
Sure, and I think that supports my argument. The law of least resistance rules on the mainstream Internet. Consumers already will pay for devices that surveil them and use their private data to sell them more crap they do not need.
If we expect consumers to choose the open, anarchist, Internet over the corporate clean Internet, then we expect too much of them.
Story idea: This will probably result in the "corpweb" not being archived, and thus being forgotten over the coming centuries (given we don't hit a filter), while the "greynet" survives the ages and only vague echoes of the "corpweb" are preserved through it. Digital archaeologists will struggle to uncover the "secrets" of the lost wisdom hidden in the "corpweb".
Probably already written by someone, but it fits, I guess.
The problem I have with this web attestation concept generally is that I really want it _inside_ my shiny SSO-everywhere-Zero-Trust-at-the-edge-mTLS-everywhere business network.
I also kind of want it in the public-cloud-meets-private-use home environment (that is, my Cloudflare Access tunnels and MS365 business tenant I use for private stuff).
I don’t want it to touch my personal browsing experience or in any way involved in my personal-use browser environments.
These are effectively opposed desires at this point, and it’s a cat-out-of-the-bag technology.
The fundamental problem with current remote attestation schemes is the corporate-owned attestation key baked in at the factory [0]. This allows the manufacturer to create a known class of attestation keys that correspond to their physical devices, which is what prevents a user from just generating mock attestations when needed.
If manufacturers were prohibited from creating these privileged keys [1], then the uniform-corporate-control attestation fears would mostly vanish, while your use cases would remain.
A business looking to secure employee devices could record the attestation key of each laptop in their fleet. Cloud host auditors could do the same thing to all their hardware. Whereas arbitrary banks couldn't demand that your hardware betray what software you're running, since they'd have no way of tying the attestation key to a known instance of hardware.
(The intuition here is similar to secure boot, and what is required for good user-empowering secure boot versus evil corporate-empowering secure boot. Because they're roughly duals.)
[0] actually it's something like a chained corporate signing key that signs any attestation key generated on the hardware, but same effect.
[1] or if the user could import/export any on-chip attestation keys via a suitable maintenance mode. Exporting would need a significant delay of sitting in maintenance mode to protect against evil maid attacks and the like.
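The distinction between a factory-baked key and a user-generated one can be sketched in a toy model. This is illustrative only: HMAC with a shared secret stands in for the asymmetric signatures real attestation hardware uses (in reality a verifier would hold only the manufacturer's public key), and all names are made up.

```python
import hashlib
import hmac
import os

# Toy stand-in for the corporate attestation key baked in at the factory.
MANUFACTURER_KEY = os.urandom(32)

def device_attest(software_hash: bytes) -> bytes:
    """A genuine device signs its measured software state with the factory key."""
    return hmac.new(MANUFACTURER_KEY, software_hash, hashlib.sha256).digest()

def verifier_check(software_hash: bytes, signature: bytes) -> bool:
    """Any remote party that trusts the manufacturer can verify the claim."""
    expected = hmac.new(MANUFACTURER_KEY, software_hash, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

# A genuine device passes; an attestation made with a user-generated key is
# distinguishable -- exactly the property that blocks mock attestations.
genuine = device_attest(b"approved-os-image")
mocked = hmac.new(os.urandom(32), b"approved-os-image", hashlib.sha256).digest()

print(verifier_check(b"approved-os-image", genuine))  # True
print(verifier_check(b"approved-os-image", mocked))   # False
```

The asymmetry is the whole point: only keys traceable to the factory verify, so the owner cannot substitute their own.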
I agree with you, but I’m not sure making it taboo/criminal/a regulatory violation/prohibited/etc for device manufacturers to embed keys at manufacturing and enabling the resulting attestation capabilities is the right move either.
If I’m Apple, or Google, or Samsung, then I have a genuine interest in device attestation in my own ecosystem for various good reasons. Apple makes extensive use of this capability in servicing, for example. That makes sense to me.
That’s what I mean by a cat-out-of-the-bag technology. Threat actors, counterfeits, and exploits being what they are in this era, it’s almost an inevitability that these capabilities become a sort of generalized device hygiene check. Device manufacturers don’t have to provide these APIs of course, or allow the use of their device attestation mechanisms, but they’d be pressured to by industry anyway. And then we would have something else.
I do like your idea of having the platform bring keys to the table and requiring some kind of admin privileged action to make them useful. But I wonder if we had started that way with web attestation, would it inevitably turn into this anyway?
There are always genuine interests for various good reasons. The problem is that the limitless logic of software creates a power dynamic of all or nothing. Situations are comprised of multiple parties, and one party's "good reasons" ends up creating terrible results for the other parties. For example, Apple's attestation on hardware they produced now becomes a method to deny you the ability to replace part of your phone with an aftermarket part, or to unjustly deny warranty service for an unrelated problem.
So no, I do not buy the argument that we should just let manufacturers implement increasingly invasive privileged backdoors into the hardware they make, as if it's inevitable. With the mass-production economics of electronics manufacturing, the end result of that road can only be extreme centralization, where a handful of companies outright control effectively all computing devices. If we want to live in a free society, this must not be allowed to happen!
> But I wonder if we had started that way with web attestation, would it inevitably turn into this anyway?
The main threat with web attestation is that a significant number of devices/customers/visitors are presumed to have the capability, so a company can assert that all users must have this capability, forgoing only a small amount of business (similar to how they've done with snake oil 2FA and VOIP phone numbers, CAPTCHAs for browsing from less-trackable IPs, etc.). So creating some friction such that most devices don't by default come with the capability to betray their users would likely be enough to prevent the dynamic from taking off.
But ultimately, the point of being able to export attestation keys from a device is so that the owner of a device can always choose to forgo hardware attestation and perform mock attestations in their place, regardless of having been coerced into enrolling their device into an attestation scheme.
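Continuing the toy model from above (HMAC as a stand-in for real signatures, all names hypothetical): once the owner can export the device's attestation key, they can produce "mock" attestations for any claimed software state, even if they were coerced into enrolling the device.

```python
import hashlib
import hmac
import os

# Key generated on-device, then exported by the owner via maintenance mode.
attestation_key = os.urandom(32)

def attest(key: bytes, claimed_state: bytes) -> bytes:
    """Sign a claimed software state with an attestation key (toy HMAC model)."""
    return hmac.new(key, claimed_state, hashlib.sha256).digest()

def verify(key: bytes, claimed_state: bytes, sig: bytes) -> bool:
    """A relying party that recorded the key at enrollment checks the claim."""
    return hmac.compare_digest(attest(key, claimed_state), sig)

# The relying party accepts both the truthful attestation and the owner's
# mock one, because nothing ties the key to specific factory hardware --
# the owner, not the manufacturer, stays in control.
real = attest(attestation_key, b"what is actually running")
mock = attest(attestation_key, b"what the relying party wants to hear")

print(verify(attestation_key, b"what is actually running", real))              # True
print(verify(attestation_key, b"what the relying party wants to hear", mock))  # True
```

This is the dual of the factory-key case: without a manufacturer-privileged key class, attestation degenerates into something the device owner can always satisfy on their own terms.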
I can also imagine an IPv7 with ephemeral addresses based on private keys (like on yggdrasil), and a way for the browser to remember keys if wanted by the user. Authenticate sessions with the "IP address".
Client certs don't guarantee that there is not a rootkit running in kernel space sniffing session tokens, other credentials or data in general from user-space memory.
Attestation does a reasonably good job at that, as you now need a kernel or bootloader exploit.
A "rootkit in kernel space" already requires a kernel exploit, unless what you are really up in arms about is a lack of a verified boot chain (which absolutely does not require remote attestation).
> A "rootkit in kernel space" already requires a kernel exploit
On desktop? Nope, which is the point. Placing a piece of malware is easy without a kernel exploit. On standard Linux distributions that do not use dm-verity and friends, local root is enough - modify the kernel image or initrd in /boot, and you can do whatever you want with very few ways for a system administrator to detect it upon the next boot. The challenge more is getting local root in the first place, especially as a lot of systems now use selinux or at least have daemons drop privileges.
Windows is a bit harder since Windows refuses to load unsigned drivers since the Win7 x64 days (x86 IIRC didn't mandate the checks), but that's not as much of a hurdle as most think - just look at the boatload of cases where someone managed to steal (or acquire legitimately) a signing certificate to ship malware. Getting local root here is probably the easiest of all three OSes IMO, given the absurd amount of update helpers and other bloatware that got caught allowing privilege escalation regularly.
The hardest IMO/E is macOS, where you have to manually boot to recovery to deactivate SIP and they've been phasing out kexts pretty much already, and you get a crapton of very strong warnings if you mess around with them - you have to manually load them.
With attestation and code-signing done right, it's all but impossible to get your code running in kernel space on Linux and macOS without a kernel exploit, the achilles heel will always be who gets signing certificates that allow loading a module.
As I said: "unless what you are really up in arms about is a lack of a verified boot chain (which absolutely does not require remote attestation)". None of what you are talking about requires the attestation piece, only the verified boot chain (which supports codesign through to whatever layer you wish to protect).
The goal of remote attestation is only to be able to prove to a third party that your device is "secured", which does not benefit the user in any way other than awkward/indirect stuff like where in the Google proposal they argue that users have a "need" to prove to a website that they saw an ad (to get free content).
Verified boot chains are one thing, but say you're a bank and you wish to reduce the rate of people falling victim to malware that uses kernel-level privileges to snoop out credentials. The user benefits (at least from your perspective as the bank) from being less impacted by fraud as the banking website will no longer even let the user enter their credentials.
Either you build a massive database of "known good" combinations of hardware, OS, kernel modules versions and corresponding TPM checksums, or you leave that job to a third party - and that is what remote attestation is at its core. Apple has it the easiest there, they control everything in the entire path, while Google has to deal with a myriad of device manufacturers.
Note I massively dislike the path that more and more applications take to restrict user freedom, but I do see why corporations find it appealing.
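The "known good" database approach described above can be sketched as an allowlist mapping (hardware, OS) combinations to expected measurement digests. All entries, names, and digests here are made up; a real system would compare TPM quotes over PCR values, not plain hashes.

```python
import hashlib

# Hypothetical allowlist of acceptable (hardware, OS) measurement combinations.
# Every supported device/OS/module combination must be enumerated, which is
# why this is hard for Google's myriad manufacturers and easy for Apple.
KNOWN_GOOD = {
    ("Pixel-7", "android-14"): hashlib.sha256(b"pixel7-android14-stock-boot").hexdigest(),
    ("ThinkPad-X1", "fedora-38"): hashlib.sha256(b"x1-fedora38-signed-boot").hexdigest(),
}

def attestation_ok(hw: str, os_ver: str, reported_digest: str) -> bool:
    """Accept only measurements that match a known-good combination."""
    expected = KNOWN_GOOD.get((hw, os_ver))
    return expected is not None and expected == reported_digest

stock = hashlib.sha256(b"pixel7-android14-stock-boot").hexdigest()
custom = hashlib.sha256(b"pixel7-android14-custom-rom").hexdigest()
print(attestation_ok("Pixel-7", "android-14", stock))   # True
print(attestation_ok("Pixel-7", "android-14", custom))  # False: custom ROMs fall off the allowlist
```

The side effect is visible even in the toy: anything not pre-enumerated, including legitimate user modifications, fails the check.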
You can build devices around being unbreachable and self-attesting. Go build an SBC and sink it in a block of epoxy.
But they also want the appeal of the open, hackable world-- cheap kit that's advancing quickly, commodity technology and infrastructure.
I am actually sort of disappointed we never ended up with a world of special-purpose sealed devices-- put a proper payment terminal on everyone's desk instead of trusting nobody slapped a keylogger into your browser while you're typing card numbers, for example.
If that happens, governments will just order ISPs and mobile carriers to block greynet access.
As cool as 90's cyberpunk dreams are, to me they always seem to ignore the physical reality that your connection to "the net" always has to go through the chokepoint of an ISP, and that this ultimately is an indissoluble barrier on just how anti-establishment the internet can ultimately be.
Of course that exploited the pervasive POTS network over which we could tunnel ISP traffic.
What is the replacement? Mesh wifi? Guerilla fiber deployments? Or just a bunch of VPN tunnel brokers trying to evade blocklists on the corp-approved ISPs you have to keep using in place of POTS?
There are already a lot of community driven meshed networks all over the world.
It doesn't really take off because so far we are pretty much free to do what we want with our ISP connection. Some countries impose DNS censorship, but apart from the few dictatorships that run their own great firewall, it is light censorship, as they let people query the DNS server they want.
yep, I think it is a miracle that we can still reach almost any IP from any other one. Only a matter of time until this goes away, imo. I'm already increasingly forced to bounce around VPNs to access websites from different countries.
Future routing protocols (already under development) that include end-to-end route verification will be the end of it, I expect. "Democratizing" part of the Great Firewall and building it into the 'net itself. Only way around it will be proxying, and that'll risk getting the proxying-IP partially blocked, if it's caught or any of the traffic is deemed "bad", and requires that you can route to the proxy itself in the first place. Won't be entirely immune to circumvention, of course, but will make selective and region-blocks and throttling more common and more precise, and significantly raise the bar for getting around it (which'll probably be illegal nearly everywhere, too).
... until your phone and OS web browser will refuse to connect to the "greynet", or that just attempting to do so has you categorized as being a potential outlaw.
Still, most consumers are unwilling to pay a premium for open hardware and software. A thinner phone with a faster experience and better battery life seems to be all we care about. Cf. how repairable devices are a marginal part of the smartphone market.
Most people are not even willing to pay a few cents extra for a banana that didn't give plantation workers cancer.
Right? In the cool indie web, who actually cares about mass adoption? There is no way of saying it without being elitist, but sometimes you just don't want everyone in your space.
That already exists in some form today, but you’d regardless have to use an attested device to be able to partake in basic societal functions such as filing taxes, logging into your bank account, and so on.
But the question is, who and what is going to be allowed to connect to the corpweb.
Running an adblocker? Sorry. Using a non-Chromium based browser? Nuh-uh. Running an old machine with no TPM? Sucks to be you. Running a Linux distribution? Tough luck.
Sure, you can have fun with your free decentralized web. But at the end of the day even tinkerers have to log into their gov website to pay their taxes.
You can compartmentalize such activities to a dedicated device, much like one may need a Windows device for certain software or an Android device for features in banking apps that aren't available on the website.
If it comes to this then desktop Linux will go from viable Windows alternative to obscure hobby OS overnight. The inconvenience factor of having your life split between two devices is just too high.
This is the same way that SafetyNet killed alternative ROMs on Android.
Coffee just shot across my desk reading this. You are too deep in a bubble to realize that to 99.7% of people desktop Linux is an "obscure hobby OS", on the off chance they even know what it is.
You're talking about the popularity of Linux as an alternative, I'm talking about its viability. It's viable because it runs web browsers just as well as Windows and that's all the average user cares about.
Regardless, the point is that any alternative, Linux or not, will be dead in the water once WEI rolls out. Doesn't matter how good your OS is, if it can't access the mainstream web it will die in obscurity. The same way that Windows on phones died because it couldn't get all the useful apps.
> on the other, a more casual "greynet" galaxy of porn and decentralized communities will likely emerge, once all tinkerers get pushed out of corpnet.
The entire internet is "corpnet." For this fantasy freezone to happen, actual alternative physical networks would have to be built, the parts that those networks require will have to be sold to consumers without the hardware being locked down or nerfed, and if authorities do not approve of these networks, they'll have to be invisible.
I don't see a technical answer to that. Sneakernets maybe, but dogs can smell hard drives. Certainly not anything wireless, unless there's some sort of geometric arrangement or algorithm that allows them to hide their locations in other signals.
I'm of the clearly minority opinion that the people who run totalitarian governments are neither stupid nor weak. I also believe that the fantasy that there's always going to be an answer (that always looks like teen hackers dressed up like 90s punks in a Gibson Blade Runner urbanscape theme park) is a drug that allows people to take our real situation less seriously.
> actual alternative physical networks would have to be built
I'd say this is an unfounded assumption. Given a choice of two massive changes that I could snap my fingers and will into existence:
1. Grassroots community and individual-run mesh networks of individual dwellings, not controlled by corporate entities, running IP/DNS/HTTPS and other native protocols already in widespread use.
2. The same corporate-controlled physical Internet we have right now, but with widespread use of protocols that allow for decentralized permissionless identities (nyms), independent of the centrally-administered IP/DNS namespaces. Most traffic going to individually-run VPSs or consumer connections.
I would choose #2 in a heartbeat. The only reason I can see that we might need #1 is if #2 failed to reach critical mass before the ISPs clamped down on non-corporate-endpoint traffic while it still affected only a minority of users. It's also not clear how the networks in #1 wouldn't just borg back up into a corporate Ma Bell, or at the very least succumb to government regulation (each a different avenue for authoritarianism).
Could the non-authority web use counter-attestation? That is, if your attestation comes back as valid, you can't visit the "cool kids" web. If there were sufficiently interesting content, maybe it could break attestation.
> I predict this greynet will be a cesspool of porn, revenge porn, misogyny, racism, bots, and scams.
Probably.
But, the corpweb will be all that plus being a cesspit of unblockable ads and unblockable corporate surveillance.
And, if you are the type to maintain an old school website (not an ad supported SEO blog spam site), I'd guess you'd be more inclined to support the freedom net aka grey net.
This is what saddens me so much about darknet/meshnet/"open confederated systems". They seem to attract the people and content that are not welcome anywhere, and so they show up here because they got kicked out of everywhere else. With the early web it was first research institutions and then hobbyists and early adopters. Nowadays you can't run your bar without some Neo-Nazis showing up and trying to make your bar their local hangout.
Even if you run a hobby site, it's way easier to do so with attestation, especially if that attestation allows you to uniquely identify the device making a request. It would end DDoS and make bans much stickier than they are today, making all sorts of problematic content easier to deal with.
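A toy sketch of why that stickiness would follow from a stable device identifier. Note that this `device_id` is hypothetical: neither WEI nor Apple's tokens actually expose a per-device ID to sites; this only illustrates the claimed benefit.

```python
# Toy model: bans keyed on a hypothetical stable, attested device ID
# vs. bans keyed on IP. Current attestation proposals do not expose
# such an ID; this only illustrates why bans would "stick" if they did.

class Forum:
    def __init__(self):
        self.banned_ips = set()
        self.banned_devices = set()

    def ban(self, ip, device_id):
        self.banned_ips.add(ip)
        self.banned_devices.add(device_id)

    def can_post(self, ip, device_id):
        return ip not in self.banned_ips and device_id not in self.banned_devices

forum = Forum()
forum.ban(ip="203.0.113.7", device_id="dev-abc")

# Rotating IPs defeats the IP ban, but the attested ID follows the hardware:
print(forum.can_post("198.51.100.9", "dev-abc"))  # False: ban sticks
print(forum.can_post("198.51.100.9", "dev-xyz"))  # True: needs new hardware
```

Evading the ban then means buying a new attested device rather than grabbing a new IP, which is exactly the cost shift being argued about here.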
IMHO, there won't be a split like this if attestation or similar proposals come to pass. Simply put, the number of problems that come with anonymous users dwarfs whatever legitimate benefits anonymity provides. Everyone will build sites using these schemes because of the problems they solve, and they will ignore the segment that refuses to use them, because that segment will be small and a significant chunk of it will use that anonymity to do bad things you don't want on your site.
Maybe I'm wrong, but Web Attestation will also be a death knell for Linux devices (not Android/Chrome OS) as far as using them as equal clients on the Web goes. They're simply too diverse and 'hackable' a platform for remote attestation to work reliably, and thus they'll be excluded altogether (except a few 'blessed' distros that will then become industry controlled, and not Linux in spirit anymore).
So far, Private Access Tokens are not widely adopted, so you can get a feel for the potential Linux experience by browsing the web with iCloud Private Relay enabled. This trips almost every website's anti-spam classifier, and you end up having to do 3-5 captchas to access anything protected by one. Wikipedia also blocks you from editing: https://meta.wikimedia.org/wiki/Talk:Apple_iCloud_Private_Re....
IME, browsing the web with iCloud Private Relay is much better than Tor, since your client is not outright blocked by websites. I have not browsed the web much behind a VPN, so I can't compare the experiences.
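For what it's worth, the mechanism behind PATs is the Privacy Pass idea: an attester vouches for the device and issues single-use tokens that carry no user identity, and the origin redeems a token instead of showing a captcha. A toy sketch follows; real PATs use RSA blind signatures so issuance and redemption are unlinkable, and the plain shared ledger here is only an illustrative stand-in.

```python
import secrets

# Toy model of the Privacy Pass idea behind Private Access Tokens.
# Real PATs use RSA blind signatures so the issuer cannot link token
# issuance to redemption; the shared ledger below is a stand-in.

class Issuer:
    def __init__(self):
        self.valid = set()

    def issue(self, device_passed_attestation):
        # The attester (e.g. Apple) only vouches for the device;
        # the token itself carries no user identity.
        if not device_passed_attestation:
            return None
        token = secrets.token_hex(16)
        self.valid.add(token)
        return token

class Origin:
    def __init__(self, issuer):
        self.issuer = issuer

    def handle_request(self, token):
        if token in self.issuer.valid:
            self.issuer.valid.discard(token)  # tokens are single-use
            return "200 OK"
        return "challenge: CAPTCHA"  # the Private Relay experience today

issuer = Issuer()
origin = Origin(issuer)
t = issuer.issue(True)
print(origin.handle_request(t))     # 200 OK
print(origin.handle_request(t))     # challenge: CAPTCHA (replay rejected)
print(origin.handle_request(None))  # challenge: CAPTCHA (no token at all)
```

The last two cases are the Linux/Tor/Private Relay experience: no acceptable token, so every request falls back to the captcha path.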
A VPN works for almost any internet service, not just web browsing.
A VPN can be bought from a company outside the Five Eyes.
Tor is much better at hiding your browser footprint, and thus at anonymous browsing across sites, as long as you reconnect often and don't change the default settings.
Playing devil's advocate: how else do you prevent spam without requiring a login on every single web page? Especially in a world of AI-powered spam that can be indistinguishable from humans, automated at scale, and able to solve captchas.
Spam destroys everything. The open web has been at war with it forever, and soon it will win just like it has won in every other domain that is not completely locked down.
I love the fediverse but I fully expect it be destroyed by spam as soon as it gets big and influential enough to be a juicy target.
The Internet is a dark forest. The future is private encrypted networks, private forums, etc.
Spam even exists where logins ARE required. Look at Reddit or Twitter/X and any web-accessible forum where logins are required. Lots of spam everywhere.
I don't think attestation will prevent this. It does, however, prevent scraping if attestation is required to even view content.
What would prevent bots from using “approved” attester devices to navigate and scrape? Is attestation done by checking what local processes are running?
Based on where the MAU counts are, by your criteria the Fediverse will be safe from spam forever. Which falls into your last point, it's essentially a set of private forums, that interconnect. It's kind of ironic that the idea of the Fediverse apparently being beyond the neuron activation threshold of most people ends up being an effective filter.
I think Private Access Tokens are a reasonable design, and they should be standardized with multiple attestation providers that any client can use. That seems like it would move the web forward, unlike making no headway at all on spam, or on the fingerprinting/tracking that gets used as an anti-spam measure.
I slightly suspect that the only platforms that will actually implement Web Attestation are the ones I'm trying to remove myself from, so I secretly[1] hope this is the catalyst I need to stop going on crappy social networks and video platforms.
I apparently don't have the willpower to stop going on these sites, so maybe stopping me loading content from the other side is exactly what I need.
[1] Not so secretly now I've mentioned it here I suppose.
I'm hoping for a global catastrophe leading to the end of money and the rise of bartering for fuel in return for food and water with roving motorcycle gangs.
There's a lot of competition in the banking sector, so I don't think banks can afford to start telling customers that they need specific devices to access their online services.
The banking sector is EXACTLY where "cyber 'security'" and "compliance" will mandate for this to be implemented.
When I worked at a bank at $oldjob, compliance mandated that we have a full-blown antivirus engine (from Microsoft or McAfee, "at your option") deployed in quasi-ephemeral container images.
It does not have to be reasonable, it doesn't have to be a net positive - it just has to tick some box on some compliance sheet for this to be required, and I will never again be able to perform a banking transaction from my personal computer or degoogled phone again.
So what, let's not worry about it until after it's implemented, when it's a 10,000 kg gorilla, instead of trying to nip it in the bud now? Is the world going to end tomorrow, so let's just eat, drink and be merry, for tomorrow we die?
Now is the time to fight this. It will be impossible to unravel once it's been implemented.
> Most banks have barely implemented 2FA, and when they have, they implement SMS.
One reason I slightly swallow my guilt at having a savings account with Goldman Sachs (marcus.com) is that they offer email-based 2FA. I closed my savings accounts at Chase when they enforced SMS-only 2FA.
BTW, I feel slightly less guilty about saving with these banks instead of my actual credit union after my brother-in-law (who has been in the CU world for decades) told me that if a credit union can't offer competitive savings rates, it means they are lacking in opportunities for significant local lending.
That's the problem. They do implement things, and they do them in the worst possible way.
My bank forces me to 2FA through SMS when I connect from a new IP range. This means I can't do any banking with them when I'm outside my country.
I wish they just didn't implement any form of 2FA instead. That would be better than the current situation.
> South Korea knew it had an ActiveX problem way back in 2015, because even then the need to use ActiveX to do business on local websites irked outsiders.
> For locals, the requirement to run the code was so annoying that getting rid of it became an election promise at the nation’s 2017 presidential election.
> That promise has now been delivered: the nation's Ministry of Science and ICT today (2020) announced the service's planned demise.
Banks might not, but governments may come to a similar idea, and tell the banks to tell you.
> I don't think banks can afford to start telling customers that they need specific devices to access their online services.
They already make demands.
Two of the very large national banks I have accounts with restrict your access if you're not even using the right browser version. One puts a warning in every page. The other won't even let you log in.
To make the second one even worse, it requires a very specific version, not just > $version, so if I update my OS too quickly, it won't let me in.
As far as I know, it's extremely common for banking apps to implement integrity attestation on android. My bank's app only shows a warning message and doesn't restrict anything otherwise, but I've heard plenty of stories of other banking apps that refuse to run.
It's already happening on smartphones with the proliferation of SafetyNet requirements. Once a few generations of Android smartphones have passed and most current devices support the required hardware, all banks can just make SafetyNet a hard requirement and the average non-technical user will be none the wiser.
The same thing can happen on desktop. In fact I'd say it's already happening, with Microsoft making TPM2.0 a hard requirement for Windows. The frog is slowly being boiled.
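To make the boiling-frog point concrete: on the server side, the distance between "show a warning" and "hard requirement" is a single flag. A sketch of the gate a banking backend might run on a decoded integrity verdict follows; the field names mirror Google's Play Integrity verdict JSON, but treat them as illustrative, not an integration guide.

```python
# Sketch of the server-side gate a banking backend might run on a
# decoded device-integrity verdict. Field names mirror Google's Play
# Integrity verdict JSON, but treat this as illustrative only.

def device_allowed(verdict, hard_requirement):
    labels = (verdict.get("deviceIntegrity", {})
                     .get("deviceRecognitionVerdict", []))
    meets = "MEETS_DEVICE_INTEGRITY" in labels
    # Today many apps merely warn; flipping one flag turns the same
    # check into a hard lockout for rooted or unattested devices.
    return meets or not hard_requirement

rooted = {"deviceIntegrity": {"deviceRecognitionVerdict": []}}
stock = {"deviceIntegrity":
         {"deviceRecognitionVerdict": ["MEETS_DEVICE_INTEGRITY"]}}

print(device_allowed(rooted, hard_requirement=False))  # True: warning only
print(device_allowed(rooted, hard_requirement=True))   # False: locked out
print(device_allowed(stock, hard_requirement=True))    # True
```

Nothing about the devices changes when the flag flips; only the policy does, which is why "once enough devices support it" is the whole ballgame.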
If this happens, I expect the majority of Windows and Android devices to stop working too. They are also a diverse and hackable platform that is apparently insufficient for a future where I have to attest to owning certain hardware.
> except a few 'blessed' distros that will then become industry controlled, and not Linux in spirit anymore
You know, I hear this a lot but seldom hear the details of how it might happen. Industry-controlled UNIX is the reason Linux exists - if you take the spirit away from Linux, it gets forked into another community project. Unless you're stripping it of its GPL license, Linux will be "Linux in spirit" until it stops being used altogether.
New Android phones have hardware-backed SafetyNet, new Windows devices have Trusted Boot (not to be confused with Secure Boot).
Both can and will be used to attest the browser environment. Linux devices will get hit (unless I guess we see locked down signed kernels, Chromebook-like things).
It's really a slowly-boiling-frog situation playing out over the past 20 years. Ever since the Aegis bootloader outlined how trusted computing would be built, with predictions of Internet access being allowed only to attested devices/people, we seem to have been at the brink of somebody flipping the switch. Other predictions included historical data and web content being changed as politically convenient, with nobody able to access or view the old original anymore because only attested devices would be available.
If people can't use their preferred Linux distros to do banking, or can't connect to social networks, email providers, music streaming services and so on, this will in practice force them to switch distros, which would eventually concentrate control over what goes into development (and what doesn't) in a few distros.
Look at systemd and its history for how that kind of power gets held.
True, but virtualization is big enough (including Microsoft's own Windows 365 offerings) that passing "trust" down into VMs will be done. And with SEV there isn't even a way to tamper with things after the attestation process has been completed.
Even assuming the VM workaround works, this would be catastrophic from a usability standpoint.
Linux has been making giant strides towards increasing accessibility and lowering the friction of adopting it as a daily driver, while preserving the freedom to choose any distro you want.
Forcing new users to babysit a second installation in a special VM would be wiping out decades of progress.
> Industry-controlled UNIX is the reason Linux exists
Linux only exists because it is free and it runs free apps for every category of keyboard-driven task a typical user would want.
The answer to my question of how a predator like IBM is going to take out the other non-RHEL based distros is starting to come into focus. This should help Ubuntu get the Mint monkey off its back too.
I assume that IT departments in most orgs will just swap those attestation tokens for a generic "ACME Corp" token at the network layer. And I expect that home routers will give us that option as well.
> That said, it's not as dangerous as the Google proposal, simply because Safari isn't the dominant browser. Right now, Safari has around 20% market share in browsers (25% on mobile, and 15% on desktop), while Chrome is comfortably above 60% everywhere, with Chromium more generally (Brave, Edge, Opera, Samsung Internet, etc) about 10% above that.
I don't agree, in fact I think it's equally as bad for Apple to do it as Google. Apple has completely let us down. If Google forced it through but Apple refused, it would never be practical to enforce it. The numbers may not be as high, but they're plenty high enough that you couldn't cut all iDevices out. Apple and Google and Microsoft are the only three that really matter.
> If Google forced it through but Apple refused, it would never be practical to enforce it. The numbers may not be as high, but they're plenty high enough that you couldn't cut all iDevices out.
Yes. Up until now, the amount of Google bullshit that Safari has saved us all from is _staggering._ It is unfortunate that this won't be another catastrophe deflected.
This is also why I'm concerned about legislation requiring Apple to open up sideloading onto their devices. As much as I love the idea of people having control over their own systems, in practice I'm afraid that it's just going to be the final nail that solidifies Google's complete control over the web all the way out to the client.
While I don't even get your point, most people aren't going to sideload anything. Just like pretty much no one sideloads anything on their Android phone. It's irrelevant.
You should meet my web developer colleagues. They consistently insist that I should switch from Firefox to Chrome any time I point out that something they implemented is not cross-browser compatible. Never once has anyone said that I should switch to Safari to get something to work. I think that speaks volumes.
I guess that's only for Safari, but not Chrome if you have that installed as well? Also, what if you never signed into your iCloud account? Is it impossible to disable?
> Also, what if you never signed into your iCloud account? Is it impossible to disable?
If you're not signed in, then it's not enabled, because the iCloud account is essential to the process: "An Apple server validates your device and Apple ID."
The ability to disable this is entirely irrelevant. If Chrome ships WEI the various Chromium forks will also let you disable it, even if Chrome doesn't. Or you can use Firefox.
The problem is when your bank, tax office or favorite streaming service starts requiring this to let you use their services. The problem is the ability for large fraction of casual users to have this at all.
I actually noticed this (and considered blogging to myself about it), but in practice the only reason it was not seen as an issue (IMO) is that, being implemented only on Apple platforms, there was no possible way you could really limit your services to clients using it. It was just an additional signal people could use.
However, the Google proposal is explicitly concerned with pushing this as an always-on feature.
> However, a holdback also has significant drawbacks. In our use cases and capabilities survey, we have identified a number of critical use cases for deterministic platform integrity attestation. These use cases currently rely on client fingerprinting. A deterministic but limited-entropy attestation would obviate the need for invasive fingerprinting here, and has the potential to usher in more privacy-positive practices in the long-term.
All that Apple implementing it ahead of time proves is that anyone hoping Apple will save us is naive.
> A deterministic but limited-entropy attestation would obviate the need for invasive fingerprinting here, and has the potential to usher in more privacy-positive practices in the long-term.
In reality: an attestation will be used along with fingerprinting.
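The entropy arithmetic is worth spelling out: a "limited-entropy" verdict still adds its bits to whatever fingerprinting signal a tracker already has; it only improves privacy if it replaces the fingerprint, which nothing forces. A back-of-the-envelope sketch, where all the concrete numbers are purely illustrative:

```python
import math

def bits(num_distinguishable_states):
    # Identification entropy contributed by one independent signal.
    return math.log2(num_distinguishable_states)

fingerprint_bits = bits(2**18)  # a rich browser fingerprint (illustrative)
verdict_bits = bits(4)          # e.g. a 2-bit attestation verdict

# Independent signals add. Unless trackers voluntarily drop the
# fingerprint, the verdict makes users MORE identifiable, not less.
combined = fingerprint_bits + verdict_bits
print(combined)  # 20.0
```

So "low entropy" is only a privacy win under the optimistic assumption that sites stop fingerprinting once they have the verdict.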
This feels like such a juicy and divisive area to me. There are an immense number of use cases where we'd like to know we're talking to a 'trusted' hardware and software stack on the web. For many years now, we have just assumed there is little to no trust in the stack, and architected and built accordingly. It adds an amazing amount of complexity and cost, limits features, and makes everything way, way harder than if you could assume a trusted stack.
At the same time, as is being pointed out quite vocally right now, 'trusted' is a very, very difficult concept when large tech monopolies are involved.
On the one hand, it's difficult because there are only a few companies in the world that can field large tech teams that deal with persistent threat actors, and therefore it would be very nice to be able to trust the security promises made. And, if those promises are trustworthy, they are better promises than any individual can make for their own software and platforms.
On the other hand, if you're a hacker (in the platonic sense), 'trusted' immediately codes to 'monopoly-backed', along with 'probably back-doored by a local government agency' and we head one more step down the primrose path of control, lack of innovation and finally perhaps a fascistic technology future controlled by a few players.
Ultimately, I think the solution here can only be successful if it involves a trustable, open hardware certification technology that's not registry based, e.g. can create strong local proofs that are independently verifiable. There are a few tech companies I know of working on this on the silicon side, but it's a very difficult problem, and I'm not clear if there's really enough demand to make them viable right now.
I guess I personally come down to leaving this turned on in Safari for now, and seeing what happens over the next year or two.
For me it is about for whom the supposed "trust" or "security" is offered: DRM-tech is discussed using these terms, but the goal is to afford trust to the developer or content owner, not the user.
Corporations have broken the trust of paying customers which is why they are not trusted. Corporations were given chances after chances to be responsible and not unethically prevent consumption of content after being paid, but many corporations repeatedly violated that trust (and continue to do so, as I write this) and have never acted in good faith.
I've been following your posts over the past few days, and your philosophy has never been clearer than it is here. You just straight up hate the idea of users having the upper hand over corporations.
>You just straight up hate the idea of users having the upper hand over corporations.
Corporations are not the only ones who want their content protected or their services secured against cheaters or spammers. Corporations are simply the ones most capable of investing in security. For example, look at VRChat avatar artists vs avatar stealers. My philosophy is that if an artist wants their avatar secured from being stolen, it is right for the avatar to be protected from users being able to copy it. It is less that I hate users having the upper hand against these artists, and more that I would like the artist's desires to be respected. From the trusted computing standpoint, I want indie developers to have a chance at creating things without having to invest as much time and money into the experience. If you can protect your scoreboard's integrity through trusted computing, then the time you have to spend removing hacked scores, detecting people with cheats, or licensing an anticheat goes away. It shifts the responsibility from the developer to the platform.
If you've never had something you've made violated by users, and been unable to stop them, you may not be able to empathize with people in those situations wishing for a solution to exist.
So-called "intellectual property" is not property. It cannot be stolen unless someone manages to deprive you of it. You do not own ideas, even ones you come up with yourself - you are merely given a temporary monopoly to encourage you to create more ideas. Or at least that was the deal. A deal that has been broken over and over again by extending the supposedly temporary monopoly to be longer than anyone's lifetime. It has been broken by the addition of technical means to further restrict the content. It has been broken by content not even entering the commons by the time the copyright expires, because it has been lost or depends on online components which no longer exist.
>So-called "intellectual property" is not property. It cannot be stolen unless someone manages to deprive you of it.
There are people who do not want others to use the avatars they made. This doesn't require the concept of IP or some definition of stealing. They want their work to be protected and are looking to platforms to protect it.
As an indie dev, the so called "trusted" environment is an obstacle to product distribution, is increasing friction and actively reducing revenue, I don't want any of that.
I would recommend not using it at first. If things start to get problematic you can start requiring a trusted environment for some things. For example you could save a user's highscore even if they are on an untrusted device, but it just won't show up on a global leaderboard.
In practice, "trusted Android" means Android shipped with Google adware and spyware (and possibly much more vendor-specific bloatware) which you can't remove. One can clearly see that it might not really be about trust...
If you don't want to be part of "State" society, then you communicate peer-to-peer with your friends. You have to choose whether you want the benefits of the State system and if it is worth the cost.
The problem with most of these systems is they can never cope with any edge cases. This means it works fine for 99% of the population but the other 1% can get stuffed.
It would be like having a robot deny you access to the office after work hours even though you only need to grab your car keys that you forgot. The system is designed to be secure so you can't talk your way past a robot. If it was a human, it would be much easier to reason with them (normally!) and find a solution that works.
Techies gonna tech though. "If there was a problem yo I'll solve it, check out my tech while the DJ revolves it."
> If it was a human, it would be much easier to reason with them (normally!) and find a solution that works.
The Internet has shown that if you drive down the cost of interacting with this human gatekeeper to zero (you can be anywhere in the world rather than a specific place and time), social engineering attacks inevitably result. That's how we get hackers getting into your bank accounts just because they are eloquent and they make a great case reasoning with the human gatekeeper.
Another problem is: what is the actual root of the attestation? If it were a means to say "yes, this is a real person", it might be useful. But this is simply system attestation, so there's no real way of knowing whether it would stop bad actors from doing bad stuff, or whether it would be misunderstood and misused like many other systems (CORS, anyone?).
The logical conclusion of this system is "if you have a legit system, you are legit; if you don't you aren't".
I kind of get both sides here. If we take the "see the best of others intentions" then a web that is populated by identified humans (and their authorised proxies!) is likely to be the "cleanest", most ideal web space we can see (a web full of sock puppets and link farms is not ideal).
The clearest end point for this is some government issued digital ID that just asserts who you are, acts as a login etc.
You can see this as a stepping stone to there, if you squint.
Is it the idealism of the 70s come to life? No. Is it some sane compromise? I think so.
What if we cannot trust our government? Sorry, but it's pretty certain that no internet is going to solve that. That's on the real world.
> a web that is populated by identified humans (and their authorised proxies!) is likely to be the "cleanest", most ideal web space we can see (a web full of sock puppets and link farms is not ideal).
Depends on your definition of "ideal", and whether you even want to strive for such an "ideal". To me this sounds more like a "sterile" web. If we temporarily assume that humans won't do what they're experts at (finding ways around that system too), and take at face value that this will lead to this "cleanest" web space, we are still assuming that that's what consumers want. I would argue that the very existence, and success, of the web in the face of approximations to this "ideal" space in the native-app world disproves this theory. We have the App Store, we have lock-down control and identifiability for apps, and yet the web still manages to dominate commerce in the face of this. Consumers still end up going to the web, and arguably increasingly so with things like Figma. So where are the cries for this "sanitized" web? The demand certainly doesn't appear to be on the consumer end, that's for sure.
"Clean" / "sanitized" are not the terms I really want. I think a web (living under a democratic legal system) that uses sane forms of digital identity verification will help reduce the ridiculous levels of online fraud we are seeing. (yes, citation needed)
To me that means (again, under legal/democratic protections) using some centralised public/private key scheme (probably) and a curated environment, and this is (being very generous) a first step towards that world.
I guess I just don't see this "insane level of fraud"(?) By this I mean that it doesn't really affect my experience on the web, even if I were to entertain the idea that it does, in fact, exist. When I think about my annoyances with the web, they aren't about how I am drowning in link farms or whatever -- ironically my true consumer annoyances are quite the opposite: all the big players have stopped providing me value. Google shows me entire pages of ads before any relevant organic search results. Reddit killed my preferred third party client. Meanwhile Twitter, well, you know. Nothing proposed here does anything for that, and I get that it's not trying to solve that, but my point is that none of my "top 10 problems with the web" are being solved by this humongous change. My problems just have nothing to do with sock puppet accounts or whatever. Perhaps that's a top 10 problem that advertisers have with the web, but that's not really super compelling to me (the same way my problems don't seem to be compelling to them). If anything, as stated in various places, these hyper centralized ID systems increase the likelihood that my problems will never be solved. If it becomes even harder than it already is to make a new browser or a new search engine, then I guess I'm just flat out of luck. The era of "reasonable search results" will be solidified as a temporary blip on the timeline of the web.
I do fraud prevention as my job. The "ridiculous levels of online fraud" isn't happening. There's been a mild uptick in fraud which you would expect from an event that impoverishes millions (covid), but all those people on your favorite website that are so crazy they must be bots spreading misinformation? Nope, those are almost entirely real people. Millions of people are just that awful, hateful, spiteful, dumb, whatever.
> then a web that is populated by identified humans (and their authorised proxies!)
You wrote "I kind of get both sides here", but, to be clear, this is the polar opposite of both the WEI proposal and Apple's thing, both of which go to some lengths to not allow identification of actual humans (they focus on proving that the device is legit).
Because proving the human is legit is the next obvious step; it's where many governments (democratic and not!) are heading. I mean, it's not a huge leap to go from "this device number 1234 is legit" to "paul bought device 1234" or "paul used device 1234 to access bank account 5678".
If you require some kind of authentication process to prove your identity, it doesn't matter whether your device has TPM-supported device attestation or not. If Apple or Google wanted to do that, they already have the in-browser infrastructure for it in the form of login with Apple or login with Google. Making such a thing anonymous for third parties (so they just know it's a human, rather than which human) would be trivial.
Nobody ever asks why this has all become "necessary", even though that is literally the pertinent question. Why do we need such ridiculously strong attestation of identity? How did the Internet get along so far without it, if it's really needed?
Well, nobody is actually proposing this at the moment. Heck, neither Apple nor Google's scheme even gets close. All their schemes purport to do is ensure the "integrity" of the platform.
Integrity how, exactly?
> For example, this API will show that a user is operating a web client on a secure Android device.
So basically, it does not tell you that the user is a unique person, or give you any kind of usable identifier for a person. All it tells you, in case of this example and Apple's, is that the device is not rooted or jailbroken.
In practice, is this concept useful? Only as part of a larger cat and mouse game. Just like copyright protection schemes, remote attestation schemes are limited by what they're actually attesting. Very little can be done to stop cam rips in movie theaters, or any number of in-between steps that exploit the fact that a movie is just a series of pictures and PCM samples at the end of the day. And likewise, devices may be expensive, but there's nothing stopping someone from acquiring many of them to do operations on. In fact, many people already own swaths of Android devices specifically for cheating the system. When they can be had for as cheap as $50 a pop in some cases, it's not really a meaningful barrier.
So what does this actually do? It just makes it more expensive and complex to run bot operations, and if you can raise the cost enough to sink the break-even point of doing so, then theoretically you've won! ... But it won't, because there's a lot to be gained by spamming and scamming people. All of these years of countermeasures and we're not even close to getting there. The amount of money that flows in the industry of cheating these systems is more than enough to just pay the cost.
Adding government IDs to the mix won't change anything. Almost every spam operation has a real person behind it, so getting a blind attestation that a person is indeed a citizen tells you almost nothing about them. I think just about the only way that could aid in any way is if it were set up in such a way that you did in fact receive a unique ID for each person, rather than just an attestation that you're dealing with a legitimate thing.
And if that's the end game of the Internet, then honestly, the whole experiment was not worth it.
> How did the Internet get along so far without it, if it's really needed?
By each individual site expending a great deal of effort to identify their users. Or by offloading it to someone else expending a great deal of effort like putting their site behind Cloudflare or restricting e-mails to legit providers.
> By each individual site expending a great deal of effort to identify their users.
Very few sites are putting in any significant effort to identify their users. Those largely predatory sites shouldn't be setting policy for the entire web.
There is absolutely nothing to prevent troll farms from buying dozens of cheap trusted PC systems & using screen share to automate the heck out of these devices. Or plugging in fake mice/keyboards that directly feed input.
Secure hardware feels like it has no upside. It will not even be a speed bump for anyone spreading disinformation at any level of scale. It mildly inconveniences only extremely unsophisticated/casual bad actors. And it greatly constrains who can make a browser and those with non-Trusted devices, such as Linux users or people who turn off Trusted Boot.
This is not going to work. The governments will create millions of fake identities to spread their propaganda, same way as they are making fake passports for spies.
Yeah, good point, let's just give up and let Apple, Google and Microsoft spread their propaganda instead. Which is just going to be US propaganda for everyone with zero chance for me to change how US controlled corporations behave.
At least I have some say who is screwing me when my government is democratically elected (to whichever degree of democracy you have).
> let Apple, Google and Microsoft spread their propaganda instead.
Who said you had to choose between these two scenarios again? It's so bizarre that people see government as an oppositional force to government contractors operating under government charters.
But websites don't care about government-issued IDs. They have their own IDs, and to create those you have to fill out a form. If the form is successfully rate limited then the cost and speed at which fake IDs can be created gets prohibitive even for governments, unless you think they only need a small number of accounts.
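The rate-limiting idea above can be sketched as a token bucket. This is an illustrative toy, not how any real signup system works: the per-IP keying and the specific numbers (3 accounts burst, one refill per hour) are arbitrary assumptions, and real systems key on much richer signals.

```python
import time
from collections import defaultdict

class SignupRateLimiter:
    """Toy token bucket: each client may create at most `capacity` accounts
    in a burst, with the bucket refilling at `rate` tokens per second."""
    def __init__(self, capacity: float = 3, rate: float = 1 / 3600):
        self.capacity = capacity
        self.rate = rate  # tokens refilled per second
        # Each client starts with a full bucket.
        self.buckets = defaultdict(lambda: (capacity, time.monotonic()))

    def allow_signup(self, client_ip: str) -> bool:
        tokens, last = self.buckets[client_ip]
        now = time.monotonic()
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.buckets[client_ip] = (tokens - 1, now)
            return True
        self.buckets[client_ip] = (tokens, now)
        return False

limiter = SignupRateLimiter(capacity=3, rate=1 / 3600)
# A single source can create a few accounts quickly, then gets throttled:
results = [limiter.allow_signup("203.0.113.7") for _ in range(5)]
assert results == [True, True, True, False, False]
```

Whether a state-level actor with thousands of IPs can sidestep this is exactly the dispute in the surrounding thread; the sketch only shows why fake-ID creation stops being cheap *per source*.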
You don't think the governments will force Apple, Google, etc to attest their things? I mean, they made them provide access to their firehose of data so they could mine it for metadata...
We're talking about the same Apple that's currently threatening to yank some of its most popular products from the UK rather than disable e2e encryption? The same Google that reacted to the Snowden memos by putting the entire engineering division in an encryption Code Red, such that inter-dc links were almost fully encrypted just a few weeks later?
And that's for morally ambiguous cases where the justification is popular and well established things like crime fighting, child porn and so on.
We don't know what will happen in future, but given the story so far, the chances of these companies saying to governments, sure, have 500,000 free accounts so you can spam our users with incompetent political propaganda, is virtually zero.
Yes, we're talking about the same Google that only reacted that way to PRISM when a leaker blew the whistle (do we need to wait/hope for a whistle blower for every government thing they comply with?). The same Apple that moved the data for all Chinese customers to data centers where the CCP can access/monitor them. The same Apple that censors/filters the app store for Chinese users to enforce government policies.
The chances that they would comply with future government requirements cannot possibly be "virtually zero."
China is kinda irrelevant here because western social networks and services are blocked there anyway, so the Chinese government can indeed compel Chinese companies to spam users with political propaganda (and does), but western companies are irrelevant in that process.
For Google and PRISM, I'm sure it won't change your mind, but I worked there at the time and the reaction was genuine. If there were people inside the firm who knew about it at all it must have been a very small group of spies/double agents, and such people were never detected despite a thorough search. Given that it was all based on fiber taps done by telcos though, it's not clear why they'd need any insiders. The assumption of formal cooperation was based on the phrasing of one or two sentences in some leaked documents, but the way the whole thing was set up didn't actually require it so, what those insiders would have been doing was a bit unclear.
Anyway, this is all by the by. We can't know what will happen in future. But if they won't budge on E2E encryption then it seems unlikely they'd be willing to bypass anti-spam measures, which is far more detectable, far less justifiable, and probably doesn't fit within any existing laws.
Thanks, that actually does make me feel a bit better about Prism.
Do you have any experience with how things have changed over the last few years at Google?
I have a friend who said that 2016 was really a turning point in the culture. Prior to that most people were all about liberal values like free speech and user freedom, but in the last 6 or 7 years it's become very "moderation" or "censorship" friendly (depending on your views), including for things like the OP topic. On the plus side, he said that privacy is something that used to be an afterthought but is now in the cultural zeitgeist, so it's not all bad. Do you have any experience you're willing to share on that?
I left in 2014 so don't know what happened after that. It does seem that 2016 was a turning point for a lot of institutions. The Google that believed in empowering people through "making the world's information universally accessible and useful" was definitely dead by that point, although they still claim that's the mission.
I don't agree that privacy was an afterthought before then. There were a lot of internal controls and privacy considerations had been a part of the design process even when I first joined in 2006. Of course the level of effort ramped up over time as the company grew. The primary constraint then as now was simply that most users trust tech firms, don't include them in their threat model and will reject even tiny amounts of inconvenience in the name of privacy. So that really heavily constrains what can be done. For example it kills most attempts at proper end-to-end encryption, leaving us with this sort of strange pseudo-e2e-encryption that's more a legal hack than anything serious (the company that supplies you with the encryption equipment is your adversary, which makes no sense in any classical conception of cryptography).
Whether this is bad or good really depends on the details and the overall strictness. It seems like none of the articles I've seen on the subject go into depth explaining what makes a device "legitimate."
This could be a really good thing if all it's doing is proving that your device isn't malicious, or being better able to detect whether you are a bot. If our end-user experience doesn't change but we stop filling out CAPTCHAs and seeing Cloudflare bot checker load screens, that would be a big plus.
This could be a really bad thing if it means that the web now will just widely reject alternative browsers or computers that have elevated administrative permissions.
I think if we want to see how this plays out, we can look at the Google Play store. A common example that already exists is that banking apps will block rooted Android devices, and it sounds like this attestation API will have the ability to do something similar.
In my opinion, that situation seems perfectly reasonable, and it also seems like most websites don't have the same incentive to block modified devices as higher security services like banks.
Legit for banks tends to just mean that basic security rules are being enforced, like app separation and credential protection. Technically, what banks and other fraud-sensitive services care about is not whether your device allows you to do things or not, but whether it allows malware to do things. If malware can get root then it can steal credentials that would let it impersonate the user and initiate cash transfers.
That's why Android devices allow you to obtain root and unlock the bootloader but factory reset the device whilst doing it. Banks don't care about that feature because it's not accessible to malware and even if someone does it (e.g. because they physically swipe your phone for a few minutes) the login cookies are wiped in the process.
The problem with rooting or jailbreaking outside of this process is that it could have been done by malware instead of the user - you can't tell post-hoc - and even if it was done by the user, rooted phones often have semi-broken security systems e.g. they turn sudo on or users run random apps as root that were grabbed off anonymous GitHub accounts. From the bank's perspective all this is highly risky both for you and more importantly for them, as ultimately weak security = fraud = reputational and financial risk to the bank.
Still, realistically, what banks care about is devices that were silently rooted by malware (or physical thieves). Individual Linux hackers are such a tiny number of people they'd probably be OK with just letting those people get rinsed if they run malware. The problem is, how do you know which is which?
A meet-in-the-middle compromise for the banking use case is for some neutral standards body to certify OS builds against a set of concretely specified security goals, whether they're open source or not. There's no specific technical problem, it's a social issue that it's expensive to do such audits and open source hackers don't want to pay for things. LetsEncrypt solved the same problem with SSL by just brute forcing the issue with money, which may be the way Google/Apple choose to go here. If you want root on your device to customize your window manager or something then no, don't give yourself root, instead spin a deterministic OS build with whatever changes you would have made using root, ensure the OS build is secure and then submit it for auditing. Done properly the audit can be mostly automatic, e.g. if the SELinux rules match the set found in a base distro that's already trusted, then you can know that credential protection/debug APIs are configured as before, so then you can wave through changes to non-critical OS processes.
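The "mostly automatic" audit described above could be sketched like this, assuming the security-critical policy of a build can be canonicalized and compared against an already-trusted base. The rule strings are made up for illustration; they're not real SELinux policy.

```python
import hashlib

# Made-up SELinux-style rules standing in for a trusted base distro's policy.
TRUSTED_BASE_RULES = {
    "allow app_t app_data_t:file { read write }",
    "neverallow app_t keystore_data_t:file *",
}

def policy_fingerprint(rules: set[str]) -> str:
    # Hash a canonical (sorted) form so rule ordering doesn't matter.
    canon = "\n".join(sorted(rules)).encode()
    return hashlib.sha256(canon).hexdigest()

def audit_build(build_rules: set[str]) -> bool:
    """Wave the build through if its critical policy matches the trusted
    baseline exactly; anything else would go to a human auditor."""
    return policy_fingerprint(build_rules) == policy_fingerprint(TRUSTED_BASE_RULES)

# A build that only changed the window manager keeps the same policy:
assert audit_build(set(TRUSTED_BASE_RULES))
# A build that dropped the keystore protection rule is flagged:
weakened = TRUSTED_BASE_RULES - {"neverallow app_t keystore_data_t:file *"}
assert not audit_build(weakened)
```

The point of the sketch is just that the expensive part is social (who runs and pays for the auditing body), not the comparison itself.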
What banks really care about is ticking a liability box. If it screws over a bunch of users then they won't care, because every other bank will be doing the same.
Or in other words: phone banking in western countries is a joke, because the people that might've popularized it were shut out of the system before it gained popularity.
Now that's an interesting idea. It keeps options open which is vital. I suspect it will be easier just to have a second "normie" device but I love the idea.
Imagine I am Twitter / Instagram and want to be sure that User X is the human owner of the account and just posted a brilliant comment / photo. Or a bank wanting to be sure to move money to a new account.
I can use webauthn to sign a nonce so I can be sure the device sending the request has access to private key for the HSM / secure enclave.
Now if the device is compromised and the OS is under malicious control, does this still hold? Can we assume the secure enclave holds up even if the OS fails? I know the secure enclave is basically sealed off, and since it is what signs my nonce, then yeah, I think even in the face of OS compromise the webauthn section works.
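The enclave-signs-a-nonce flow can be sketched as follows. This is a toy model, not real WebAuthn: the `SecureEnclave` class is invented, and an HMAC over a secret held inside the object stands in for the enclave's asymmetric key pair, purely so the sketch runs on the standard library (real WebAuthn has the server verify with a registered public key).

```python
import hmac, hashlib, secrets

class SecureEnclave:
    """Toy model: holds a key that 'never leaves' the object. Even a
    compromised OS can ask it to sign, but cannot read the key itself."""
    def __init__(self):
        self._key = secrets.token_bytes(32)

    def sign(self, challenge: bytes) -> bytes:
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

    def verify(self, challenge: bytes, sig: bytes) -> bool:
        # Stand-in for the server checking the registered public key.
        return hmac.compare_digest(self.sign(challenge), sig)

enclave = SecureEnclave()

# Server side: issue a fresh nonce per authentication attempt.
challenge = secrets.token_bytes(32)
signature = enclave.sign(challenge)
assert enclave.verify(challenge, signature)     # device really holds the key

# A signature over an old nonce can't be replayed against a new one:
new_challenge = secrets.token_bytes(32)
assert not enclave.verify(new_challenge, signature)
```

Note what the signature does and doesn't prove: possession of the key, yes; that the user saw the same content the server sent, no. That gap is exactly where the rest of this subthread goes.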
Yes the nonce I generated has been signed, so the user has the device, but has the user seen the same content / bank transfer details that I am seeing? The user thinks they are sending grandma ten bucks but actually they are transferring 5000 to dodgy account, think they are ordering a sweater on Amazon but actually sending 10 airpods to a new address.
Yeah secure enclaves and other HSM identity systems work and webauthn would greatly reduce the amount of authentication failures - that's a huge win.
And it leaves anti-fraud measures pretty much where they are now - how likely is it this guy wants 20 airpods sent to Alabama?
The thing this is solving is "can we make the entire device as trustworthy as the secure enclave?" And frankly the answer is no. And if you cannot all you are doing then is saying does this device look like a human eyeball which is adjacent to "you cannot browse the web with adblock on".
So I was wrong earlier. This is all about whether we can trust that the user has seen the same content the web server has sent. And no, we cannot, unless we can verify the whole stack - hardware to OS to JavaScript. And we are a long way from that.
Maybe we can build a separate device for that - but really, if we could, we could build the first one just as securely.
This is basically trusted computing and who gets to sign which binaries?
Webauthn exists and is great and should be used everywhere.
We are fairly sure that the secure enclave / external HSM is so hard to break that we can trust webauthn works against all but APT-level attackers.
Any issue over "is the content the webserver gets what the user saw" (i.e. are they compromised) is an issue of fraud prevention.
I have a separate device from my bank - I have to enter the contents of the transaction. The trust level then is through the roof. Both devices need to be compromised at the same time; one has no internet access.
At this point it seems obvious who could act as trust providers for content - anyone able to get a simple HSM into my hands as a separate device.
So short answer: We need air-gapped HSM with keypads.
Webauthn works great for authentication. We can pretty much trust the secure enclave to sign a nonce even if the OS is compromised.
But we cannot trust the content. And it gets high friction. But basically you need the air-gapped HSM to take a nonce typed in by the user, based on human-verified values (i.e. dollar amounts) and maybe hash values (though possibly compromised).
And such high friction destroys many assumptions and business models. It might be the best solution - I'll see if I can find any reading materials.
> The clearest end point for this is some government issued digital ID that just asserts who you are, acts as a login etc.
Already exists in a bunch of countries. Works better in some than in others.
The issue is that you don't want everything tied to that ID. Ideally, the ID would just attest that some random pseudo-ID is real. Like Webauthn, kinda.
Hash and sign functions usually don't allow reversal of those operations. Attestation doesn't require more cryptography.
It's kinda silly to start discussing implementation details of something that doesn't exist. Not to mention considering the alternative, which is quite a bit more invasive than having an attested private pseudo-identity would be.
I'm less concerned with reversing hashes and more concerned with tracking via the attestation provider.
What is stopping them from recording the value returned to you that is then passed to the site you tried to visit? Does the data provided to the integrity checker allow for identification? Could the original vendor pass some value to use in the integrity check to prevent replay attacks, and could that value itself encode your personal information?
> What is stopping them from recording the value returned to you that is then passed to the site you tried to visit?
> Could the original vendor pass some value to use in the integrity check to prevent replay attacks, and could that value itself encode your personal information?
Well that value is most likely a cryptographic signature, a "challenge" or a combination of both. Unless there's some separate payload you can't really hide arbitrary data in hashes/signatures that would be used in such a process.
In the end "could" is a very loose word, PII as such is not really part of the process. In this current (Apple's PAT) case, the information is "you have an Apple device", can't currently hide anything else in that.
Thanks for the response. As a second question, what would prevent someone with an "approved" Apple device from firing off a bunch of token requests and then distributing those tokens to different entities for those entities to submit to the origin to pass the validation test?
Good to know. And the rate limits themselves have to apply to user agents in the old style, right? Because there is no identifying information apart from current browser fingerprinting methods. If abused, do we foresee captchas having to be placed as a guard against attestation abuse?
Remember AllAdvantage? That was a service around the turn of the century that showed you ads on your desktop and paid you for it. But only if you were actively using the PC. People used mouse wigglers to fake it and there was a little arms race.
This tech would be their wet dream. You could tell if a request is from a real browser or from a script. You could disable attestation if an untrusted driver is used (to simulate inputs) or the web browser is automated otherwise. Really disturbing tech.
> You could tell if a request is from a real browser or from a script.
Today websites already know if the request is from a real browser or not just by integrating with reCAPTCHA or hCAPTCHA. This is just taking a very popular category of security product and tightly integrating it with the browser itself.
Today, you can take a philosophical stance and categorically refuse to use any website that uses reCAPTCHA/hCAPTCHA. Tomorrow you can take a philosophical stance and refuse to use any website that uses PAT.
You're missing a huge difference here: A captcha works on top of the existing web. I can use it on any platform to prove that I am a human. Whereas the proposal/implementation here effectively locks out any platform not explicitly allowed by the website operators. That is a huge blow to anything not from Google/Apple/Microsoft. Open source and any potential new entrants to the market would be dramatically limited if not killed entirely.
The big difference then vs. now is that with CAPTCHAs you can (generally) choose to complete them from a wider range of browsers and devices that have no corporate approval (unless it's that one Cloudflare CAPTCHA that gets stuck in an infinite loop). So even if it's painful, you can still access most websites. With attestation you don't have that choice.
Why can't you fake remote attestation? I imagine it's a bit more involved than swapping a user agent but is there some magic mechanism that makes it impossible to spoof?
The token received by the server contains information about your platform albeit not very much beyond "it's an Apple device and we think it's enforcing normal security rules".
Why can't you forge the token? Because it's digitally signed.
Why can't you generate a token for your own website, and then "replay it" to another? Because the token embeds the "challenge" which is random numbers selected by the server. The server compares the challenge in the token with the one it generated, usually it will statelessly hash something about the client connection like a cookie. So you can't just substitute a token from one site for another.
Why can't you generate a token for the real intended site but then grab it out of a real iPhone? Because the iPhone has good security which will stop you from doing that (equivalent to jailbreaking it).
How is the token digitally signed? By (in this scheme) Apple, so we can assume you will find it hard to steal their server side keys.
When do Apple digitally sign such a token? When you present a pile of data to their servers.
What's in that data? Unknown as it's (probably) encrypted, but most likely it is various version numbers and device IDs. At the very least it can change at will. Also, it's going to be signed by a device specific key pair. The device generates the private key on first boot at the factory and that key never leaves the device. Apple record the matching public key. So, now they can identify when the data pile comes from a real device. Also if someone successfully steals the private key from a device, it can be revoked by Apple server side.
Real RA schemes are more complicated than that usually for privacy and anti-tracking reasons. What's outlined above is a generic, textbook implementation.
So to recap: you can't steal a token because the device's security won't let you, you can't steal the private key you need to get a token because there's no way to extract it from the hardware at all (short of putting the chips under a SEM), and you can't swap one token for another because of the challenge.
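The challenge-binding part of the recap can be sketched as follows. This is a generic toy model, not Apple's actual protocol: the function names and origins are invented, and an HMAC stands in for the attester's asymmetric signature purely so the sketch runs on the standard library.

```python
import hmac, hashlib, json, secrets

# Stand-in for the attester's (e.g. Apple's) server-side signing key.
ATTESTER_KEY = secrets.token_bytes(32)

def attester_sign(challenge: bytes, origin: str) -> bytes:
    """Attester signs the server's challenge bound to the requesting origin,
    after (in the real scheme) checking the device's baked-in key pair."""
    payload = json.dumps({"challenge": challenge.hex(), "origin": origin}).encode()
    return hmac.new(ATTESTER_KEY, payload, hashlib.sha256).digest()

def server_verify(token: bytes, challenge: bytes, origin: str) -> bool:
    """Server recomputes what a valid token for *its* challenge and *its*
    origin would look like; anything else is rejected."""
    expected = attester_sign(challenge, origin)
    return hmac.compare_digest(token, expected)

# Site A issues a challenge; the device gets it attested for site A.
challenge_a = secrets.token_bytes(16)
token_a = attester_sign(challenge_a, "https://site-a.example")
assert server_verify(token_a, challenge_a, "https://site-a.example")

# Replaying site A's token at site B fails: challenge and origin don't match.
challenge_b = secrets.token_bytes(16)
assert not server_verify(token_a, challenge_b, "https://site-b.example")
```

The sketch shows why substitution across sites fails; the parts you can't demo in software (the unextractable device key, the sealed OS) are exactly what the hardware provides.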
This makes sense for iPhones but if attestation is possible on macs, as I believe it is, as well does that not sidestep most of the “equivalent to jailbreak” requirements?
Macs have alternate mechanisms that achieve the same thing. SIP de-privileges the root user, the boot filesystems are cryptographically sealed, and the kernel will prevent apps tampering with each other to at least some extent.
So whilst you can "jailbreak" a Mac you can only do it by following Apple's procedures, which leaves a trace that can be detected in the remote attestation. At least I assume that's what's going on from their docs.
> Why can't you generate a token for your own website, and then "replay it" to another? Because the token embeds the "challenge" which is random numbers selected by the server. The server compares the challenge in the token with the one it generated, usually it will statelessly hash something about the client connection like a cookie. So you can't just substitute a token from one site for another.
Your fake iPhone could talk to a cooperating server which presents the same challenge to a real iPhone. In fact, a service could accept challenges, instruct a friendly iPhone to request `/?code=$foo`, then return the friendly iPhone's token to the original client.
The challenge includes the origin and all communication is protected by TLS. I don't know what happens if you add extra trusted root certificates to do MITM attacks, but in principle nothing stops the root store being a part of the remote attestation.
On Intel & similar platforms, some forms of attestation are bidirectional. There is both a remote server attesting to the code's validity and the local device is able to attest that the code is run in a manner that doesn't permit the user to modify or inspect it. This is the basis of almost all practical DRM methods and is provided under the guise of the Trusted Platform Module.
One interesting application of this kind of technology was to remove the 'analog hole'. When playing protected content, even the video stream from your PC to your monitor is actually encrypted in a manner that ostensibly prevents anyone from intercepting it.
> One interesting application of this kind of technology was to remove the 'analog hole'. When playing protected content, even the video stream from your PC to your monitor is actually encrypted in a manner that ostensibly prevents anyone from intercepting it.
And yet, despite these fucking morons in the standard committees wasting (probably) millions of dollars in implementing CSS, HDCP and whatnot, and often enough bricked existing devices by revoking keys, HDCP strippers remain available for a dozen dollars or so on ebay, or AnyDVD so you don't have to bother with any copy protection at all.
Sorry for the nitpick, but that isn't the analog hole. The analog hole occurs when your monitor displays the video. At that point, you can point a camera at it and record the video, albeit with a loss in quality. Removing the analog hole would require pushing the attestation and encryption one layer further, into your brain or eyes.
Keys sealed in hardware from the factory. SafetyNet already does this on Android from boot up to apps (that use it, which includes shit like McDonalds...). This would extend it potentially up until a website itself.
Really powerful and useful if you need strong integrity. Really really painful if you want full(er) control over your device.
I don't know. There have been farms of iPhones overseas, connected over residential-exiting VPNs in the USA, with warm iCloud accounts, for ages. There is nothing magical about having to own an iPhone in order to bot farm.
Giant social media companies are reacting to slowing growth & customer concerns in the ad market, they don't have a secret forward-looking agenda but a backward looking one that has a lot of diffuse agitations. This can reduce the perception of click fraud, even if it doesn't do much about it in reality.
Maybe not impossible but my understanding is the TPM and the closed source nature of the system level code will make it difficult enough that 99% of users will not be able to do it, which is what industry wants. They're never worried about diehards and hermits. Those people will be confined to their caves & made irrelevant.
That's backwards. It's the diehards (i.e. determined adversaries) they are thinking about. 99% of users are already not doing this stuff. They want a way to continue servicing that 99% and shut out the remainder. That's the whole point.
Google/Microsoft/Apple essentially did this with HTTP/3 too. None of their shipped browsers are able to connect to a non-"CA TLS" HTTP/3 endpoint. To host a HTTP/3 website visitable by a random normal person you have to get continued approval (every 3 months min) from a third party CA corporation for your website.
There are two reasons this is not comparable to the remote attestation proposal that Google is currently proposing:
1. The only things that WebPKI CAs are required to attest to is that domain validation was properly completed and that the private key is not compromised. The system is designed (in both intent and practice) for any website to be able to easily get a certificate, and even the most untrustworthy, undesirable websites can and do get certificates on the regular. In contrast, Google's remote attestation proposal is clearly intended to assess the trustworthiness/desirability of the client.
2. The TLS requirement imposes a burden on website operators but provides a clear benefit for end users, which is totally in line with the Internet's Priority of Constituencies[1]. In contrast, Google's attestation proposal places a burden on end users for the benefit of website operators, which violates the Priority of Constituencies.
Additionally, I must note that Firefox also requires a TLS certificate for HTTP/3 (as they did for HTTP/2). Not sure why you'd omit Mozilla from your list of browser makers doing this, but it's a misrepresentation to imply that this is something only "mega-corp browsers" do, when there is actually broad agreement that this is a good thing.
Because Mozilla didn't create QUIC or push the QUIC based HTTP/3 through the IETF like Google and Microsoft did. If anything I should've left out Apple, not added Mozilla. But yeah, Mozilla is using the same HTTP/3 libs as everyone else so its browser is inherently broken too.
But this only becomes a serious problem when HTTP/1.1 support is removed. Mozilla will never remove HTTP/1.1 support from Firefox. Google/Microsoft/Apple are chomping at the bit to remove HTTP/1.1 from their products.
Mozilla was the first browser maker to announce an intent to deprecate non-secure HTTP[1]. Even if they keep HTTP/1.1 support, at some point they will require TLS for it, just as they already do for HTTP/2. This is not something new with HTTP/3 nor is it some big-corp conspiracy.
Well shit. I'm wrong and things at Mozilla are much worse than I imagined.
re: HTTP/2, yes, everyone is well aware it didn't allow HTTP connections from the start. But there was no risk of HTTP/1.1 going away at that time. And you can technically still use a non-CA self signed cert for the implementations of HTTP/2 in major browsers. But it is also a bad protocol like HTTP/3.
I have no idea what conspiracy theory stuff you're going on about. People just haven't thought through the consequences of these design decisions outside of their work headspace bubble. Much like with WEI.
All of these proposals are in fact the same thing. They're designed to obfuscate, complexify, and over-engineer the Internet to fuck people over. I don't care if my Magic the Gathering fan site has an auto-rotated 3 month expiry TLS certificate to "protect" the users. Those of us who have been online since before the 21st Century see this exactly for what it is: a cash grab and an increasing set of mechanisms to tighten the screws of monopolies like AWS, Google, Apple, and Microsoft, and to lock down our devices and the web into a walled garden that only the tech giants can control.
> I don't care if my Magic the Gathering fan site has an auto-rotated 3 month expiry TLS certificate to "protect" the users.
You don't based on your threat model. Other people have other threat models. I don't want potentially tampered/malicious content/JavaScript hitting my browser, I can also simply not visit your site. Such a simplification can not be made on the wide (and hostile) web. TLS is trivial enough to be the norm.
We can also draw parallels with food safety. Feel free to cook whatever you wish however you wish at your own home. If you want to offer it to people passing by on the street you have to follow food safety rules.
Yep. LetsEncrypt is great but everyone centralizing in them is not so great. Normal browsers having the ability to connect to a bare HTTP endpoint in HTTP/3 would solve any problems that might arise from this centralization. It's a straightforward and easy thing to fix for the HTTP/3 lib devs and mega-corp browsers using those libs. But no one cares about it.
> While TLS 1.3 can still run independently on top of TCP, QUIC instead sort of encapsulates TLS 1.3. Put differently, there is no way to use QUIC without TLS; QUIC (and, by extension, HTTP/3) is always fully encrypted.
Basically there is no HTTP/3 without a TLS certificate.
I'm not sure what "problems that might arise from centralization" might be. There are many different TLS certificate providers from different CA roots.
Is your gripe that you don't like TLS? Judging by how long the migration from TLS 1.1 to 1.2 took, I assume we're at least 10-15 years away from a world where everything is encrypted by default without backwards compatibility (if we ever get there at all).
I'm flabbergasted that that's what you took away from my comments. I thought I was very clear. My issue is the lack of HTTP support in the HTTP/3 implementations shipped by the mega-corps (CA HTTPS only). CA TLS is definitely the least-worst solution we have, and I am not against it. I am saying that major browsers' HTTP/3 implementations' lack of bare-HTTP support, combined with the short TLS cert lifetimes these days, is effectively attestation, and that's bad. "Basically there is no HTTP/3 without a TLS certificate." is bad.
It could be slightly mitigated by the mega-corps' shipped HTTP/3 implementations accepting self-signed certs (not rooted in a CA) with a scaremongering click-through. But that's no longer an option either. If no company running a CA will give you a cert, you'll simply be unvisitable (on HTTP/3). It makes the web something only for commercially approved sites.
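For what it's worth, the client side of that click-through fallback is tiny. A sketch using Python's stdlib `ssl` module: encryption stays on, CA validation comes off, which is exactly the trade-off the scaremongering dialog warns about.

```python
import ssl

# Sketch of a "click-through" client context: still speaks TLS, but skips
# CA validation so a self-signed / non-CA cert is accepted. This trades
# authentication away while keeping encryption on the wire.
def make_clickthrough_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # no longer tied to a CA-verified name
    ctx.verify_mode = ssl.CERT_NONE     # accept self-signed chains
    return ctx
```

(`check_hostname` must be disabled before `verify_mode` can be set to `CERT_NONE`, which is why the assignments are in that order.)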
It’s the opposite of the attestation scheme under discussion, because it requires the server to satisfy attestation, while the client can be whoever. Whereas the Apple and Google schemes require the client to satisfy attestation.
I agree with you on the necessity of maintaining support for bare HTTP in the Web ecosystem. But I think you’re not likely to get as much support on this, simply because far fewer people run servers than clients.
I kind of doubt that clients will ever connect exclusively via HTTP/3. I think browsers keep bare HTTP support. Maybe at some point it may be hidden behind a client config flag.
So the gripe is TLS then, because that cert is less about attestation and more about not wanting Verizon et al. to inject ads or snoop on your data between customer and server.
Attestation is at best a transient side effect of proving your ownership/control of a domain name...
There are perhaps better ways to articulate the point I think you are trying to make about a "closed club" of Certificate Authorities.
That said, a basic Google search demolishes that point: there are alternatives to LE.
Also, bemoaning the 90-day period seems really weird.
Security measures like Cloudflare's anti-DDoS reverse proxy rely precisely on widespread TLS, and can deny access to any client not performing a sanctioned TLS handshake (like curl, scrapers, or even old browsers that can only manage TLS 1.2).
"There are many different TLS certificate providers from different CA roots."
- Yes, but there are only a handful of browsers, and they preselect the default CA providers. The average user is not going to be able to configure custom CAs and will effectively be denied service should those pre-selected CAs go rogue.
Kinda surprised there aren't a few CAs that set up Let's Encrypt-like automated infrastructure and charge a small subscription fee for certificates. I'd pay $1-$3/mo or so to avoid a monoculture + big attack surface, but I don't really want to give up the convenience of Let's Encrypt.
I know there's a big barrier to entry for being a CA (as there should be), but it shouldn't be impossible.
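The automation barrier itself is low, which is part of why the subscription idea seems plausible. Here's a toy sketch of the HTTP-01 challenge responder at the heart of ACME (RFC 8555): the CA fetches a token path and expects the key authorization back. The token and key-authorization values below are made up, and real clients like certbot also handle account keys and CSRs.

```python
import http.server
import threading

# Toy HTTP-01 challenge responder in the style of RFC 8555: the CA fetches
# /.well-known/acme-challenge/<token> and expects the key authorization back.
CHALLENGES = {"tok123": "tok123.keyauth-thumbprint"}  # illustrative values

class ChallengeHandler(http.server.BaseHTTPRequestHandler):
    PREFIX = "/.well-known/acme-challenge/"

    def do_GET(self):
        token = self.path[len(self.PREFIX):] if self.path.startswith(self.PREFIX) else None
        if token in CHALLENGES:
            body = CHALLENGES[token].encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the sketch quiet
        pass

def serve_challenges(port=0):
    # port=0 lets the OS pick a free port; real responders listen on port 80
    srv = http.server.HTTPServer(("127.0.0.1", port), ChallengeHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv
```

Once this much is automated, issuance is effectively free, which is exactly why Let's Encrypt can give certificates away and why a small-fee competitor would mostly be selling redundancy.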
I think that Let's Encrypt gets a lot of undeserved criticism for being the only major ACME-compatible CA. Yes, it's not so good to have monopolies on authenticity, but Let's Encrypt was founded by Mozilla, the EFF and the University of Michigan and was endorsed by the FSF.
That's a pretty good set of organisations when considering the general status of the internet, and a set that doesn't overlap with the major players in consumer technology (Apple, Microsoft and Google).
I think this is less invasive, though. The Google proposal runs before content is loaded into the DOM, which means it can be used to programmatically detect and block code injection like ad blockers.
PATs are purely a server-side thing. They don't give this kind of control, and they don't perform a signature over the content.
PATs give exactly the same control. You could trivially require a PAT on the first page load, before the browser gets to receive any of the content. And header-based protocol can always be converted to a JS-driven protocol just by having the requests be issued from JS.
Content-binding is a necessity for the actual intended use case of these protocols (abuse prevention), but useless for the thing people are afraid of (DRM for the web).
Some of the takes about why attestation is bad seem purposely false because the author dislikes the feature. If attestation isn't triggered, then the prior behaviour happens (captchas etc.) - this is a progressive feature. And the point of the attestation isn't to prove you're using an approved device, it's to prove a human is actually present, for which a verified software stack is needed; otherwise the feature is useless.
The captcha and fingerprinting methods are higher effort compared to attestation, so I think there is an incentive for most sites to get rid of it in favor of attestation.
Does this progressive feature allow every visitor to a site to be uniquely identified with 100% certainty? Because that doesn't sound very progressive.
If that is the case, what stops people from setting up bot farms to gather millions of valid tokens per origin, and then passing them off to requesting users on unapproved devices? It seems like the same choke point still exists, in that captchas may be required to prove the token requestor is human.
And since you could also do it maliciously, in the sense of firing off requests you don't care about the tokens for, you could spoof your IP across the range of all residential IPs to try and force captcha rate limiting on everyone that requests an attestation.
Even if that's true (Apple is a bit ambiguous here, because they say passcode login is enough), this is still connected to a need to log in physically, which sets a much higher cost of acquisition for most. Plus, this isn't really relevant - the tech is still aimed at proving real human interaction, even if it's not perfect at that today.
Once this goes mainstream, someone will offer a service where a robot arm operates an iPhone for you and streams back the screen video. Like remote desktop into a VM, except that it's a real device which passes all of the attestation. And it'll be glorious for spammers to finally be counted as "more human" than actual humans (using Android) ...
ahmm, I did the robot arm already in a previous project where you can control all the fingers remotely over the internet; we just need to figure out the streaming part and we got ourselves a startup :)
But signing necessarily is happening on the user's device... what is to stop brave/etc from also signing their outgoing requests with the same key your local Chrome install is using? On a mobile device I can see how this would work but how would this ever work on (non-apple) PCs without exposing the key to anyone willing to poke around a bit?
I think the idea is, there is a chain of trust from a TPM (So you don't have access to the private key, ever) through the bootloader, OS kernel, Windows Update, and vendor-blessed web browser, to the server.
So Brave would fail when Windows says, "hm, your hash doesn't match any recent Edge version, so you don't get to issue a key signing request to the TPM."
Or it will allow the request but when it arrives at the server as "Windows, non-Edge browser" they'll hit you with the endless CAPTCHAs or just boot you out as a hacker.
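In other words, the OS-side gate described above is conceptually just an allowlist lookup before the TPM ever sees the request. A sketch, where the hashes are illustrative placeholders rather than real Edge builds:

```python
import hashlib

# Sketch of the OS gate: before forwarding a key-signing request to the TPM,
# the OS hashes the requesting binary and checks it against a vendor-blessed
# allowlist. Hash inputs below are stand-ins, not real browser binaries.
APPROVED_BROWSER_HASHES = {
    hashlib.sha256(b"edge-build-115.0").hexdigest(),
    hashlib.sha256(b"edge-build-116.0").hexdigest(),
}

def may_request_attestation(binary_bytes: bytes) -> bool:
    return hashlib.sha256(binary_bytes).hexdigest() in APPROVED_BROWSER_HASHES
```

The interesting part of the thread's debate is everything this sketch leaves out: who maintains the allowlist, and how the OS proves *its own* integrity so the check can't simply be patched out.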
Right but how does edge prove itself to the TPM, what's to stop [insert alt browser here] from performing the exact same actions [insert blessed browser here] performs when it interacts with the TPM. It could even emulate a legitimate browser internally for the sake of argument, but it seems like anything could just pretend to be a blessed browser. Sure, you can hash binaries, but you can just as easily mess with their memory space at runtime after the fact so to the TPM (or whatever system checks the hash) the binary checks out because all the modifications are side-loaded after the binary runs.
It seems to me like you can only guarantee no tampering in an actually locked down system, like modern mobile devices.
> but you can just as easily mess with their memory space at runtime after the fact
You can only do that because Windows lets you do that. That's something that can change.
> It seems to me like you can only guarantee no tampering in an actually locked down system, like modern mobile devices.
Yes, the whole point of remote attestation is to be able to prove to the other party that your device is running an approved and fully locked down OS+browser combo before it sends you any content.
It does this by putting the code that creates this guarantee in the only place that you can't (easily) change: in the silicon of your CPU.
The TPM gathers various data about the system, including if any user process is running with access permissions that could tamper with memory space. It trusts the OS and drivers to do this because the entire stack is cryptographically verified from boot onwards. If the environment is one where an app could be spoofed, this will be included in the attestation request and the attest will fail.
You might be able to get around it by finding a zero day in the Windows kernel, but as soon as Microsoft discovers and patches it their attest server will stop providing attestations for devices until they install the OS update and reboot to reestablish a trust chain.
The browser doesn't interface directly with any of the hardware, the operating system does. And the integrity of the operating system can be attested to by the hardware via a chain of trust all the way to the secure bootloader.
Yeah, but what's to stop me from spawning a hidden instance of Edge, sending keys etc. to it to get it to visit some page, and using either window sub-classing (to hack its memory space and read the request directly) or a local proxy server to steal the attestation it generates before terminating the request?
Likewise, what's to stop you from patching the operating system directly? (OK, secure boot.)
You could also just emulate an entire Windows OS + TPM and have the emulator do it, it sounds like.
Like any scenario where I'm allowed to run arbitrary code within the OS with administrator privileges sounds like you could escape this.
> You could also just emulate an entire windows OS + TPM and have the emulator do it it sounds like
Yes, but your emulated TPM is not on the approved list. To impersonate an approved TPM you would need to pull the keys from a real TPM which requires (probably very expensive) semiconductor lab tools and trashing the chip.
If you did trash the chip while managing to successfully pull the TPM keys, could you then use that key to sign requests in an unapproved VM, or on metal with a different root TPM?
Apple's attestation infrastructure works for macOS so being mobile isn't required.
What is required:
1. Code signing. The operating system must be able to link files together such that it knows they come from one version of one app, and the app must be code signed so there's a precise way to state "this token was issued to Microsoft Edge v123".
2. Debugger API protections. A program must be able to opt out of ptrace and similar APIs. macOS offers this via the "get-task-allow" entitlement.
3. Program file anti-tampering. The OS must stop one app fiddling with the files of another. macOS does this since Ventura (or rather, apps must have the "app management" permission to do so, and the OS can detect when it's been done).
4. Signing-aware IPC. The OS must allow two processes to connect to one another across privilege levels, such that each side is aware of the signing identity of the other.
5. An attestation service. The OS must offer an RPC server that, given challenge bytes, generates and signs a data structure containing the challenge along with the code signing identity, and enough information to link the attestation key to a secure root of trust.
6. The root of trust. Usually a secure processor that's integrated with the motherboard and firmware. It "measures" the boot process and can deterministically derive keys that are only accessible in certain configurations.
7. All the above must either be protected from administrator access, or elevating to admin/fiddling with any of the components must alter the perceived configuration of the system so it can be detected remotely. On Macs this is done via SIP which de-privileges root, and if you disable SIP so you can modify OS files, then the Secure Element won't give you the same attestation as a normal device.
The key to all of this is that you don't have to lock down the device to do this. Firstly, you can allow arbitrary changes as long as they get measured and honestly reported. Apple lets you disable SIP and then you can hack macOS to your heart's content, you just can't pretend to Apple that you didn't do it. Secondly, it's only a very small part of the OS that has to be measured. Basically the bits that enforce address space isolation and secret protection, so, the kernel, the boot process and a few userland IPC servers.
But for example if you install other apps, or customize your OS configuration in most ways, then that's just irrelevant for the purposes of identifying what app is running.
Now, you can build extra and more restrictive rules on top of that. For example once you establish that the TLS session key is owned by a protected app, and that the app is Chrome or Edge or Safari, you can then ask it to answer honestly whether there are certain extensions installed or whether the browser is being automated or whatever else there's a protocol for. But the core infrastructure doesn't know anything about that, its job is over once the core attestation of identity+protection is done.
Windows is (as usual) behind in this tech. The pieces are there but nothing really lines up and nothing is using it, for example, Windows has a notion of package identity and app tamperproofing but Edge doesn't use it. I don't know of any way to opt-out of code injection or debugger APIs on that platform either. But macOS has all the pieces and it hangs together.
Interestingly, so does Linux! There are configurations that can match what macOS does there, and which can also drive the TPM to remotely attest. What's missing is any organizational or community will to audit distributions and figure out which ones implement the criteria above. But if someone were to do that, you could issue TLS-style certificates to the OS that lets third parties reason about the identity of apps on the remote machine.
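The attestation service in step 5 can be sketched in a few lines. This toy uses an HMAC key as a stand-in for the hardware root of trust; a real implementation would use an asymmetric key held in the secure element, so verifiers never share the secret. All names here are illustrative.

```python
import hashlib
import hmac
import json

# Stand-in for the key derived by the secure root of trust (step 6).
DEVICE_SECRET = b"burned-in-at-the-factory"  # illustrative only

def attest(challenge: bytes, code_signing_identity: str) -> dict:
    # Bind the caller's challenge to the code-signing identity (step 1)
    # in one signed structure, as step 5 describes.
    payload = json.dumps(
        {"challenge": challenge.hex(), "identity": code_signing_identity},
        sort_keys=True,
    ).encode()
    sig = hmac.new(DEVICE_SECRET, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "signature": sig}

def verify(att: dict) -> bool:
    expected = hmac.new(DEVICE_SECRET, att["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["signature"])
```

The challenge prevents replay, and the identity field is what lets the server say "this token was issued to Microsoft Edge v123" - everything else in the list exists to make that identity claim trustworthy.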
> But signing necessarily is happening on the user's device...
No, there is signing from a third party server in the chain too. If iPhone A visits website B, then A must provide to B a token signed by Apple in order for it to be trusted.
It also depends on hardware tamper-protected keys that the user can't get to without destroying the device (or at least the keys) in the process.
One interesting (to me) thought is that while HNers are generally worried about what we can lose here (in the curated world) many non techies are likely to see such a bifurcation in car transport.
At some point soon, self-driving cars are going to get good enough that they can be used on many roads and in many cities - but I seriously doubt they will be good enough to be used in a mixed environment (pedestrians, human drivers, snow, ice, etc.). So there will be a push to have self-driving cars (convenient, taxi-style) on isolated roads - sort of a weird inversion of pedestrianisation. Maybe Barcelona-style superblocks with only 15 mph self-driving cars allowed? I don't know - but the impact will be that people who used to drive their petrol manual cars around town find parts of town locked off. And it's not anti-car, it's anti-freedom-car.
We already have that in the form of public transport. The reason people get cars is because they want the convenience of door to door transport. If self driving cars cannot do that then people aren’t going to bother with self driving cars.
> With Safari providing this, it can be used by some providers, but nobody can block or behave differently with unattested clients.
What mechanism prevents websites from blocking or behaving differently for unattested clients? The article doesn't make that clear.
Also: Apple's attestation implementation introduces an external real-time single-point-of-failure, but given that the failure mode is just "show a captcha", it doesn't seem too severe. Is it even possible to implement a broader attestation infrastructure without introducing a similar single point of failure? TLS PKI, for example, does not rely on an external "live" server; the private keys live on the origin.
> limiting access to features or entire sites based on whether the client is approved by a trusted issuer. In practice, that will mean Apple, Microsoft & Google.
Wait, does that mean they decide what websites I can visit? That doesn't sound dystopic at all.
No, it is the other way around. The website decides if your software stack is allowed to view them.
1. Windows too old? Blocked.
2. Running FreeBSD? Blocked.
3. Patched your version of Chrome? Nope.
4. Your Linux distribution isn't approved? Blocked.
5. You want to use some obscure Chromium derivative? Absolutely not.
6. Running a RISC-V machine? LOL get lost.
Basically the user-agent is no longer the agent of the user. Instead the website can select which hardware and software stack can be used to access the website.
The inevitable outcome is that a huge number of sites will only allow OSes from Microsoft, Apple and Google running official browsers by the same three megacorps. Everything else will be blocked.
It is true that there is a large software stack that needs to be "perfect" for this to work flawlessly. But look at iOS. It does appear that highly motivated corporations can keep a fairly secure stack of this size. iOS jailbreaks are few and far between. They do regularly occur, but not often enough for this to be a convenient solution.
Plus, it seems that attestation could easily fix the patching laziness of humans by refusing to work with versions of software that have known 0-days. So any published 0-day would likely ruin the exploit for the discoverer. Plebs like myself would therefore never have a hope of escape, and would have to exist in the lowsec web. Which I'm okay with, for the record. Just wanted verification of where I would have to live.
I have spent a lot of time working on integrating Private Access Tokens into my project, and I believe I understand how it works. I do not agree with the article's points on why this is bad. PATs are meant to reduce browsing friction, not increase it. Right now, if you try to google something from a spammy VPN node, you get either a captcha or are fully blocked. With PATs, your device can vouch that you are not a spammer, and the system will let you through without captchas or timeouts. That is all it does. If your device is not capable of signing a PAT, then it is supposed to just fall back to the default behavior.
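The fallback behaviour described here amounts to a simple three-way branch on the server. A sketch with hypothetical names - a valid PAT skips the captcha, everything else gets today's default treatment:

```python
# Sketch of the PAT fallback: attested clients skip friction, unattested
# clients get whatever the site did before (captchas for risky traffic).
# Names and reputation labels are hypothetical.
def gatekeeper(pat_valid: bool, ip_reputation: str) -> str:
    if pat_valid:
        return "serve"        # attested: no friction
    if ip_reputation == "spammy":
        return "captcha"      # today's default for risky traffic
    return "serve"            # unattested but low-risk: still served
```

The disagreement in this thread is about whether that last branch survives: nothing in the protocol stops a site from turning "captcha" into an outright block once most clients can present tokens.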
You're not thinking fourth dimensionally. Yes that's how it is now. But the majority of users aren't using PATs so if you want people using your website you have to provide a reasonable experience for unverified users. Once 99% of users are using PATs, you'd be stupid not to require them. Severely degrading (through captchas) or outright blocking unverified clients would be trivial. Hell, I'd wager firewall software will even show you these blockages as "blocked attack" or "blocked dangerous client" or something like that.
Even if we assume that captchas are the only thing this can be restricted to (which, if attestation is widely deployed, they won't be restricted to), why is it good to make people using locked-down Apple devices with logged in Apple IDs have fewer captchas? It's a protection racket.
A possible difference between Private Access Tokens and the Web Environment Integrity proposal is the idea of "holdback", which means that for some requests chosen at random it would fail to work, so any websites that use it would be forced to keep alternative fallback mechanisms.
Why bother, then? This is for things like captchas and credit card risk scores. It’s useful to be able to know that some users are low risk (not a bot, not being phished) and then to have additional verification for others.
It’s listed under “open questions” but I think it would go a long way towards preserving an open web.
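The holdback idea is simple to sketch: a fixed fraction of otherwise-valid attestations is deliberately withheld, so origins can never treat a missing attestation as proof of a bot. The 5% rate below is an assumption for illustration; the actual rate is one of the open questions.

```python
import random

HOLDBACK_RATE = 0.05  # assumed rate, for illustration only

# Sketch: even a fully attestable device randomly fails attestation at the
# holdback rate, forcing sites to keep a working fallback path.
def attestation_result(device_ok: bool, rng: random.Random) -> bool:
    if not device_ok:
        return False
    return rng.random() >= HOLDBACK_RATE  # randomly withhold success
```

The design consequence is the interesting part: because any user might be in the holdback group, degrading unattested users too harshly degrades some real, paying, attestable users too.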
The type of websites requiring advanced anti-abuse tactics currently make heavy use of fingerprinting techniques. Google's idea that fingerprinting will cease to exist if this is implemented is a joke.
If all efforts at coming up with a suitable replacement are thwarted, I think it's safe to say that fingerprinting will continue and increase. It's not like businesses are going to stop caring about bots and fraud. Fraud and anti-fraud are big businesses.
As long as the mechanism is an open standard that isn't controlled by corporations, AND users' browsers remain in control of enabling/disabling it, sure.
> When you put this together, no one entity can link client identity to website activity. And yet, this authorizes access to a website – all while eliminating human interactions.
What mechanism exists to prevent the attester from colluding with the issuer or origin to track users? Could a government subpoena these entities to track an entire user history down to the TPM key?
“Private Access Tokens are a powerful alternative that help you identify HTTP requests from legitimate devices and people without compromising their identity or personal information. We'll show you how your app and server can take advantage of this tool to add confidence to your online transactions and preserve privacy.”
—-
That doesn’t sound the same as what Google’s proposing. Am I wrong?
Really hard to have a normal discussion about this.
It's about not seeing captchas on iOS devices. There is a lot of thought that went into the privacy of this solution, just read up on it.
Yet everyone in the comments I have read so far discusses how bad it would be if a system personally identified them while denying access to the web for users who refuse to be personally identified.
It's about *captchas* not denying access, and there is no personal identification. Sure you can say "I think it's dangerous and the tech could evolve in the future into something dystopian", but don't immediately start discussing this dystopian solution as if it was the actual proposal, that is simply disingenuous and confusing.
A lot of the push for these things comes from fraud, and a lot of fraud comes from badly designed payment systems that allow more fraud in exchange for convenience. How could we reduce this fraud cost by designing a payment system that isn't so fraud-prone, reducing the economic incentive for these things to exist?
> A lot of the push for these things come from fraud
That's not where the push comes from; it's a pretense. They could just as easily want to force your machine to attest that there's no known child pornography on it. Or even that there's no text or images that could be seen as information about birth control.
How about attesting that an LLM-controlled state-issued daemon is always running on your box, and monitoring everything you do for suspicious activity?
In practice we depend on attestation via federated login and a series of captchas for some indicator of humanness for services that are freely available to avoid significant abuse and attacks.
With captchas no longer being practical, is there some need for a federated, attested measure of a network actor?
Maybe there is a use for blockchain after all :P - to anonymously bucket human actions like buying something, or a device registration chain sourced from the manufacturer or cell phone service provider, in such a way that it has a short TTL and has to be regularly revalidated.
It need not be perfect but we are entering new territory with LLMs that can easily represent human agency short of some web of trust or signed identity mapped back to something.
Can somebody explain how this is any different to only running in a certain browser, with a certain user agent, or DRM, or with certain authentication etc?
Surely it's up to companies to choose who they want to segregate, it's not a democracy?
“Lockdown Mode is an optional, extreme protection that’s designed for the very few individuals who, because of who they are or what they do, might be personally targeted by some of the most sophisticated digital threats. Most people are never targeted by attacks of this nature.”
However evil they are, privacy/security appears to be a case of putting their money where their mouth is. Interesting.
This proposal amounts to attestation on the web, limiting access to features or entire sites based on whether the client is approved by a trusted issuer. In practice, that will mean Apple, Microsoft & Google.
I'm afraid that it doesn't mean Apple, Microsoft & Google but the ruling party of governments. In other words, the mix of propaganda and censorship controlled by the party. Truly dystopian.
That is not possible, because the User Agent (UA) does not return a simple boolean value to the endpoint that requests attestation. That endpoint requests a cryptographic proof that originates at a shared root of trust between that endpoint and the device you are using (which has an embedded secret that its user cannot extract, and which was blessed by this shared root of trust with a cryptographic signature at the device's factory). Being able to generate that proof attests that your device, its operating system, the software it has installed, and the UA have all been deemed acceptable (i.e., cryptographically signed by something/someone that the root of trust has extended its trust to) by the attestation arbiter - so probably either Apple, Google, or Microsoft.
I have a genuine question: why is this worse than normal captchas which the human must interact with directly? Or is any mechanism that attempts to prevent certain methods of “unattested” web access (e.g. curl or screen readers) bad for the same reasons?
And if the answer to the second question is “yes,” it makes me wonder why we’re even okay with (non-personal) content in the web being login-walled or pay-walled.
Because as long as this mechanism isn't automated and has to be done manually, it would be impossible to have aggressive captchas, for example, because you would lose all your users. But if most users are now automated, expect a captcha or similar every time you load a page or scroll while using your privacy-friendly browser, rendering the whole process unbearable.
I certainly see how that would be frustrating, but this is also true of login walls or paywalls even without considering any automation. As an example, I think it's bizarre that the Hacker News guidelines encourage links which are paywalled.
>I think it's bizarre that the Hacker News guidelines encourage links which are paywalled.
100% agree!! You can't even complain about it either! That's why I always upvote whoever posts the archive link for such articles (even though I personally don't like such systems).
Which websites would be interested in this? For online shopping, not so much, if it decreases sales, it is bad for them. Social media? Perhaps, but I can go without that if it comes to that. What I fear most is access to banks, tax office and similar. I need to access that and I currently use Linux. I am not keen on switching.
Apple uses it for its iCloud Private Relay service. The blind token is used so that Cloudflare can verify that a given device pays for iCloud Private Relay without revealing their identity.
Attestation is when such a blind token is proving the integrity of the software running on the device, not proving arbitrary properties. Privacy Pass could actually enable a fast, semi-decentralized system of anonymizing proxies.
If Apple exposed the “is System Integrity Protection enabled” bit to the web, then that amounts to attestation to me. But yes, Apple can do this whenever it wants, and companies want Apple to do it, and it’s scary. They’ve already done this for Apple Pay, Widevine and HDCP.
> At WWDC 2022, Apple announced Private Attestation Tokens. Today, we’re announcing that Cloudflare Access will support verifying a Private Attestation token. This means that security teams that rely on Cloudflare Access can verify a user’s Apple device before they access a sensitive application — no additional software required.
> Private Attestation Tokens do not require any additional software to be installed on the user’s device. This is because the “attestation” of device health and validity is attested directly by the device operating system’s manufacturer — in this case, Apple.
> This means that a security team can use Cloudflare Access and Private Attestation Tokens to verify if a user is accessing from a “healthy” Apple device before allowing access to a sensitive corporate application. Some checks as part of the attestation include:
What prevents the client from receiving a valid token and then passing it off to another entity for that entity to use in their request? Could you have token farms that just generate tokens and provide them to "unhealthy" devices?
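The usual mitigation in these designs is to make each token origin-bound, short-lived, and single-use, so a farm's stockpile is worth exactly one request per token at one site. A toy sketch of that redemption logic (the issuer/redeemer split, field names, and TTL are all illustrative assumptions):

```python
import secrets
import time

# Sketch: tokens are bound to one origin, expire quickly, and can be
# redeemed once. A farm can still stockpile them, but each token buys a
# single request at a single site before it's dead.
ISSUED: dict[str, tuple[str, float]] = {}   # token -> (origin, expiry time)
REDEEMED: set[str] = set()

def issue(origin: str, ttl: float = 60.0) -> str:
    tok = secrets.token_hex(16)
    ISSUED[tok] = (origin, time.time() + ttl)
    return tok

def redeem(tok: str, origin: str) -> bool:
    meta = ISSUED.get(tok)
    if meta is None or tok in REDEEMED:
        return False                         # unknown or already spent
    bound_origin, expiry = meta
    if origin != bound_origin or time.time() > expiry:
        return False                         # wrong site or expired
    REDEEMED.add(tok)
    return True
```

That doesn't eliminate the farm, which is the commenter's point: it just caps the resale value per token, and the issuer still needs some other signal (captcha, rate limit) to slow down bulk issuance in the first place.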
> Attestation blocks users' control of their own devices, by design. A key goal is that users using modified software should not be attested as legitimate. I would like to be able to browse the web on my rooted Android phone please. There's no way any fully user-modifiable OS or hardware can ever be attested in the way these proposals intend.
Oh so that shit is why I regularly run into Cloudflare issues on my rooted Android tablet. Not enough that I have to fight cat and mouse with Play Store, Netflix, Google Pay and my banking apps, but now half the Internet. Seems like they ramped up the bullshit silently at the same time for everyone on mobile but not on an expensive iOS device.
I used to respect Cloudflare. Not a single second longer. Fuck you all for being complicit.
It's flat-out bad business to exclude customers, period. Figure out a way to keep your customers from fighting in line, sure, but never shorten the line unless it is absolutely necessary (i.e., criminal activity).
> This feature is largely bad for the web and the industry generally, like all attestation (see below).
> That said, it's not as dangerous as the Google proposal, simply because Safari isn't the dominant browser. Right now, Safari has around 20% market share in browsers (25% on mobile, and 15% on desktop), while Chrome is comfortably above 60% everywhere, with Chromium more generally (Brave, Edge, Opera, Samsung Internet, etc) about 10% above that.
> With Safari providing this, it can be used by some providers, but nobody can block or behave differently with unattested clients. Similarly, Safari can't usefully use this to tighten the screws on users - while they could refuse to attest old OS versions or browsers, it wouldn't make a significant impact on users (they might see statistically more CAPTCHAs, but little else).
> Chrome's usage is a larger concern. With 70+% of web clients using Chromium, this would become a major part of the web very quickly. With both Web Environment Integrity & Private Access Tokens, 90% of web clients would potentially be attested, and the "oh, you're not attested, let's treat you suspiciously" pressure could ramp up quickly.
----
It's bad that Safari is shipping attestation, but a big reason why Safari often gets a pass on negative features that Google doesn't get a pass on[0] is because Chrome has a 60% market share, many sites are tested only in Chrome, and Chrome's market share is only likely to grow once we finally get Apple to allow alternate browsers on iOS. In contrast, Safari's market share is pretty much tied to iOS and Mac, and they don't even have a monopoly on Mac.
Like it or not, it matters more when Chrome breaks the Internet.
I'm not saying we should ignore Safari (we definitely shouldn't), but if that "double standard" makes anyone upset, perhaps that's a good reason to break Google up and introduce more browser diversity. If Chrome didn't have a 60% marketshare over the entire web, it would be possible to extend more grace to the people proposing experimental features within Chrome.
The extra scrutiny and tougher standards, and even the lower leeway to make mistakes are partially consequences of being the dominant browser in the marketplace. I'm sorry, but the standards are higher when you're in a position where it's possible for you to break everything.
----
[0]: see Manifest V3, which is also based heavily on Safari's own adblocking restrictions, which are similarly harmful to adblockers but tend to get a lot less attention.
So Apple may provide a way to prevent their users from seeing captchas, but their competition is not allowed to. You see why this is a morally bankrupt position to hold, right?
"Tired of seeing all those captchas? Get an iPhone or a MacBook."
It's bad for Apple to add attestation, but it's not a threat to the Open web when they do. It is a threat to the Open web when Chrome does.
If that bothers you, support browser competition and consider breaking up Google. I'm sorry, but it is a fact that it is more dangerous for Chrome to take harmful web positions than it is for Safari to take harmful web positions. That's just the consequence of having a browser monopoly, and Google has to live with that consequence.
Morality has nothing to do with it. I don't support attestation on Safari, but it matters more when Google does it. It's not "fair" because the market isn't fair, there is a dominant player and their actions matter more. Again, if that upsets you, get upset at the unreasonable power dynamics that Chrome has over the Internet. They are the reason for the extra scrutiny.
When Apple is the only company allowed to ship browser features with such high and user visible impact as eliminating captchas, it will directly contribute to them increasing the market share of the devices people use to access the web.
Once the majority of users are on Apple's platforms, the open web doesn't matter. It is whatever Apple wants it to be, which is most likely "dead".
The rules have to be the same for everyone, and the discussions around the WEI on HN have made it clear they aren't. The other threads are filled with massive rants about how evil Google is and how amoral anyone working on this project must be.
But then this thread on how Apple has been doing exactly the same thing has people for the first time engaging with the technological parts, and suddenly the critiques have turned to full-on excuses. "Oh, it's just a little bit bad when Apple does it."
> When Apple is the only company allowed to ship browser features with such high and user visible impact as eliminating captchas, it will directly contribute to them increasing the market share of the devices people use to access the web.
At which point we'll start criticizing Safari more frequently than Chrome. But I don't think you need to worry about that, I can't even run Safari on Linux or Windows in the first place. I already don't test any of my web projects in Safari specifically because I can't, I don't own a device that I can test Safari on. So good luck getting devs to build Safari-only websites. I think it's misguided for us to worry so much about a theoretical future monopoly that we avoid correctly prioritizing efforts to combat a present monopoly.
Of course, incidentally, "we" (whatever that means on HN) do criticize Safari all the time. "Safari is the new IE" didn't come out of nowhere. Another reason this issue in particular matters much less for Safari is that those criticisms seem to have worked: I suspect that at some point in the next 5 to 10 years, Apple will be required by regulators to open up iOS to support multiple browser engines.
And that will be great for certain parts of the Open web, I'm hoping that if iOS opens up its browser restrictions PWAs might get a lot better. But it's also very dangerous because it means Chrome's monopoly will grow even more, and it makes it even more pressing that we deal with specifically Chrome's dominance on the web. So there are plenty of areas where iOS presents a larger threat to user autonomy than Google/Android does (app store policies, user lock-in, sideloading, etc), and I have no shame about subjecting Apple to stricter standards than Google in those areas. This isn't one of those areas.
> and suddenly the critiques have turned to full-on excuses
I'm not offering a single excuse for Safari, it's bad that Safari implemented attestation. I am offering an accurate assessment of the threats that Safari and Chrome currently pose to the Open web.
My standard rule -- consistently applied to everyone in every situation -- is "don't break the Open web", and even with both browsers implementing attestation, Google's implementation is breaking the Open web more than Safari is right now.
> The rules have to be the same for everyone, and the discussions around the WEI on HN have made it clear they aren't.
I don't get to set the rules, or else nobody would be doing attestation anywhere including on native app stores. But it is naive to look at a browser with 20% marketshare doing something harmful and to say, "well, this deserves exactly the same amount of attention as Google." It doesn't. I criticized Brave inserting its ads into webpages, but I'm not going to pretend that my reaction to Chrome doing the same thing wouldn't be a lot harsher, because Brave is not the dominant browser on the web. It's not a double standard to take context into account when prioritizing where coordinated community efforts should go.
In this case, "fairness" for outrage over browser features effectively means ignoring the largest threat in the room to the web and pretending that we're not in a market with a dominant browser. But we are.
> In this case, "fairness" for outrage over browser features effectively means ignoring the largest threat in the room to the web and pretending that we're not in a market with a dominant browser.
Huh, funny that being equally outraged about both isn't an option for you... It would pretty obviously have been useful for achieving the claimed goal of suppressing this feature.
If there had been this kind of backlash (rather than universally positive press) when Apple did it more than a year ago, maybe it would have sent a message.
But since Apple gets a free pass from people who think like you, the outrage didn't happen, and this is now de facto a reasonable feature for a browser to implement.
> Huh, funny that being equally outraged about both isn't an option for you...
Yes, funny that my response is directly proportional to the actual threat posed by both actions, rather than pretending that they're an equivalent risk.
> If there had been this kind of backlash (rather than universally positive press) when Apple did it more than a year ago, maybe it would have sent a message.
There is a limited amount of bandwidth the general public can devote to being outraged. Apple does a lot of stuff that's worth being outraged about. Our job as activists is to draw attention to the biggest threats where public outrage will do the most good. That means prioritizing issues based on the markets in which those issues appear and based on which actors in those markets are most likely to do harm. Of course that doesn't mean only paying attention to Google -- like I said above if this was a conversation about mobile platforms and user choice more generally, I would be speaking much more critically about Apple's lock-in and app-store restrictions than I would be when talking about Google's similar violations. But it does mean paying attention to situations where whataboutism serves to dilute public attention rather than reinforce it or expand awareness.
A reminder that at no point during this entire conversation have I given Apple a free pass; at no point during this entire conversation have I even said a single positive word about Apple. Literally every single comment on Apple I've made during this conversation has been negative.
But no, I'm not equally outraged about both, because they're simply not equivalent threats to the web, and you seem determined to ignore the practical realities of the current browser market in defense of some kind of "fairness", as if amoral structurally anti-consumer companies were somehow humans on trial who deserve equal rights. They're not. You don't have to be fair to them, Google and Apple are not people, they are large corporate systems.
> and this is now de facto a reasonable feature for a browser to implement.
The fact that there is near-universal outrage over Google's position from basically every non-Google source covering it proves that this hasn't turned out the way you're worried. I've seen no one (other than people who would already be defending Google anyway) point at Apple's implementation as if it's a justification for Google's behavior. The statements from press have also been strong and straightforward and haven't tried to excuse the policy with Apple's attestation efforts.
It turns out that focusing outrage intelligently on areas where it will do the most good is actually better practical strategy for consumer rights advocacy than being equally outraged at everything in every situation and pretending that the market is something that it's not.
It is weird to me that condemning Apple isn't enough for you. For some reason it's not enough for you that I say that Apple's actions are bad, I need to be exactly as vehement about drawing attention to Apple as I would be for any other company and I need to expend the exact same resources and need to be pushing for the exact same press coverage. For some reason I need to pretend that Google isn't a unique threat to the web. But... it is.
It is weird to me that you seem to think that user advocacy is about fairness rather than about achieving a concrete goal and protecting user rights. If you are an activist you are not under an obligation to fight "fair" for user rights, you don't have an obligation to be fair or equitable to companies. You have an obligation to be honest and moral and upfront with users (and I believe that I am, I'm not lying and saying that Apple's implementation isn't bad, I've condemned it in every single comment I've posted in this conversation). But where companies themselves are concerned your only obligation is to make sure that user rights win.
I really don't understand how your comments help with that fight. If anything, focusing attention on Google is a valuable tool for generally educating normal people (and tech-communities) about the dangers of attestation. I think it is unlikely you could have mustered nearly as much public criticism of Apple's actions before news of Google's spec proposal broke.
Holding one company's feet to the fire with more force than you hold another company's is perfectly acceptable, and in fact in many situations is strategically advisable (and I think the current reaction to Google demonstrates that very well). Pretending that every single company is the exact same threat is way more dishonest than saying upfront "both of these examples are bad, but this particular example is dangerous." And Chrome's attestation is more dangerous than Safari's. If you think my behavior is weird, I would counter by saying it's also very strange to me that you seem determined not to acknowledge that fact.
> Private Access Tokens are powerful tools that prove when HTTP requests are coming from legitimate devices without disclosing someone's identity
The value add is pretty clear and good, but the downsides are probably bigger than the value add, so personally I wouldn't say the compromise is worth it.
> Allow Apple devices to browse the web with fewer captcha interruptions
In particular, while using a VPN or Tor. So in one sense it's even a pro-privacy move, insofar as it allows servers to distinguish a human user on a legit device using a VPN from a bot using a VPN (so the server can present a captcha or a denial to the latter but not the former), making it less onerous for average users to use a VPN.
I hate how websites hinder VPN users. Google Search often shows me captchas on Mullvad, and sometimes blocks my request entirely (no captcha). Screw them.
I don't see the issue here at all. Apple added this because otherwise the web would be completely broken in iCloud Private Relay due to the constant captchas and hardwalls. Google wants to add it to kill adblockers entirely. It's not even the same ballpark.
It is bad because it is yet another way people exercising their freedom have a worse experience than people who accept the warm embrace of daddy Apple.
The fact that a device is locked down should confer zero benefits on the open web as a matter of uncompromising principle.
It's only a pro-privacy move if you completely trust Apple (the mandatory 3rd party approver) both now and in the future to be pro-privacy. The minute their growth slows and the shareholders demand changes, the pro-privacy can go out the window, and they've got the advertisers wet dream ready and waiting, reserved exclusively for them. Corporations are profit-seeking entities by design, and leadership who doesn't act in the best interest of the shareholders (i.e. profit seekers) will be removed and replaced by leadership who does.
I'm gonna remove even HTTPS from my server. Gotta go HTTP in protest against this nonsense.
I'm already pissed off that Firefox warns people that my site is unsafe for them when I don't even stick a cookie on them and yet provide useful Free software.
They're right though. The browser should have had a mode that ensures integrity without privacy (it's trivial; use PKI to sign the content, send the signature as a header, client validates the signature, and you have integrity over plaintext; or just a form of HSTS, if you don't need PKI, because if HSTS is good enough for certs, it's good enough for anything ELSE, right?). There could be protocol extensions that support clients only loading dynamic or identifying content for specific requests. All sorts of features could allow basic plaintext connections with public content to be as secure as HTTPS.
But the browser oligarchy doesn't want to allow that. They want to force everything to be private, which has caused tons of issues on the internet. And actually, it has strengthened the oligarchy, by forcing us to use private services (such as DNS-over-HTTPS, VPNs, CDNs, etc) which locks more of the internet into the control of a tiny handful of super powerful companies. To the point where if one of them decides to change something, it ripples across the entire internet, and everyone is forced to adopt it or break everything.
Crazier still... HTTPS isn't even that secure! Every year there are examples of valid certs being created for MITM. There are multiple vulns that work at any time. Mitigations that are optional and only a tiny fraction of the web use. And cert expiration, HSTS, and other issues still take down sites accidentally. But they force everyone to use it anyway!
> it's trivial; use PKI to sign the content, send the signature as a header, client validates the signature, and you have integrity over plaintext;
Yes, that's what HTTPS does. I don't know why you'd want to just remove the encryption part.
If you personally want plaintext locally and to cache or whatever, set up a SOCKS proxy you *consent* to. That's the core essence here, consent. Most people don't consent to their ISP collecting analytics or injecting ads, this is why we can't even entertain the idea of leaving things plaintext - the web is too hostile.
> They want to force everything to be private, which has caused tons of issues on the internet.
People also want their things to be private. Where did you get the opinion that it's not something people want.
> Crazier still... HTTPS isn't even that secure! Every year there are examples of valid certs being created for MITM.
If that's crazy then the alternatives are absolutely inane.
> There are multiple vulns that work at any time. Mitigations that are optional and only a tiny fraction of the web use.
Elaborate please.
> And cert expiration, HSTS, and other issues still take down sites accidentally.
Many things (mis)used can cause downtime. That doesn't make it inherently bad. There are just tradeoffs.
> But they force everyone to use it anyway!
You are rather free to not use HTTPS, but browser vendors are really free to warn against such sites for very good reasons.
> I don't know why you'd want to just remove the encryption part.
Because encrypting every connection has caused problems.
1. You can't cache anymore. 90% of the web depends on cached content. Always has. We used to use tiers of web caches, to speed up the web, make it more resilient, reduce bandwidth requirements, etc. But encryption everywhere makes that nearly impossible. CDNs have now become the web's cache, which besides the fact that they now control more of the internet, means caching at local or intermediate networks basically isn't possible now so we lose a lot of network performance, reliability, redundancy. This matters more for users in poorer countries, remote areas, natural disasters, war zones, etc, but it affects rich pampered western users too, because ISPs have a harder time (and more expense) dealing with all the traffic.
2. Governments and companies want to inspect traffic. Yes, I get that you don't want them to. But guess what? They do not care what you think. They will force it to happen one way or another, whether it's subverting internet standards, passing laws to defeat encryption or install backdoors, secretly compromising certificate authorities, hacking into the networks of large service providers, or just straight up requiring you to install a custom CA cert (what all companies do now). All of these things, besides being really bad for our civil rights, cause technical issues that are hard to solve and waste time. Before encryption was mandatory, governments and companies were fine with passively inspecting traffic. But now they have no choice but to go full-on MITM, which now gives them the ability to inject as well as inspect, which is even worse. Again: doesn't matter if you don't want them to inspect your traffic, they are going to do it no matter what, for reasons. You may not think they're valid reasons, but the reasons are there and aren't going away, so neither is this arms-race between the people who have to inspect and the people making inspection impossible.
3. Encryption is being used as a planned obsolescence lever. Older machines and software no longer connect to web servers because of course everything now requires encryption, and the old encryption schemes inevitably become insecure and must be replaced. So now we will be even more locked in to a world that constantly requires purchasing more goods and services to do what we could have done with something we purchased 20 years ago. Creates unnecessary waste, consumerism, expense, and just an annoyance that we have to be constantly upgrading rather than using something old and stable and compatible.
4. Obviously, encryption is slower and more complicated than plaintext, increases the complexity of software and the number of bugs, and requires more powerful chips / more memory to do basic operations over a network (ex. embedded apps), but whatever.
> You are rather free to not use HTTPS, but browser vendors are really free to warn against such sites for very good reasons.
First, no, increasingly HTTP is being blocked or unsupported. But secondly, this is like saying browser vendors are really free to do anything they want, including... like I mentioned... putting in integrity without privacy. But they are also "free" not to do that, leading to all the problems I mention and more. So they are "free" to fuck us over, basically.
You can, if the end-user client consents to it. Caching is also immensely difficult to get right; mistakes cause subtle and annoying issues. Even better, how about those ISPs invest in their infrastructure so it doesn't fall over (if that's actually an issue) at the microscopic (by modern standards) bandwidth that regular web browsing requires.
> 2. Governments and companies want to inspect traffic. Yes, I get that you don't want them to. But guess what? They do not care what you think. They will force it to happen one way or another, whether it's subverting internet standards, passing laws to defeat encryption or install backdoors, secretly compromising certificate authorities, hacking into the networks of large service providers, or just straight up requiring you to install a custom CA cert (what all companies do now).
So if they take such illegal actions, why make it easier for them? Sounds very defeatist.
> 3. Encryption is being used as a planned obsolescence lever.
Choose better software. A TLSv1.3 stack runs even on microcontrollers with ease.
> 4. Obviously, encryption is slower and more complicated than plaintext
It's actually much more straightforward than what's being protected by it. If anything, attack surface is *immensely* reduced to just the rigorously tested TLS libraries instead of all the HTTP, JS or multimedia code paths.
IF you completely ignore the actual problems I listed and invent a different problem to solve and pretend that you're correct?
> 3. Encryption is being used as a planned obsolescence lever.
> Choose better software.
First, it isn't better, it's just newer, and second, it doesn't matter whether or not you want better software. It matters whether a user or use case wants to continue to use an old device or software. If you start deciding for the user what they can or can't, should or shouldn't, do with their computer, now you've become an authoritarian/paternalist, which is objectively a bad thing to be.
> IF you completely ignore the actual problems I listed and invent a different problem to solve and pretend that you're correct?
Requiring consent of the device owner is not a problem, it's a goal.
> It matters whether a user or use case wants to continue to use an old device or software.
Not every use case has to matter for every site operator. That's such an entitled thing to expect it's absurd.
> If you start deciding for the user what they can or can't, should or shouldn't, do with their computer, now you've become an authoritarian/paternalist
No, it's not authoritarian or paternalist. You're still free to visit those sites that wish to support your use-case. It would be authoritarian if you'd force everyone to support some old shit for all eternity for no good reason.
> The browser should have had a mode that ensures integrity without privacy (it's trivial; use PKI to sign the content, send the signature as a header, client validates the signature, and you have integrity over plaintext; or just a form of HSTS, if you don't need PKI, because if HSTS is good enough for certs, it's good enough for anything ELSE, right?).
Can you help me understand this please? Without a trusted CA, anyone can mitm by generating their own public/private keys for the user to pretend to be the destination server. They can then sit in the middle and view/alter traffic as it's passed back and forth between the true destination.
1. "use PKI". That's Public Key Infrastructure, what you know as the CA (Certificate Authority) part of TLS. Basically, keep everything about CAs that you know now, but rather than using TLS for the connection, use plain HTTP. The web server would read the content being sent to the client, cryptographically hash it, sign the hash using the server's certificate, and put this signature in an HTTP header. When the client receives the plaintext, it would look for the header; without it, it can't validate integrity, and could put up a red bar, error page, whatever. By reading the header, validating the signature on the hash, and comparing the content to the hash, it can confirm 1) the content came from the server, and 2) the content is exactly what the server says it's supposed to be. A MITM can't succeed unless the attacker can create a valid certificate to hash and sign a modified payload with. Integrity verified, content stays plaintext.
2. "use HSTS". HSTS is a crappy hack that browsers use to say "ok, when you first connect to a HTTPS site, if the site tells you this domain should remain HTTPS, then only use HTTPS to connect to it, until a timeout expires". It's similar to SSH's asking you to confirm a host key on first connection. If this dumb hack (which can be defeated if you MITM the first time they connect, or when it expires) is good enough for the web's security with TLS, it's good enough to, say, cache a host key or certificate, if you didn't want to use PKI above. Again, we're just talking about validating the integrity of data, not a full-blown private secure TLS connection, so we don't need the best security in the world (if we did we wouldn't be using HSTS...)
I am actually rooting for the web to die, so I say bring on all the competition-killing features Google can muster. It's only in the face of an extreme and impossible choice that people will finally wake up. Giving up the entire user-facing compute ecosystem to a hypertext markup viewer has held back technological advancement for nearly two decades. Computer chips are now 4 nanometers and solid state storage is reaching the speed of RAM. But we're still churning out platform content line-by-line, by hand, like digital punch cards, because apparently it's technologically impossible to invent the equivalent of PowerPoint for web pages, or, god fucking forbid, not using HTML, CSS, and JS to create customer-facing applications.
Yes, the web is a great success story. But it's also backwards and ancient, and has been limping along by stuffing an entire operating system into a program made for viewing hypermedia. Are we going to wait 20 years to advance past these limitations? 40 years? 100 years? At what point will people finally fucking say "hey, maybe let's not kill ourselves jumping through hoops just to show pictures of cats and tax prep programs?"
I would give all my karma for the rest of my life if it could convince the industry to make it easier for average humans to create things faster, easier, and better, without writing code.
Do graphics designers have to write code to create a poster for a concert? Do executives have to write code to create a powerpoint slide? Do financial analysts have to write code to balance taxes in excel? Do students have to write code to compose rich-text formatted documents? Do kids have to write code to create entire cities in minecraft? No, no, no, no, no. But if you want to create a personal home page, hoooo boy, get ready to learn 3 SaaS services, 5 languages, 7 frameworks, 14 tools, and 28 paradigms.
When I was 16 and found Visual Basic, I created a networked, graphical application for sharing homework in about a week. Not knowing anything about programming. That is what we should be doing to make applications. Not fucking around with god damn HTML, CSS, JS, Flask, Squashify, Zentillion2, Uberstuber, Flimflam Candygram 4.0, and more. Just give me Visual Basic that will compile for a phone, tablet, and a desktop. We could get so much more done with much fewer people, time wasted, complexity, cost. We wouldn't be beholden to "the web". Computers could actually grow to do more than is allowed by the "browser ecosystem", if we didn't have to use browsers to make and run apps.