Absolute worst spec I've ever seen. Google needs to be loaded into a cannon and fired into the sun.
> How does this affect browser modifications and extensions?
> Web Environment Integrity attests the legitimacy of the underlying hardware and software stack, it does not restrict the indicated application’s functionality: E.g. if the browser allows extensions, the user may use extensions; if a browser is modified, the modified browser can still request Web Environment Integrity attestation.
Then what's the point? I can make a modified bot browser that commits ad fraud as long as I don't use a rooted Android phone?
I don't believe they're being honest with how this will be used. We need to legally regulate remote attestation.
> As new browsers are introduced, they would need to demonstrate to attesters (a relatively small group) that they pass the bar, but they wouldn't need to convince all the websites in the world.
No one can control the web unless every personal computing device on earth is closed source down to the hardware. Digital technology is a double-edged sword. I agree that Google would definitely want complete control, but they just can't do that. Rather, we'll most likely see deglobalization of the web, with trust shifting from FAANG to states.
> I can make a modified bot browser that commits ad fraud as long as I don't use a rooted Android phone?
Yeah, this just incentivizes spammers to copy the parts of Chromium that do the attestation (or whatever browser has source available), and use that to pretend they're Chromium. There will always be workarounds. This seems to kill innovation and allow spammers to flourish.
I suppose I can understand an argument that they want to prevent scraping, but this is absolutely not going to stop that.
Trusted computing is all about ensuring that your machine is trusted (by others) to run payloads that you can't observe or interact with. Sad! I see why the free software people call it treacherous computing
TC is value-neutral, so the FSF slurs don't make sense. Consider what happens when the machine in question is a cloud VM: then you can run workloads on a rented machine without the risk of the cloud vendor spying on or tampering with your server. Likewise if the machine gets hacked. These are highly desirable properties for many people. For example, Signal uses TC so the mobile apps can verify the servers before doing contact-list intersection, keeping the contacts private from the Signal operators.
Another use case is multiparty computation. Three people wish to compare some values without a risk that anyone will see the combined data. TC can do this with tractable compute overhead, unlike purely cryptographic techniques.
Observe what this means for P2P applications. A major difficulty in building them is that peers can't trust each other, so you have to rely on complex and unintuitive algorithms (e.g. blockchains), duplication of work (e.g. SETI@Home), or benign dictators (e.g. Tor) to try to stop cheating. With TC, peers can attest to each other and form a network with known behavior, meaning devs can add features rather than spend all their time designing around complicated attacks.
These uses require that you have a computer you do trust, one that can audit the remote server before uploading data to it. But you can compile and/or run that verification program on your own laptop or smartphone; the verification process is easy (a rough sketch of the flow is below).
But exactly because TC is general, it doesn't distinguish based on who owns the machine. It doesn't see your PC as morally superior to a remote server; they're all just computers. So yes, in theory a remote server could demand you run something locally and then do a hardware remote attestation for it. In practice, though, this never happens anymore outside of games consoles (as far as I'm aware), because most consumer devices don't have the right hardware for it, and even if they did, you can't do much hardware interaction inside attested code.
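To make that "audit before upload" flow concrete, here's a minimal sketch of what the client-side check could look like. Everything here is hypothetical (the report format, field names, and keys are placeholders); a real implementation depends on the specific TEE and its quote format, but the shape is the same: verify the vendor's signature, compare the measurement against code you've audited, and only then hand over your data.

```typescript
// Hypothetical sketch of the "audit the server before uploading" flow.
// Field names and the report format are made up; real TEEs (SGX, SEV-SNP,
// TPM quotes, ...) each have their own structures and verification libraries.
import { createPublicKey, verify } from "node:crypto";

interface AttestationReport {
  measurement: string; // hash of the code the remote enclave claims to run
  payload: Buffer;     // e.g. the enclave's ephemeral public key
  signature: Buffer;   // signed by the hardware vendor's attestation key
}

// The vendor's published attestation root and the hash of the server build
// you audited yourself (both placeholders here).
const VENDOR_ROOT_PEM = "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----";
const EXPECTED_MEASUREMENT = "sha256-of-the-audited-build";

function serverLooksTrustworthy(report: AttestationReport): boolean {
  const vendorKey = createPublicKey(VENDOR_ROOT_PEM);
  const signedBlob = Buffer.concat([
    Buffer.from(report.measurement),
    report.payload,
  ]);

  // 1. The report is signed by genuine hardware (vendor signature verifies).
  const genuineHardware = verify("sha256", signedBlob, vendorKey, report.signature);

  // 2. That hardware claims to be running exactly the code you expected.
  const expectedCode = report.measurement === EXPECTED_MEASUREMENT;

  return genuineHardware && expectedCode;
}

// Only if serverLooksTrustworthy(...) passes would the client upload its
// contact list, secret shares, or other sensitive data to the attested server.
```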
Why should anyone trust a remote server providing a signed statement of authenticity when Intel[1], MSI[2], Lenovo[3], NVIDIA[4], Microsoft and others keep losing their keys? Even if they haven't lost their keys recently, technology companies don't have a great track record of producing foolproof hardware designs (e.g. the recent case of [5]), if foolproof was ever a reasonable expectation. For starters, it assumes that technology such as ptychographic X-ray computed tomography and focused ion beam machining won't become commonplace and commercially viable enough to readily break TPM attestation schemes. Or that wider use of TPM attestation won't attract more effort into breaking it, whereas in the current state of minimal adoption, few people care.
The issue client-side is that if a single vendor or TPM design is compromised, threat actors have ample motive, resources and ability to exploit the compromised hardware, whilst everyone else has few choices beyond dumping, at great expense, some more e-waste. And critically, you as a user are blocked by your acceptance of TPM attestation technology from discovering attacks and auditing your own system security, since you have ceded control of your own systems. Instead, your systems are controlled by a few technology companies with a proven terrible track record of fulfilling their alleged intent of keeping your systems and data secure. And why should they care, if it doesn't lead to higher profit at the end of the year?
Shameless plug for the article I wrote 1 year ago now, "Remote Attestation Is Coming Back," which warned that this was coming to the web and had quite a discussion about that idea:
I remember this article! And of course, nothing has improved. It's ironic that in 1992, software publishers proclaimed that piracy would be the end of the computer age in the classic "Don't Copy That Floppy!" campaign. To me, it seems like nearly the opposite thing is going to end the computer age: integrity checking. Most computers are toys that we lease from companies that are bigger and more influential than governments, and using them in ways those companies don't like is against the law.
Thank you for writing this. Remote attestation for consumer software is such a disturbing idea.
In the past the main adversarial pressure has been exploiting security vulnerabilities ("jailbreaking"), but the software industry is getting its act together on that front.
The reason we can still run Linux on our desktops and laptops today, is that Linux was already popular enough back when Secure Boot was specified, so that Microsoft could be convinced to allow Secure Boot to be disabled and/or user-specified keys to be enrolled (and also to sign the bootloader for Linux distributions which follow a specific set of criteria when Secure Boot is enabled). Had desktop Linux not been popular enough, Microsoft would have required all OEMs to not allow disabling Secure Boot or enrolling user-specified keys (as they later tried to do with ARM laptops).
In the present day, are alternative browsers popular enough that we can avoid the worst-case scenario? Do enough people compile these alternative browsers from source code (meaning each binary is slightly different) to make a difference?
The first use case they mention is restricting ad fraud (and, presumably, ad blocking):
> Users like visiting websites that are expensive to create and maintain, but they often want or need to do it without paying directly. These websites fund themselves with ads, but the advertisers can only afford to pay for humans to see the ads, rather than robots. This creates a need for human users to prove to websites that they're human, sometimes through tasks like challenges or logins.
So if this goes forward, websites will be able to call the Web Environment Integrity API to check that you are a proper ad-watching human before serving content.
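For what it's worth, the explainer sketches an API along roughly these lines. The sketch below loosely follows its example; the call name and token shape come from the draft proposal and could change, and the header name is purely my invention.

```typescript
// Rough sketch of how a site could gate content behind the proposed API.
// navigator.getEnvironmentIntegrity and token.encode() are taken from the
// draft explainer and may change; the "X-Env-Integrity" header is made up.
async function fetchArticle(path: string): Promise<Response> {
  // Ask the browser for an attestation token bound to this request.
  const attestation = await (navigator as any).getEnvironmentIntegrity(path);

  // Forward the token; the server checks the attester's signature and decides
  // whether you count as an "approved" environment, i.e. a real, ad-watching
  // human on an unmodified browser/OS stack, before serving the content.
  return fetch(path, {
    headers: { "X-Env-Integrity": attestation.encode() },
  });
}
```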
"Your contract with the network when you get the show is, you're going to watch the spots. Otherwise you couldn't get the show on an ad-supported basis. Anytime you skip a commercial or watch the button you're actually stealing programming."
Jamie Kellner's words still ring true today. When corporations make content available supported by advertisements, they are assuming a moral obligation on your part to see those advertisements. Violating that obligation is felony contempt of business model.
But why should I care about the contract when the providers violate it as well? When you provide your services in the country I reside in but refuse to follow our national laws, you have already violated the contract.
I live in Norway, and even "serious" advertisers show me alcohol and gambling advertisements. This is strictly forbidden by Norwegian law, yet I have seen multiple advertisements of this kind from Google, Facebook and Discovery. To be fair, Discovery has just recently agreed to follow the law for television broadcasts.
GDPR is also violated a lot, especially by advertising corporations. I have never consented to the vast amount of tracking that I'm subjected to when browsing the internet, even though I have that right.
It's not like they are obligated to provide services to my country either. If European laws are too strict, they can always leave instead of violating our rights.
Kellner did add "I guess there's a certain amount of tolerance for going to the bathroom." But if your bathroom breaks become too frequent, I guess you run the risk of "actually stealing programming".
Google marketing exec: "We need to lock down web browsers so we can make more money by showing ads."
"Ad blockers need to be prevented. The new WEIE APIs will ensure that ad blockers aren't running and that no DRM is being compromised."
"We also want to prevent ad fraud. With WEIE we can ensure that ad clicks are legit and that people are watching the ads we show. If we can't control the operating system like we can on Chromebooks and Android phones, then we need to control the web browser with cryptographic certainty."
I would love to know the personal motivations and moral feelings of those who work on features like this. Are they naive about how these features will be used? Do they not care? Do they not have a personal sense of responsibility for contributing to the end of open, free computing? It's been a while since I took a Big Tech paycheck, but I don't remember being this willing to go build nightmare tech when I was getting one.
> As new browsers are introduced, they would need to demonstrate to attesters (a relatively small group) that they pass the bar, but they wouldn't need to convince all the websites in the world.
It speaks for itself. Horrid.