W3C slaps down Google's proposal to treat multiple domains as same origin (theregister.com)
212 points by kiyanwang on April 9, 2021 | 126 comments



What a weird coincidence that this would really help let's say a company that used google.com as well as google.co.uk and doubleclick.com as their "first party" set. Good that W3C stopped it.


W3C doesn't really have any real power over what happens on the internet today. It's the browser vendors associated with WHATWG that call the shots now.


Given Chrome's market share, that is mostly Google.

The irony is that the people who fought IE are the same ones who helped Google reach its position, because of "developer tools" and "I don't like FF UI changes".

Now don't complain. How does it go again? Ah, Chrome is available as open source, so it isn't comparable to IE.


Chrome is obviously better than IE. Not just based on open source Chromium but generally easier to work with.

I use Firefox because the addons are superior but even if we traded a downright terrible browser dominating the web for a somewhat crappy browser dominating the web, it was still a good move.


Chromium is open source, not Google Chrome.

And much of the documentation about Chromium and V8 is not public.


> And much of the documentation about Chromium and V8 is not public.

Nearly all the documentation is public... Private stuff is mostly accidentally created google docs where the engineer has selected "anyone within google.com with the link" instead of "anyone with the link". Anytime I request one of those documents be opened up, it has been done within a matter of hours.


Former V8er. We moved the vast majority of V8 documentation to public sites. What is non-public is mostly design docs, proposals, strategy, experiments, etc, i.e. the inner workings of the team mechanics. The technical details of V8 are not secret in any way. They may be radically complex, but not secret.


Keeping proposals and design documents private is essentially the same as making a project "source available". It prevents people from participating in the extension of functionality and limits them to being bug fixers.

What Microsoft is doing with .NET is true open source, where all proposals are discussed in public, with volunteers improving them and suggesting new ones.


> Keeping proposals and design documents private is essentially the same as making a project "source available".

Nah. There are plenty of contributors to V8 that are not part of Google. IBM and MIPS and ARM all contributed significantly to specific machine ports, and we had no trouble keeping them abreast of changes and plans. There are several people who have contributed from Igalia as well. And that's just the people I can think of.

It is harder to contribute to V8 than other open source projects. You have to accept a contributor agreement and use the Chromium code review tools. V8 is a big codebase and slow to build, but it's nothing like what you say.


Correct, but that doesn't change the arguments that were being pushed around as a defence of why Chrome isn't IE.


Indeed, and an advertising company also happens to be the dominant browser vendor.

We're quickly approaching the third "E" in "embrace, extend, extinguish."


Yep, Google had already gone ahead with this anyway.

> Google has already implemented both First Party Sets and SameParty cookies in Chrome 89, the current version, where they are included as an "origin trial" to "allow developers to try out new features and give feedback."
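
For the curious, the origin trial pairs the set concept with a new cookie attribute. Roughly (a sketch with illustrative values, per the explainer):

    Set-Cookie: session=abc123; Secure; SameSite=Lax; SameParty

Once a set is declared, a cookie marked SameParty would also be sent on requests between member domains of the same set, where it would otherwise be withheld as third-party.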


Well, I mean, if Google shipped something, in the browser that is now pretty much the only option, that really helped their other business interests, without getting the standards imprimatur, that might end up being anti-trust fodder.


Why do you think Firefox isn't an option? Firefox is great.


For the purposes of anti-trust law, somewhere around 3% market share isn't really considered an option.


Firefox combined with Safari is a formidable opponent to Google's monopoly. The First Party Sets standard couldn't be forced through because Firefox and Safari wouldn't accept it.


Until Google implements the behavior and applies it to their sites. Then Google services stop working smoothly on Firefox and Safari because the shared session on those sites is only functional on Chrome. Imagine having to log into gmail, calendar, drive, etc. individually each time. In theory it's not that big of a deal, but in practice could end up driving the casual users to Chrome since the tools they use work better in that browser, and ultimately they don't care which icon they click to check their email.

And it'll be worse if other services decide to apply the new cookie strategy to their services, though they are definitely less inclined to do so than Google would be.


Even Google probably doesn’t want to fuck over 33% of users in Germany. And considering iPhone market share in the US, the same might go there.

And as a fun fact: Edge (12.62%) moved past Safari (11.24%) on Desktop in Germany.


Hopefully! But I also have to look at how well Google Hangouts works in Firefox, and how well Zoom meetings work in browsers at all, and consequently limit my faith that marketshare is a heavy determinant in client behavior.


Google doesn't care if Safari and Firefox don't accept something. See https://webapicontroversy.com

They will release it and then will engage their network of developer advocates and business representatives to try and make developers pressure Apple and Mozilla.

Some examples of this rhetoric: https://twitter.com/slightlylate/status/1191027005342404608 and https://twitter.com/slightlylate/status/1369773901610250240 and, don't forget, what's missing is your advocacy: https://twitter.com/slightlylate/status/1360364259088027655


These are add-on features, not core cookie functionality.

I'm glad Google pushed Web USB and Web Bluetooth. I use Web USB / Serial for a browser-based microcontroller debugger. I use Web Bluetooth through the Bluefy app to control some Bluetooth devices without App Store apps.

Firefox’s excuse that “security risks of exposing USB devices to the Web are too broad to risk exposing users to them or to explain properly to end users to obtain meaningful informed consent” is infantilizing its users.

I’m a Firefox user, but I have Chrome installed for Web USB. I’d rather a feature exist controversially than not at all.


> "excuse", "infantilizing users"

Where have you been for the past decade? Users provably don't understand the security implications of their choices.

The entire ad industry in its current form, the entire tracking industry exist solely because of that. Users routinely allow malicious apps full-system access just because those apps ask nicely.

I'd rather not have features than have them rammed through by a company whose only claim to profitability is running ad networks on a web they increasingly control.

And yes, the "core cookie functionality" that Google proposes only makes tracking easier.


They have a low market share now, but there's really not much of a barrier to switching. So if Chrome does still go through with implementing this, people can hop right on over to Firefox.


The barrier isn't just on the user side. The initial generation of government sites, i.e. web sites that the vast majority of the public are required to use to do official things, were designed with Internet Explorer in mind exclusively. Now many gov sites officially support Chrome as the default, and sometimes mobile Safari. Right now Firefox is at 2.7% [0] of us.gov visitors, almost half that of Microsoft Edge.

The average user, especially those in business/professional settings, is going to keep Chrome as the default for a long time, even if FF becomes equivalent or even superior.

[0] https://analytics.usa.gov/


So is Opera


Unfortunately Opera is also Chromium-based now, as are a growing number of browsers (Edge was the nail in the coffin).


It's also owned by a Chinese company so no guarantee about backdoors for the CCP since it's not fully open source.


Chrome is owned by a US company, so no guarantee about backdoors for the NSA since it's not fully open source.


Correction: it's a single browser vendor that calls the shots now. And it's Google. They've literally stopped caring about any objections to any of their proposals and just ship them.


Just look at Project Fugu. Google is putting the implementation first; the spec to "standards wash" their implementation comes later. And if the WHATWG doesn't want to play along (and why wouldn't they? most browser vendors are now downstream from Chromium, so they get the implementation by default), Google can just leave it in as "experimental" with a "draft" spec. W3C doesn't really get a say in this.


This isn't any different from most internet standards development. "Rough consensus and running code." Most internet-drafts and RFCs start out life as prototype implementations: instead of writing specs first, experimental prototypes are developed, and the spec is extracted from the winners.

People are acting like internet and web specs start life as a standards doc, get iterated on until finalized, and only then do vendors start implementing. That's very far from the truth, and I'd actually say harmful, as design by committee without real-world experience often turns out terrible.


The difference is that Chrome moves ahead anyway. This is why I'm calling it "standards-washing": if you take away the RFC, it's no different from the browser-wars era or Apple's proprietary CSS extensions for Safari back in the day.

How the standards process is supposed to work is something like this:

1. Someone creates a rough proposal. Discussion happens.

2. Someone creates a proof of concept toy implementation. Discussion happens.

3. Consensus is reached, a spec is written.

4. Other vendors implement the spec. Spec stabilizes with implementers' feedback.

How it now happens is like this:

1. Google writes feature proposal.

2. Google implements the feature behind a server-side flag.

3. Google creates training materials for developers to use the "upcoming" feature.

4. Optional: Google writes an actual spec.

5. Google either scraps the feature or makes it available without the flag.

Of course they "gather feedback" and "ask for input" but concerns from Mozilla routinely get ignored and implementation progresses regardless. It's entirely up to Google and they'll ship it if they like it. The "standard" just becomes a fig leaf.

This isn't entirely new, but the "standards-washing" gives it the appearance of being consensus-driven when in reality it's just more proprietary vendor extensions with marginally better documentation.

Google has an explicit agenda of what the future of the web should look like and they're taking Chrome down that route regardless of whether other vendors agree or not. There's nothing necessarily wrong with this, but consensus-driven or "open standards" this is not.

Contrast this with the WHATWG's promise in the early HTML5 days: user concerns trump author concerns trump implementer concerns trump academic concerns. Google has decided that it is the sole authority on what users want and uses that to justify ignoring everyone else's concerns or objections.


This generally happens when committees are too far away from the actual development. It happens all the time at companies with non-coding architects too. Standards-setting bodies need to understand the pace of modern development; they spend way too long in the discussion phase. Once there is running code, a lot of the discussion is basically over, and it's a choice between writing the spec to match what happens or browsers documenting the "quirks" relative to the standard.

That's not to say Chrome isn't abusing their market share. Even if there were more players, though, it would be a matter of Google getting MS/Apple/Mozilla to agree, and nothing else would really change.


Right now Google does not feel the need to get any other browser vendors or really any other party besides Google to agree. They ship things that they don't even agree on internally! Their process has review points but actually acting on that feedback is totally up to the preferences of the person driving a given feature.


Do you know of a writeup or blog post from someone involved that cites actual situations where Google railroaded a new standard through?


Unfortunately, no. The closest to a writeup is this: https://webapicontroversy.com/


Chrome has running code, but generally skips the "rough consensus" step.


> That's very far from the truth, and I'd actually say harmful, as design by committee without real world experience often turns out terrible.

Google, however, actively ignores a lot of input, including full-on objections from other browser vendors (who do have real-world experience).

Design by IE, sorry, Chrome, alone is just as bad.


Even the washing is so bad now that it doesn't even matter:

- The spec we're discussing was "proposed" sometime in 2019. Here's a comment from WebKit on March 27 2020:

--- quote ---

I notice that this proposal still exists only in a random personal repo. Could it please be contributed to an appropriate standards or incubation group?

--- end quote ---

At some point they did move it to the appropriate group.

- WebHID that is now shipped in Chrome. They asked for Mozilla's position, and Mozilla couldn't even understand the proposal: https://github.com/mozilla/standards-positions/issues/459

And this keeps happening over and over and over and over again.

Their reaction when they are called out? When Mozilla and Safari flat-out refused to implement Constructible StyleSheets as they were spec'ed, Chrome still released them (because their own devs from lit-html relied on them), and said https://twitter.com/slightlylate/status/1220451799032877057

--- quote ---

We often lead, balancing risk/reward rather than demanding a particular point in an arbitrary process.

Leadership is rather the point of having an engine team, after all.

--- end quote ---

That is what they call "leadership".


Isn't the W3C where Apple primarily involves itself? Safari is important.


It would also help a company that grows rapidly by acquisition. It's a real beast to deal with cross-domain synchronized authentication.


(googler here, but this is my opinion)

I think there's a big abstraction gap between what we use domains for and what they were supposed to be used for, in a way that we shouldn't assume any ownership based only on the domain itself.

For instance, you can have a number of sites that use separate domains but are owned by the same entity (N domains for 1 party). You could also have the same base domain being used by several unrelated parties; think of hosting a store on Shopify (1 domain for N parties). This is so ambiguous that even inside the browser you have two different implementations of the way this attribution is handled, one for cookies and one for the same-origin policy.

There's a good write-up about this problem at https://github.com/sleevi/psl-problems. Sometimes I wonder how the web got here, with the amount of kludge that we have to carry.


That's a valid point, but as an internet user the domain is the only attribute visible to me that can determine a site's identity. So I am pretty happy that the W3C insisted on setting the privacy rule based on plainly visible attributes, not on hidden, predetermined, or site-informed data.

You call the current policy ambiguous, but the proposal wouldn't even ensure that all browsers block or allow the same sets of domains.

Anyway, I am aware that people use multiple domains in the most diverse ways. But it's important to keep the technical behavior simple and predictable, and if Google doesn't like the consequences of the way they decided to organize their domains, well, that's their problem to fix, not a reason to push complicated, unreliable standards on everybody.


>in a way that we shouldn't assume any ownership only based on the domain itself.

I probably have a naive understanding of this, but why not? In your Shopify example, users should certainly be taught that Shopify does have ownership of mystore.shopify.com (in at least the 'security' and 'privacy' areas of concern from the write-up). Likewise, entities proxying resources controlled by others through their own domains (e.g. CNAME cloaking) should take responsibility for what they are serving and how they are exposing their clients.


mystore.shopify.com is definitely hosted by Shopify, but its content is a totally isolated entity. You can trust laptops.shopify.com, but this trust should not automatically transfer to fakestore.shopify.com. In the same way, if you have a valid account on laptops.shopify.com, the browser shouldn't allow fakestore.shopify.com to emit a request and buy something on laptops.shopify.com with your valid session on your behalf, even though they're on the same base domain.
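
A sketch of that risk (the checkout endpoint is hypothetical, and this assumes shopify.com itself is not partitioned on the Public Suffix List): SameSite rules only compare registrable domains, so sibling subdomains count as "same-site" to each other and even SameSite=Strict session cookies ride along:

    // Running on fakestore.shopify.com; a POST to a sibling subdomain is a
    // "same-site" request (same registrable domain), so the victim's
    // laptops.shopify.com session cookies are attached automatically.
    fetch("https://laptops.shopify.com/cart/checkout", {
      method: "POST",
      credentials: "include",
      body: new URLSearchParams({ item: "9999", qty: "1" }),
    });
    // The response is opaque without CORS, but the state-changing side effect
    // goes through unless the server does its own CSRF checks.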

You also have the parallel problem of how you transfer the trust you have in google.co.uk to youtube.co.jp based only on the domain info you have.

This is all to say that using only domain names to resolve ownership is a hard problem. For ages browsers have used a crowdsourced list [1] to get around this issue, but recently it has proved not to scale very well, especially after Apple's move to use this list as part of their "Limit Ad Tracking" solution.

[1] https://publicsuffix.org/


If the proposal was limited to opting into strict origin scope (instead of eTLD+1/registerable domain scope) for cookies and other privacy-related things, it would be an improvement.

But it also allows things like specifying that laptops.shopify.com, laptops.com, laptops.social.com, desktops.com and calculators.com are the same party and therefore tracking may happen across them. There's no obvious good way to put the user on notice of this, and the Explainer totally punts on addressing this problem, instead leaving it to each browser to figure out.
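
To make that concrete, a hypothetical manifest along the lines of the draft explainer (the exact format may differ; domains reused from the example above):

    At https://laptops.com/.well-known/first-party-set (the owner):

      {
        "owner": "https://laptops.com",
        "members": ["https://laptops.shopify.com", "https://laptops.social.com",
                    "https://desktops.com", "https://calculators.com"]
      }

    At each member, e.g. https://desktops.com/.well-known/first-party-set:

      { "owner": "https://laptops.com" }

None of this exchange is surfaced to the person browsing, which is exactly the notice problem: the browser learns about the affiliation, the user doesn't.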


>Sometimes I wonder how the web got here with the amount of kludge that we have to carry.

Really? Um, MS led the way with IE6 and all of the baggage it brought. Google took up the mantle with Chrome. Ultimately, it is the browser vendors and their "interpretations" of how to handle things. Rather than working with the governing bodies (yes, they are slow), they forged ahead with their own implementations and forced everyone to follow. No browser is perfect, but the travesty that IE left us with should have been the glaring example of what not to do, and yet Google went and did it again.

The internet is nothing without the browsers. If the browsers didn't do bad things, we'd be in a lot better situation.


IE6 also brought us AJAX, so there is something to be said for browser vendors implementing experimental features on their own.

The difference between nice and awful, of course, is when developers and vendors treat such features as a given without providing graceful degradation (or progressive enhancement, if you prefer) for users of other browsers.


Sure, AJAX is cool. However, devs would not be able to release into the wild if the experimental features were not available in released browsers. That whole -strict, -transitional type stuff is just another example. My extreme, specific personal recommendation would be to make experimental features available only to devs who compile their own browser from source. This would prevent the vast majority of users from ever seeing those experimental features in the wild. Devs could still develop against them and find the bugs, but they would also be less willing to make the experimental features requirements of their product.


That isn't remotely the case. Plenty of experimental / one-vendor features have been released and later received wider adoption without fundamentally breaking the user's experience.

E.g., before we had IndexedDB, there was WebSQL. The whole PWA slew of features (e.g. service workers) can be and is used without breaking the user experience.

There can be bad implementations too (YouTube's awful performance due to their early version of the web component spec comes to mind), but these features aren't really akin to the olden days of ActiveX / IE dominance, since they are single-purpose and don't open up an entire foreign interface.


The standards and governing bodies have no right to demand anything. The internet is not a democracy. If they want something, they should put their money where their mouth is. Until the W3C ships its own browser engine and gets it into the hands of customers, relevance is nothing more than a nice-to-have.


There are no members of the W3C other than browser (and web-server) manufacturers. When the members of the W3C decide on something, that’s the browser manufacturers deciding on that thing.

The real question, I suppose, is whether the people who interact with the W3C/WHATWG/etc. have decision-making power within the organizations they nominally represent. If not, then these bodies are kind of like the UN: nominally a place for treaties to be made, but in practice a place for ambassadors to glad-hand one-another while their leaders ignore the decisions they make.


You seriously want google to be defining what the internet is given their current and past activities and abuses? There is no way that could ever be a good thing. If they ever do expect a 50% carve out for targeted ad technologies.


Years ago, webpages loaded faster by using domain sharding. A big website of mine used one domain as its primary, but images were on another domain, and there was yet another domain for resources like CSS and JS.
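
The markup side of that looked something like this (hypothetical hosts):

    <!-- Main document served from www.example.com; assets are sharded so the
         browser opens a separate per-host connection pool for each domain -->
    <link rel="stylesheet" href="https://static1.example-cdn.com/css/site.css">
    <script src="https://static2.example-cdn.com/js/app.js"></script>
    <img src="https://img1.example-cdn.com/products/1234.jpg" alt="">
    <img src="https://img2.example-cdn.com/products/5678.jpg" alt="">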


That's an infrastructure problem. Sure, you can solve it with domains, but you can also easily use other current technologies for it with no issues and never change the domain name at all.


NCSA was using regional DNS for load balancing in '95. I don't know who invented it but it's at least 25 years old.


How is this faster?

More DNS requests, more TCP connections. At first this is purely slower.


Browsers used to (and still do) cap the number of connections per domain. Adding more domains allows you to do more concurrent requests.


With HTTPS you have to handshake for the first connection on each domain.


Yes, and it still outweighed that, especially if you were loading a lot of images or a stupid number of different JS/CSS files.


Upstreams were very limited, and HTTP requests send cookie data in each GET.

For instance, if you had a 128kbps DSL upstream and each request was 2KB (loaded up with cookies), you're already limited to 8 requests/second. A cookie-less domain for small resources helped this a lot.


Pretty sure in HTTP/1.1 there was a max number of concurrent requests per server/proxy. So using different domains allowed more rq/s.


A separate domain name for static assets would not receive any of the actual site's cookies. This means less data transferred, and in theory a proxy closer to the user could more easily cache these requests, even for multiple clients.
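
On the wire, the difference looked roughly like this (hypothetical hosts and cookie sizes):

    # Asset fetched from the main site drags the whole cookie jar along:
    GET /img/logo.png HTTP/1.1
    Host: www.example.com
    Cookie: session=...; prefs=...; analytics=...   (easily 1-2 KB upstream)

    # Same asset from a never-cookied domain: a small request, and a shared
    # proxy can cache the response for every client behind it:
    GET /img/logo.png HTTP/1.1
    Host: static.example-cdn.com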


More connections = better parallelism = faster downloads

The DNS is cached.


Before HTTP/2 this was common.


It spreads out the load on disks and outbound bandwidth.


You are assuming that the domains pointed to different physical servers. To get the benefits everyone else is describing, it does not matter what the domains actually point to.


>you can have a number of sites that use separate domains but are owned by the same entity

Why is this even a thing? There's nothing wrong with storeA.shopify.com, storeB.shopify.com... The only valid use case is if you're somehow trying to hide the shared ownership/platform, in which case it's up to you to deal with the downsides.


AFAIK there are big security risks in hosting user-generated content on the same domain as your parent website. That was the main reason github.com migrated individual repo pages to github.io [1]. Also, Shopify doesn't serve shops on the domain of their main website; they use *.myshopify.com, I suppose for the same reason.

[1] https://github.blog/2013-04-05-new-github-pages-domain-githu...


Even when multiple domains are controlled by the same party, there is often no good way for users to be aware that seemingly unrelated sites with unrelated-looking brands are the same, and therefore (under this proposal) you can be tracked across them.


In my opinion that's something the website owner should handle and it's a tractable problem. It shouldn't be pushed to the browser other than as you mentioned through capabilities like SOP and cookies. The way it is is fine.


Nothing's stopping you from using the one domain as it was initially intended.


That's a good point, the domain should have to match exactly.


Hmmm, what stops Google from doing it anyway? Chrome is the de-facto standard nowadays with every other browser being under 10% market share except Safari holding out around 20%.


Depends on what sites you look at: for many consumer sites in the UK the Chrome/Safari split on mobile is close to equal, and 60%+ of visitors are on mobile.

Chrome (as an engine) not being available on iOS is good from an anti-monoculture perspective, but not so good from a browser-feature perspective.


Firefox just follows in the steps of Chrome now. Whatever Google decides, Firefox will just go along with it.


As far as consumer usage goes, Firefox is effectively dead.

It's rare that I see FF usage above 1% on any of my customers' sites, and I think it's only at something like 2% on gov.uk.

Of course there's a debate to be had over how much FF users blocking analytics affects the numbers, but they tie up with the log mining I've done too.


Germany: FF has 20.4% on desktop here, and on my work's website it's the most-used single browser across all platforms (though all Chromium browsers summed are higher). Caveat: our website sucks on mobile.


Chromium-based Edge has already surpassed Firefox usage.


Hence Firefox's full implementation of the privacy disaster known as the AudioContext API, which leaks sensitive information about your audio peripherals without your consent or notification, even on sites with no audio whatsoever.

It's abused almost exclusively by ad networks, including Google's DoubleClick on major sites like StackOverflow.

These new APIs are used almost entirely for fingerprinting, and this was implemented after the Chrome team claimed they would carefully consider the security ramifications of new APIs. I guess it doesn't matter if the business unit next door prints money as a result.


This is the first I've heard of this - why is it being downvoted? I'd like to know more.

Do you have any more info about this?


Sure do. StackOverflow's Google ad partner explicitly allows the abuse of AudioContext and other audio APIs for tracking purposes, ruining user security. [1]

These APIs leak sensitive information about your peripherals without your consent or notification [2], and are used rampantly on Google's ad network.

Try it yourself and see. Simply open up the browser console and type: (new AudioContext())
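
A slightly fuller sketch of what is readable that way, with no prompt or permission involved (values vary by hardware and OS, which is the point):

    const ctx = new AudioContext();
    console.log(ctx.sampleRate);                  // e.g. 44100 or 48000
    console.log(ctx.baseLatency);                 // varies by audio stack
    console.log(ctx.destination.maxChannelCount); // e.g. 2 for stereo output
    // Fingerprinting scripts combine values like these with hashes of audio
    // rendered via an OfflineAudioContext to distinguish devices.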

Google Chrome developers have claimed that they consider privacy and security while implementing APIs, while they actively tear down privacy and destroy security (which benefits Google's ad unit). The separation between Google Chrome's security team and DoubleClick is, in my opinion, non-existent. As another example, DoubleClick has a hard-coded backdoor in Chrome that sends a unique browser install ID as telemetry, via headers, to DoubleClick domains in all requests. [3]

[1] https://meta.stackexchange.com/questions/332229/stack-overfl...

[2] https://developer.mozilla.org/en-US/docs/Web/API/AudioContex...

[3] https://chromium.googlesource.com/chromium/src/+/e51dcb0c148...


Thank you, I've got some reading to do.


> "Hmmm, what stops Google from doing it anyway?"

The fear of getting a smackdown for being anti-competitive and too dominant, hopefully, much like the one Microsoft got during IE's dominance a couple of decades ago.

Microsoft adopting Chrome's code for its Edge browser is a brilliant bit of jujutsu; not only do they get to join the chorus of voices saying "Google's so powerful and anti-competitive that we had no choice but to adopt their browser engine", they also cut their engineering costs dramatically for their built-in browser by having a competitor pay for it.


They are doing it anyway.

> Google has already implemented both First Party Sets and SameParty cookies in Chrome 89, the current version, where they are included as an "origin trial" to "allow developers to try out new features and give feedback."


Exactly: I can see Chrome implementing a slightly different approach where all sites function as they do now, except anything coming Google which is allowed to do what it wants.


(from Google, of course.)


The article says that it's already implemented in Chrome but is disabled for the time being.


The title is a bit misleading. The TAG simply noted that this hasn't been thought all the way through and doesn't yet cover all reasonable use cases. They didn't strike down the idea per se. This is pretty much par for the course in the standards process.


The language quoted in the article sounds a bit stronger than that.

"we consider the First Party Sets proposal harmful to the web in its current form... this proposal undermines the concept of origin, and we see origin as a load-bearing structural pillar of web architecture."


Of course they want to. This way, if I accept a cookie on one site, they can track me on all their sites without having to ask. It will make the ubiquitous Google-account unique ID even easier to set.

But I'm sure they'll say it's to make shared login policies easier.


They already have that through fingerprinting anyway. This would just be a little easier.


On one hand, the W3C doesn't matter anymore; Google is the internet. On the other hand, there's Apple with Safari, and I have high hopes that even if Chrome implements this, Safari won't, so Google will have to play by Apple's rules at the end of the day. A weird power play we get to observe.


That's not how it works. The W3C has generally been the codifier of standards with member agreement; the W3C generally has not made a standard ahead of an implementation.

Granted, XHTML was probably an example, an exception that proves the rule.


No. The browser vendors left the W3C. When the final browser vendor (Microsoft) left and joined the other vendors in WHATWG, the vendors collectively sent a letter to W3C asking W3C to please stop copying and introducing errors into the standards WHATWG wrote. See e.g. https://www.zdnet.com/article/browser-vendors-win-war-with-w...


Browser vendor here: We're definitely still in the W3C and collaborating in a number of areas there. The WHATWG situation is complicated, but we have not "left the W3C".


I don't know how much pull Safari has. Seems like a lot of web devs don't test on Safari.

I know this isn't damning evidence or anything, but I noticed recently that the header image on the front page of Thunderbird's homepage [1] shows the app (in a Mac window, funnily enough) and doesn't size properly in Safari [2]. It squishes horizontally, I assume due to some weird CSS implementation difference.

[1] https://www.thunderbird.net/en-CA/

[2] https://i.postimg.cc/rwWmDynk/image.png


Despite the extended verbal slap-down, it's likely Google will ignore most of this feedback and ship anyway. That way, youtube.com, google.com, and doubleclick.net will be able to freely share third-party cookies amongst themselves in Chrome even after Chrome supposedly removes third-party cookies.


Doesn't seem that different from how the web works currently, just less hacky than existing solutions. Am I missing something?

It's not like the status quo is that two separate domains run by the same entity can't talk to each other.


Two separate domains run by the same entity cannot share cookies, which is what this proposal is mainly about.


Depending on how we define 'owned' and 'entity', Cloudflare could represent many, many domains. Very anti-consumer: a situation where the biggest players win by buying up domains.


Assuming a domain name is a hard boundary for information is a disaster, imho. All of our fights over "1st party" and "3rd party" based on arbitrary names for a service are just specious, and the continued emphasis on "origin" as a "load-bearing" pillar is only due to legacy, not utility. The First Party Sets proposal reflects a real need of enterprises and services, though the edge cases of the current solution, and the complaints about the proposal, really reflect the competing demands of those who want free flow of information vs. those fearing an expansion of the privacy violations bad actors have created. Because these extremes, these edge cases, exist, the whole proposal should get shut down?

We've evolved into this silliness. Even as we see domains fading from view (address bars are really just search bars; who actually types a full domain anymore?), we are now forced to have multiple domains to reflect the "relevance" of our content, thanks to SEO, naming conventions, etc. A company may have 6-10 domains, some reflecting regional customs, some reflecting a specific content focus, and some for specific segments of their customer base. While no company or publisher really _wants_ to manage all these domains (and certificates and payments), it's often forced by external requirements. First Party Sets nicely solves some of these problems, by allowing entities to treat these various names as one surface for their own use, while still restricting external entities from access on the client, allowing SEO to function as expected, etc.

Yes, some companies have managed to keep almost everything under one domain (apple.com), and some appear to not worry about how many domains they manage, as they all seem to get search relevance and share information (google.com)... but in the real world, we are now in a place where, at a certain scale, multiple domains are often helpful, and in some cases, necessary.

The First Party Sets proposal gave companies the ability to treat their interactions consistently no matter the domain name, which is exactly what any user would expect, and it also protected privacy, by revealing that multiple entities are under one roof (in case you didn't know that Verizon owns AOL sites like TechCrunch). Users shouldn't have to care what domain name they are on, it's the entity that matters, both for good (happy to not have to login AGAIN) and bad (What? These jerks control this site? I'm outta here!).

The comments in the proposal highlight some edge cases where bad actors could pierce privacy protections, but a) they are solvable with revisions, and b) they highlight the bluntness of many of our current attempts at privacy protection via "domains".

I expect some form of "cross-domain" entity identification will continue to be proposed, and I expect the browsers will add it, even if the W3C doesn't accept it as a standard. The corner we've created has some benefits, and controls on rampant data-spewing are welcome. But we don't need to keep building on every legacy aspect of the web: Flash is gone. We don't use the <blink> tag. We don't use RealAudio for streaming anymore. And we don't need to assume that a domain should be a boundary for the entity owning it.

(minor edit: I really need to do better with where I put commas)


> address bars are really just search bars, who actually types a full domain anymore?

I recently found myself pasting a (local) IPv4 address into the address bar and being taken to a search engine. Strange times.


I have the reverse problem. My platform has multiple forums represented as domain/forumname, each owned by a different user. Because most integrations go off the domain, it is impossible for individual forum owners to plug in third-party services (analytics, ad networks, etc.) for themselves.


I'm not sure why this would be such a big deal for Google. I know they might be communicating between .co.uk and .com etc., but apart from that, why would Google benefit so much from this?


It's not just google.com and google.co.uk, but also youtube.com, gmail.com, doubleclick.com, etc. Starting to see the bigger picture?


Ah okay, I see. If anything I see the potential for more domain certification fraud, expired domains still being attached somehow to an entity... but it makes sense now why they would push for it.


It's Google's proposal, but it actually affects any entity with multiple domains. Think of publishers like Condé Nast who have multiple magazines, or a company with regional domains (company.co.uk, company.com), or a content site set up for a specific company-driven event (thecoolshow.com). All of these would let the entity manage user experiences better, instead of having to resort to hacks, multiple logins, or other user hassles.


doubleclick.com is Google's ad platform.


I thought you have the new Storage Access API for sharing (third-party) cookies between domains? Why can't they use that?
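
For reference, that flow looks roughly like this from inside a cross-site iframe. It has to be triggered by a user gesture, and the browser may still refuse (the #login button is a stand-in):

    document.querySelector("#login").addEventListener("click", async () => {
      if (!(await document.hasStorageAccess())) {
        try {
          await document.requestStorageAccess();
          // Cookies for this iframe's own origin are attached again.
        } catch (err) {
          // The user or the browser's heuristics denied the request.
        }
      }
    });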


This is a massively misleading headline.

First, the issue is some recent changes by browser vendors that break the semantics of the web in the name of privacy.

What Safari did was ban cookie loading in framed origins if they weren't sub-origins of the parent. Basically they are forcing a frame-ancestors policy that is impossible to opt out of. So if you are running a mashup, say on foomarket.com, and you want different vendors vendorA.com and vendorB.com to each have an iframe, this will no longer work in Safari, as the vendorA.com iframe won't send cookies back to vendorA if the site is framed.

The justification for this was the invented term of calling vendorA's cookies "third party", even though a framed site has its own window, its own document, and is a first-party origin in the browser. Safari then decided that if the framing origin is not a parent of the framed origin, the framed origin will be labelled "third party" and will not be able to send cookies back to its own origin. In other words, they crippled iframes, breaking websites that rely on iframes and the same-origin policy for content isolation, especially content isolation of authenticated domains. This is used in mashups for dashboards, vendors, multi-tenant pages, etc.

What Google is trying to do is have a list of origins, published by the framing page, for which web semantics would be preserved and iframes could do all the things we expect iframes to be able to do: run scripts, load cookies, etc., respecting the same-origin policy of the frame rather than having the origin of the parent applied to it. This was iframe behavior from the day iframes were created up until 2019, when Safari redefined iframes to not be first-party origins in the browser. Google's proposal is a completely reasonable solution that restores the viability of iframes as a content-isolation security mechanism: presently the only content-isolation security mechanism available in the browser that allows two origins to coexist in the same browser tab without interfering with each other.

The alternative is to force vendorA and vendorB to become subdomains of foomarket, e.g. vendorA.foomarket.com. In the case of subdomains, Apple does allow the cookies to be sent. But then you get cookie-overwriting attacks, in which vendorA can attack vendorB by setting cookies on the parent domain that are read by all the children.
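
That attack is essentially a one-liner (a sketch, reusing the hypothetical domains above):

    // Script on vendorA.foomarket.com sets a cookie scoped to the shared parent:
    document.cookie = "session=attacker-chosen; Domain=foomarket.com; Path=/";
    // vendorB.foomarket.com (and every other sibling) now receives this cookie
    // alongside its own, enabling session fixation ("cookie tossing") across tenants.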

Thus sub-domains do not have the clean isolation of full-fledged separate origins, which is what iframes were designed to be, and which privacy advocates at Safari decided needed breaking because they saw that a lot of public sites used iframes for advertising. But that is not a good reason to destroy the semantics of iframes: there are many sites on the internet today that have nothing to do with advertising and rely on these features for secure isolation.

Indeed, the obsession of the Safari team with fighting advertising at the expense of breaking existing corporate intranets, PaaS/SaaS offerings, and mashups involving multiple tenants hosted on the same parent origin is pretty stunning. You'd think their only experience with the web was browsing public ad-based sites, and that this was the only use case they designed their browser to serve.

You want a dashboard with different data providers serving their own data in their own isolated origins? And you want those data providers to send data only when you are authenticated to their origin, not anonymously? Too bad: Safari thinks you are the Buzzfeed front page with one-pixel trackers, and that is the only use case they are designing their browser to support. You want a site studio where you load a website you are building in a frame and have it run properly, without being able to interfere with your studio? Too bad. You want a municipal site where city vendors or authorities frame in various monitoring/reporting sites in a central command center, all framed in but with strong site isolation? Too bad. Want a content origin for HTML content to render without it being able to affect your own origin? Too bad.

There are many business sites, secured sites, and multi-vendor sites that have nothing to do with advertising and rely on iframes for security isolation of authenticated data, and these sites no longer work in Safari. Google is trying to make sure that these sites still work. And the Register, another advertising-based site, is enraged by this.

HTML is no longer an advertising-first technology, and breaking existing frame-origin semantics in order to wage a war on advertising is not going to fly, regardless of the army of privacy warriors who have no interest in supporting more advanced use cases beyond front-page blogs. What will happen is that we'll go back to the old days of all non-advertising sites requiring Chrome to work, much like corporate/government sites were built for IE in the '90s. And the sad thing is that this is because the other browser vendors are intentionally breaking existing web semantics, not because Chrome is embracing/extending with new semantics.


"One Cookie to rule them all, One Cookie to find them, One Cookie to bring them all and in the domain bind them. In the Land of Google where the Browsers lie [to you]."

From "Don't be evil." to "Are we the baddies ?" in less than a decade - most impressive !


How did the W3C get such power to "slap down" Google?

If users can’t access gmail / YouTube etc, they will be ignoring the w3c.

Another workaround, if Google can't try this in Chrome (it can): put cookies on a Google subdomain (yt.google.com etc.).


> If users can’t access gmail / YouTube etc, they will be ignoring the w3c.

Are users unable to access gmail / YouTube etc now?


Users will use / do whatever is required to access YT. Google could proxy through a google.com domain so same-site cookies could be set; they could implement this in Chrome, and users would follow along.


Once again: are users unable to access those sites now?

Why would they not be able to access them if Google doesn't implement FPS?


Free products are generally supported by advertising. We are being told the W3C has "slapped down" Google's attempts around FPS.

I am making the point that if Google comes up with a workaround to this "slap down", users will follow, even if you and others are yelling "it's not the W3C". That could be modifying Chrome to allow this; it could be hosting things under one domain, etc.


> Free products are generally supported by advertising. We are being told the W3C has “slapped down” googles attempts around FPS.

You were talking about "If users can’t access gmail / YouTube etc, they will be ignoring the w3c". Now it's suddenly "free sites" and "advertising".

And don't worry, advertising isn't going anywhere.

> I am making the point that if google comes up with a workaround to this “slap down” - users will follow

They will not come up with a "workaround", they will just implement it ignoring any objections.

However, that's not the point. What was it about users not being able to access youtube etc.?


Gmail and YouTube are offered free to users.

They are supported by advertising.

This is not 'suddenly 'free sites' and 'advertising'" - this is how both of these sites have been from the beginning, adding paid options later.

If the W3C somehow forced users to jump through hoops to allow Google to advertise to them so they could access their YouTube and their Gmail, they would jump through those hoops rather than lose access.

That said, I'm not sure I even believe that the W3C has actually been able to "slap down" Google, but we will see. Google, not the W3C, makes the BY FAR most popular browser out there, and Microsoft has recently begun migrating its own users to, not away from, Chrome.

Because we are having some definitional issues around the basics of how Gmail and YouTube function in terms of revenue models and user engagement, I'm going to let this rest here on my side.


> If the w3c somehow forced users to jump through hoops to allow google to advertise to them

Why would users need to jump through any hoops to make advertising work? Users will do literally nothing.

You are under the false impression that not implementing FPS will somehow prevent advertising from working. No, it won't.

> Because we are having some definitional issues around the basics of how gmail and youtube function in terms of revenue models and user engagement

No idea what you mean by this statement.

The only reason Google wants FPS is to have an easier way to continue tracking users across its properties. It already does that now without FPS, and, surprising no one except you, it doesn't hurt its advertising business in the least. They brought in 147 billion dollars in revenue from ads in 2020. That's 80% of Google's total revenue.

Google and Google's free services will be just fine without FPS.


Users can't ignore W3C. Browser makers can.

This "feature" would open up for tracking using today's modern analysis systems with year 2000 privacy protections.


I'm not sure I understand. In the year 2000, we let any domain get access to its cookies. In this proposal, an entity can let domains it controls have access to a shared cookie; all others are still blocked. But fair point: if I am allowed to say that I have a "relationship" with eviltracker.com, then they could get the shared cookie... But it's easy to add requirements like ownership vs. partnership, or purpose-based access, or other enhancements.

So, not really as bad as you suggest, but true, some problem cases can arise.


> But it's easy to add requirements like ownership vs partnership

Ah, yes. "Easy".

How exactly do you envision that?

- Is ownership/partnership info easily and immediately accessible?

- Who is going to verify ownership/partnership info?

- How is that applicable to First Party Sets?

- LVMH owns at least 75 different brands. Even though it owns them, for all intents and purposes these are different legal entities and (especially under GDPR, CCPA and any other privacy law in existence) they cannot be treated as a single entity.

- amazon.com and amazon.de are two separate legal entities with wildly different privacy requirements. The same goes for subdivisions in each of the 75 brands I mentioned for LVMH.

and so on.

Yes. Easy.


And Google hosting everything under a google.com domain wouldn't?


But they don't and it wouldn't be an easy step.


Huh? Amazon put Prime under amazon.com. The domain doesn't mean you have to use the same servers; Google already proxies and load-balances behind their existing domains. They could easily redirect youtube.com to yt.google.com, or serve assets on youtube.com from yt.google.com, if they want cross-domain tracking.



