The W3C has endangered long-term resilience on the Web by supporting misguided efforts like EME, which aim to add a requirement for de-facto proprietary "plugins" to the open Web platform. Why should they deserve our support?
The W3C has no control over whether browsers implement support for DRM; it merely wrote a document saying that if you do, the API should look like this. It was going to happen regardless.
The actual argument is that the W3C has absolutely zero teeth because they couldn’t stop EME even if they wanted to and therefore are irrelevant compared to WHATWG.
There's more to it. The EME API is useless without the undocumented proprietary plug-in side (called CDM in the spec). It has no technical purpose. It's only to "standards-wash" an entirely closed DRM by Google (and everyone else's proprietary DRMs).
The spec contains diagrams and descriptions that have been acknowledged by its authors to be factually incorrect. EME pretends to be an in-browser thing, rather than hardware+kernel "hard" DRM. The spec proponents stated that they'll never use the scheme in the spec, and the "hard" DRM is the key feature they're after.
There have been a lot of process shenanigans: for example, during what was likely the biggest disagreement in the history of the W3C, the chair of the HTML WG announced that the group had reached consensus on EME and that it could proceed. The EME work was then moved out of the public HTML WG into a closed-doors group.
So it wasn't merely Google+Netflix saying "we'll do it anyway". It was a subversion and corruption of the W3C itself.
If I need to build an HTML parser in a world with proprietary CDM, I sure as hell prefer that CDM to declare itself in a standardized way than to have my parser need to handle non-standardized content declarations. Having a standard benefits even user agents that don't plan to support the feature.
It doesn't work the way you imagine. You can't use the EME spec for anything. It's equivalent to calling Flash a Web Standard because a spec says you invoke it with <object type="flash"> and gives no further details (which is even less than the NPAPI that actual Flash used).
The spec, both for browser developers and site authors, is completely impossible to use without a secret unspecified component. EME CDM isn't even a real component, but a spec placeholder for arbitrary vendor-specific code that has no standard API, and intentionally never will.
That secret component is, for all practical purposes, absolutely necessary, and it implements 99% of the functionality. The only key exchange scheme described in EME is a deliberate misdirection, and it's not used by anyone.
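To make that concrete, here's a sketch of what the spec actually standardizes: ferrying opaque blobs between the page and a black box. The names loosely mirror the EME API's shape, but `fakeCdm`, its methods, and `emeFlow` are all invented for illustration; a real CDM is proprietary and entirely unspecified.

```javascript
// Stand-in CDM: from the spec's point of view, a black box.
// (Made up for illustration — nothing like this is standardized.)
const fakeCdm = {
  createLicenseRequest(initData) {
    // A real CDM emits an opaque, vendor-specific blob here.
    return Uint8Array.from([0xde, 0xad, ...initData]);
  },
  installLicense(licenseBlob) {
    // A real CDM handles keys and decryption internally;
    // the page never sees any of it.
    return licenseBlob.length > 0;
  },
};

// What EME actually standardizes: shuttling blobs back and forth.
// The license-server transport is application-defined too — the spec
// doesn't standardize the license protocol either.
function emeFlow(initData, sendToLicenseServer) {
  const request = fakeCdm.createLicenseRequest(initData); // opaque out
  const license = sendToLicenseServer(request);           // app transport
  return fakeCdm.installLicense(license);                 // opaque in
}

// Trivial "license server" that echoes the request back as a license.
const keysUsable = emeFlow(Uint8Array.from([1, 2, 3]), (req) => req);
console.log(keysUsable); // true — yet no actual DRM happened anywhere
```

Everything interesting happens inside the two stubbed methods, which is exactly the part the spec leaves undefined.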
I can't emphasize enough how sleazy EME is. Google and Netflix have devised and documented a key exchange scheme nobody asked for, nobody uses, and that even they have explicitly said they will never use. The only purpose of this spec is to merely exist, so that DRM vendors like Google can exploit the confusion to say their closed proprietary DRM, which is not in the spec and doesn't even work the way the spec describes DRMs working, is somehow a standard.
(I was an Invited Expert in W3C HTML Working Group when this spec was being written)
Let's begin with a polite thank-you for your service, a hot drink, maybe some kind of certificate of acknowledgment that says "you were present," and then call it a day.
But beyond that, in my opinion, the W3C has been a disaster since day one. It seems like some people with good intentions decided one day to play RFC roulette, hoping that if they slipped enough nonsense into their documents nobody would notice, and we would all just play along and build the misshapen web they were imagining.
The point is that you have to be incredibly careful about allowing new complexity. We have now reached the situation where implementing a new browser is possible only for very large companies. This would not have been necessary if the web had been built with a more layered approach.
That's too reductionist. There's value in having a media-player-like app of limited complexity and purpose, with a wealth of server-side platforms to deliver content.
Maybe if they had met us halfway and actually shown us, somewhere, somehow, the web they were imagining, it might have been easier to get on board. But whenever I tried to look at any of their standards, it was just endless detail with no explanation of why anybody would want any of it.
Large players come and go. Chrome is at the head now because MS rested on their laurels and IE's performance rotted enough for a new player to take them on in market share.
I don't particularly see a new large player coming into the browser space, though. Creating a browser from scratch is a huge undertaking, and the fact that everyone except Google, Mozilla, and Apple has given up is testament to that.
Edge (Legacy) was a decent browser, but even catching up on the Web APIs it was still missing proved too expensive (on top of fixing bugs). Safari may be in the same pickle these days, with an apparently sizeable backlog of missing features and incompatible implementations. Granted, that may also be due to many developers treating Chrome as the reference implementation.
If a new player comes along, I'd guess it will be based on Blink, but they would probably need more resources than Google throws at Chrome. And a way to finance it ... there are probably too few people willing to pay for a browser these days.
What is the relationship between W3C and WHATWG these days? I haven't been following that story for years now -- at the time it seemed that W3C was losing relevance, and that WHATWG was becoming the de facto standards keeper for web tech.
There's a very interesting long-form (somewhat chronologically organized) answer to your question, from the horse's mouth: https://wiki.whatwg.org/wiki/W3C
>What is the relationship between W3C and WHATWG these days?
IIRC a few years ago W3C essentially handed over web standards to WHATWG, with the thinking being that it wasn't helpful to have 2 competing standards.
W3C had released redacted snapshots (HTML 5, 5.1, and 5.2) of WHATWG's so-called "living standard", then published HTML 5.3 as a reference to a specific WHATWG commit, and for a short time promised further (supposedly at least QA'd) snapshots, then last year finally gave up and just linked to WHATWG's head [1]. SVG2 didn't go anywhere either, due to lack of interest from browser vendors. MathML hasn't been updated in many years AFAIK. So that leaves ARIA and CSS.
Whatever the W3C does, and with due respect to TBL: as a self-proclaimed standardization body they've failed spectacularly to keep browser vendors at bay and to keep the web from being monopolized, and I think they should disband, if only to demonstrate to the world that the web isn't "standardized" in any meaningful sense of that word. Even HNers frequently harbor illusions about "web standards".
On the way out, they might attempt to deliver CSS specs actually useful for developing browsers. Or maybe they could venture into developing a browser themselves with their funds?
I know they do standards work on top of web protocols these days. Those digital vaccination records were based on W3C’s Verifiable Credentials standard. (The US ones anyway; I’m not sure about those used elsewhere in the world.)
Right, but that’s not the only thing they do that’s used on the web. They also own and run XML, security specs like CSP and SRI, the WCAG accessibility guidelines, Web Payments, WebAuthn, WebRTC, etc etc.
They also do non-web stuff that happens to use web standards, for example EPUB 3.
This is going to come off as glib, but the W3C has no relevance anymore; it exists to give a stamp of approval to the WHATWG's specs. It's like the grandfather you defer to but who hasn't been relevant in years.
They may still "control" CSS and XML (not sure of the exhaustive list), but I don't understand why those haven't been moved to WHATWG as well.
Just trying to read the article, I was rejected from 3 IP addresses by their junkyard dog "Wordfence". And I didn't like its overly upbeat tone. So yes, the statement seems full of parochial self-congratulation for those "in the club".

Please, the "ultimate tool of resilience" was mRNA and the extraordinary efforts of biologists. It was hospital staff who died working 18-hour days without PPE. This narrative that "technology saved us" in the pandemic is cute, but also puff and bluster. We're still dealing with the equal and opposite effect of web technologies in spreading anti-vax messages and conspiracy theories. The winners were Google and Amazon. Not "the web".

The paragraph "A web for everyone" pays lip service to acknowledging that "an estimated 37 percent of the world’s population lacks web access". Yet effectively shrugging responsibility for stewardship onto Google, a for-profit company bent on eliminating privacy and dignity online, isn't a good look for the W3C.

It hides another reality: the pandemic showed that our normative reliance on Web technology is itself a form of irresilience. Our world is precarious in the absence of other parallel and supporting technologies: offline media, radio, good public transport, functional (healthy, not overcrowded) physical community spaces, reliable mail service, stable electricity supply, and much more.

There's a reality in which some people with unequal access to digital technology suffered more than if other social measures had been taken. Those with older phones and computers, the wrong browser, or not enough money for bandwidth, and those in rural areas, were all marginalised, as were those who value privacy and don't wish to lay their lives at the feet of a few giant tech corporations. The "Web saved us from the pandemic" story makes it easier and more acceptable to exclude marginal groups within our own society.
Why? How present was covid in the US, UK, Australia, Canada, Ireland, and New Zealand ("the anglosphere") in March 2020? Those countries were still at the precautionary "let's see if we can stop it getting in" stage at that point. Large numbers of cases were still mostly in Italy and China. And China had only recognised the outbreak two months earlier, so I don't see any real complaint to be had in classifying 26 months as "two years", even if you want to focus on when it started rather than when it became a global pandemic.
There are ample doubts about the actual start of the spread. But even beyond that, once two countries on very different continents showed large clusters, it became pretty clear that it had spread everywhere; we just had no real visibility of it, and political figures (particularly in the anglo world) were clutching at straws to deny it. This was well before the last week of March.
The web's architecture made it trivial to manipulate information and cancel people. Nothing in the protocols deals with archiving or true redundancy. Nothing deals with DDoS or with bypassing censorship. The architecture itself encourages centralization. Moreover, the protocols are currently being manipulated to encourage even more central control.
We're way past the point where expressing sentiments like "this is for everyone" is aspirational. Right now it's merely out of touch and tone-deaf.
Are you complaining that there is no way to "bypass censorship" on Facebook? Because it is pretty trivial to host your own site. The web has absolutely delivered on that promise; just see how the efforts to shut down The Pirate Bay have failed.
Yeah what you are running into is the difference between the web, which is already dead and rotting, versus the internet, which is quite resilient and is in the process of eating the web.
Back in the early days, people used to say "the internet interprets censorship as damage and routes around it". That may have even been true back then. Definitely not true any more, at least not of the internet as we know it.
It's still true, for the internet at least; the problem is that the other layers we've since built on top of the internet (large, centralized web services) do not share that virtue.