That was also the first thing on my mind. And I almost lost respect
for Sir Tim for giving tacit assent.
I would like to hear more from people connected with W3C over the
years as to why it ran into the long grass. Some commentators have
even described it as "irrelevant today". I'd like to think
that's wrong and W3C has an important future keeping a public
and accessible World Wide Web alive.
Among the many questions in my head:
What is the "Web" today, 30 years on?
What can the W3C realistically do to steward it?
Have principles of universality and accessibility been abandoned?
Can there be a WORLD WIDE Web in the age of state firewalls and the
"splinternet"?
Does the W3C have overlays and censorship resistance on its roadmap?
Does the W3C recognise the need for Small Internet technologies such as
Gemini as a reliable, accessible and simple information delivery network?
Refusing to spec it would have just given us another two decades of Flash-equivalent hell. Developers want features, and you can either provide a road-map for making them cross-compatible and specified, or you can refrain from doing so. In the latter case developers will do whatever, some private company will provide a closed-source solution, every browser will support it because it provides a service end-users want, and instead of any piece of the story being open the whole story will be closed, and it'll be a quarter century before something standardized replaces it well enough to uproot the de-facto closed standard.
> Refusing to spec it would have just given us another two decades of Flash-equivalent hell.
That's pretty much what we got with EME, since the technology relies on proprietary plugins. The only difference is that EME got a pointless stamp of approval from the W3C as supposedly a part of the "open" Web, even though there's nothing open about DRM-encumbered media.
Widevine is currently in the Chrome, Brave, and Firefox browsers. What are the odds we'd have gotten that (instead of a Chrome DRM solution, Firefox DRM solution, and Brave DRM solution) without the EME standard?
The w3c spec can be expected to put as small a subset of the client stack as possible into the black box, whereas a proprietary API would try to pull in as much as it can, aiming to eventually usurp the entire client. Remember Silverlight?
Right. I think more likely than not Chrome would have implemented their own extension and it would have become the de-facto standard. Maybe Apple would have resisted and we would have two solutions.
If browser vendors want to create their own DRM extensions, then that's on them. But it seems to me that the W3C went too far in including DRM capability within the standards for the web.
The end result may be comparable, but I think the principle of it is important.
It's on them and on web developers, who then have to deal with a balkanized client space without even the courtesy of a standard to allow the page to detect whether a given decryptor is available.
Would the web have survived as a "popular" destination without EME?
I imagine that the w3c saw that large corporations had most of the attention of most netizens. Those large content producers/distributors want DRM. Did the w3c fear that if it hadn't accepted EME, most of those netizens would move off the web and onto the native apps which the video streaming companies would, inevitably, have produced?
I say "inevitably" because most of them stream to native and web anyway these days, and in many cases the native apps are wrappers around HTML5 players. In a way, the w3c's acceptance of EME allowed the web's reach to extend to more devices, whereas if it hadn't taken EME on, the web may have shrunk.
(I'm kinda just thinking out loud here. Sorry for the rambling)
Streaming is a tiny portion of what the web is used for.
Even so, entertainment should be... entertaining. I got free Paramount Plus with my phone service. It has DRM, ad-blocker detectors, and all sorts of other nonsense, to the point where it usually doesn't play videos. I went with Youtube over Star Trek. That's not an ideological choice; it's just not entertaining to fight with computers to play a video or to talk to support.
I suspect the effect would have been the opposite: a more rapid decline of the major content producers. This stuff needs to be easy and to work. Netflix did that, before everyone started to jump ship. Napster did it well too.
At some point, there's a spiral, where:
- Declining usability / quality leads to declining viewership
- Declining viewership leads to declining budget
- Declining budget leads to declining usability / quality and more pressure on monetization
... and so on. That's the disruption S-curve. In retrospect, I'm guessing that would have happened if large content producers forced apps.
But video is probably the most data-intensive thing most people interact with. How many webpages, books, songs, pdfs is equivalent to a 4 minute 1080p Youtube video?
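Back-of-envelope, with assumed rough averages (around 5 Mbit/s for delivered 1080p video, ~2 MB for a typical web page, ~4 MB for an MP3, well under 1 MB for a plain-text book): 4 min × 60 s × 5 Mbit/s ≈ 1,200 Mbit ≈ 150 MB, so that one clip is on the order of 75 page loads, 40 songs, or a few hundred books.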
At risk of sounding optimistic, I think EME has really bolstered adoption of the open and DRM-free BitTorrent protocol for media streaming. I didn't do any actual head-to-head comparisons, but movies that come out on services like Paramount+ also tend to get a simultaneous release on BitTorrent trackers, albeit for some reason it usually isn't mentioned in the trailer. Despite the lack of advertising, though, it really seems to work: on some recent movies, stats I've seen on trackers show very impressive results for the BitTorrent box office. One movie I looked at had been seeded possibly millions of times in just a few days.
So I am really thankful for EME for helping to support open standards in that regard.
Nope, I don't think EME is the reason piracy is having another boom. Looking at music stats, while there is still a thriving piracy community, most people have now given up and pay for music streaming services, and it's easy to see why. Most record companies and artists (reluctantly) accept that it's better to have something rather than nothing, and you can access the same catalogues from one app to the next (there are obscure asterisks, but for 80-90% of people it's not a bother). On the other hand, studios are still deep in denial about that simple fact and have tried to slice-and-dice access to movies, which as you might have seen led to the revival of movie piracy.
To simplify: for music you just choose whether you want the additional services offered by Spotify or Apple, without worrying about which catalogue each has (barring some obscure asterisks), while the polar opposite has happened in the movie industry, where you are either actively researching which title is on which service or have just given up and pirate. As Gabe Newell said, give consumers a better service and they'll use it. Record companies succumbed and quietly accepted this; studios haven't (and I'll wager that they will eventually accept having a single service).
> I didn't do any actual head-to-head comparisons, but movies that come out on services like Paramount+ also tend to get a simultaneous release on BitTorrent trackers…
When you say "released", do you mean they're pirated almost immediately? Or is there now a legit way to get non-pirated content that I'm not aware of?
The EME spec is useless junk. It describes a fictional architecture that no browser has implemented. It's not possible to use it for any actual DRM implementation; if it were, it wouldn't be DRM! The authors of the spec are on the record saying they would never use the implementation described in the spec.
The real specs that all the browsers and media vendors used for DRM are vendor-specific closely guarded secrets, as they have always been.
The only purpose the spec ever had was laundering DRM with W3C's reputation.
Because it's not relying on the EME spec. If you want to use Widevine, you have to sign an NDA with Google and use their proprietary spec for it. Exactly like you'd have if EME did not exist.
The EME spec only documents how to use a "Clear Key" scheme that is complete trash, defeated in one line of code, and of course not allowed by any website with DRM.
The EME spec says nothing about Widevine or any real-world DRM API. The CDM side, that does 99.99% of the actual work, is not documented in the EME spec. It's only a hypothetical construct alluded to in the spec, but not an actual API. The actual browser implementations don't even use the architecture outlined in the EME spec. Almost all of the EME spec is non-normative, or for the decoy Clear Key non-DRM.
EME is like a spec that says Adobe Flash is <object type="application/vnd.adobe.flash" />. There, Adobe Flash is now an open web standard. You have everything you need to handle .swf files without plug-ins, except the content playback module, which is a detail left for each vendor to figure out.
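To make the Clear Key point above concrete, here's a minimal sketch of the only flow the spec documents end-to-end (the method names are the standard EME API; the codec string and the base64url kid/k values are made-up placeholders):

    // Minimal Clear Key sketch: the "license" handed to the CDM is the
    // decryption key itself, as plain JSON. kid/k are placeholder values.
    const video = document.querySelector('video') as HTMLVideoElement;

    const config = [{
      initDataTypes: ['cenc'],
      videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
    }];

    const access = await navigator.requestMediaKeySystemAccess('org.w3.clearkey', config);
    const mediaKeys = await access.createMediaKeys();
    await video.setMediaKeys(mediaKeys);

    video.addEventListener('encrypted', (event) => {
      const session = mediaKeys.createSession();
      session.addEventListener('message', async () => {
        // No license server, no secret: just hand over the key in the clear.
        const license = {
          keys: [{ kty: 'oct', kid: 'LwVHf8JLtPrv2GUXFW2v_A', k: 'ClKhDPQMW0zY3nYTmZ9aHg' }],
        };
        await session.update(new TextEncoder().encode(JSON.stringify(license)));
      });
      session.generateRequest(event.initDataType, event.initData!);
    });

Everything a real deployment needs beyond this (the Widevine/PlayReady message formats, the CDM itself) sits outside the spec, in the proprietary layer.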
I get what you're saying about the implementation of the CDM internals not being documented, but to me - an MSE/EME player developer - it seems that the main chunk of the Encrypted Media Extensions API also applies when Widevine, PlayReady and so on are used.
The user-agent still relies on the same interfaces and concepts (MediaKeySession, key id, same events, same methods, same way to persist licenses), and the application can generally work transparently, without special-casing each key system (well, in reality there are a lot of compatibility bugs in a lot of EME implementations, but those are not on purpose).
For example the server certificate concept generally does not apply to clear key implementations at all but is frequently used when using Widevine.
Clearkey is thoroughly documented because it is the only key system that SHOULD be implemented in all user agents with an EME implementation, though nobody, not even the writers of that specification, thinks it is a good idea for real production usage.
In my work we rely a lot on the EME specification, and have referred to it many times in exchanges with our partners when we thought that their CDM implementations (e.g. on specific set-top boxes, smart TVs, game consoles, etc.) led to non-compliance with that spec (e.g. unusable keys not being properly announced, etc.).
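As a sketch of what that key-system-agnostic application side looks like in practice (the key-system strings are the usual well-known ones; the license URL, wire format and codec string are placeholders, and real players layer robustness levels and per-device quirks on top):

    // The same MediaKeySession interfaces, events and methods are used
    // regardless of whether the CDM behind them is Widevine, PlayReady
    // or Clear Key.
    const KEY_SYSTEMS = ['com.widevine.alpha', 'com.microsoft.playready', 'org.w3.clearkey'];
    const LICENSE_URL = 'https://example.com/license'; // placeholder

    const config: MediaKeySystemConfiguration[] = [{
      initDataTypes: ['cenc'],
      videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.640028"' }],
    }];

    async function pickKeySystem(): Promise<MediaKeySystemAccess> {
      for (const ks of KEY_SYSTEMS) {
        try {
          return await navigator.requestMediaKeySystemAccess(ks, config);
        } catch {
          // Not supported on this user agent; try the next key system.
        }
      }
      throw new Error('No supported key system');
    }

    async function setUpDrm(video: HTMLVideoElement): Promise<void> {
      const mediaKeys = await (await pickKeySystem()).createMediaKeys();
      // Widevine deployments that use a service certificate would install it
      // here via mediaKeys.setServerCertificate(cert); as noted above, the
      // concept doesn't apply to Clear Key.
      await video.setMediaKeys(mediaKeys);

      video.addEventListener('encrypted', (event) => {
        const session = mediaKeys.createSession();
        session.addEventListener('message', async (msg) => {
          // Identical round trip for every key system: forward the CDM's
          // opaque message to the license server, feed the response back.
          const res = await fetch(LICENSE_URL, { method: 'POST', body: msg.message });
          await session.update(await res.arrayBuffer());
        });
        session.generateRequest(event.initDataType, event.initData!);
      });
    }

The per-key-system differences (license message formats, robustness strings, persistence quirks) stay behind those shared interfaces, which is also where the compatibility bugs mentioned above tend to show up.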
_PS: off-topic but I came across your projects and comments online many times, IIRC mostly about image encoders / decoders and on Rust (both subjects on which I love to read). That was always very interesting, thanks!_
The W3C was already powerless when it made that move; it would have been even more powerless if it hadn't.
That situation truly opened my eyes to the fact that, for most people, pragmatic arguments of the kind “We cannot stop drugs anyway, so we had better legalize them so we can regulate them.” only seem to apply when they already agree, and the same people who make that argument in one case are suddenly staunchly against such pragmatism when it no longer goes their way.
DRM would be there regardless of what the W3C did, and at the time the lesser of two evils was certainly the W3C having control over the specification to some degree.
I actually didn't know the EFF had such a position; I felt it was the last agent of internet transparency and freedom that hadn't gone insane. It's a sad thing to read, and I won't be donating to it any more if that is its modus operandi.
I do have respect for TBL's achievements, but honestly I don't see the purpose of the W3C, and also have never seen the justification for burning public money into W3C's pay-as-you-go org, before or after this change.
Let's take a look at what W3C did lately, shall we?
W3C HTML 5.2 was published in 2017, based on WHATWG material. W3C HTML 5.3 was "published" as merely a commit against WHATWG's HTML repo, with a promise of further sanctioned snapshots that never materialized. How could a snapshot of WHATWG's phonebook-sized "living standard" be consistent and deserving of the "standard" badge anyway?
CSS is such a crap specification, and so absurdly outside its scope, that the only reason for its existence seems to be an organizational stalemate, where Google lets W3C keep some token significance in web specs and gives long-term CSS editors job security, at the cost of giving up any mental discipline, and even versioning, with the goal of encumbering the web to the point of no return.
The state of web browsers is so abysmal it could hardly be any worse. That situation should have been avoided by a competent, real standardization body. But TBL is busy with completely irrelevant stuff such as XML and RDF.
Sorry TBL, you might be a nice guy and tinkerer, but a manager you're not.
Time to take standardization to ISO or another "real" standardization body.
> Time to take standardization to ISO or another "real" standardization body.
If you want to see how the web could be worse, do this.
The ISO process is openly hostile to the development of royalty-free standards. You can direct a lot of criticism towards the W3C, but they got the patent policy right.
And almost every time I read anything from there, I just… Ugh. I don't even know where to start with describing the extent of just how much of a shitshow ISO represents.
Edit, perhaps like this:
After many years of thinking intensely about the organizational problems involved, I've come to a conclusion that seems rather absurd to me:
ISO could only deteriorate to this degree because of two things:
1. An absurd number of decades-old bugs in Microsoft Word's Master Documents feature: http://www.addbalance.com/word/MasterDocHudson.htm.
This tacitly yet tremendously increases the organisational cost of writing policy, process documentation, work instructions, &c. in a truly standardized manner (don't even get me started on ISO's authoring tools, especially not their internal ones: Microsoft Word Wizard .wiz files. I almost fell out of my seat when I first learned of that). Many people working in pure IT organisations fail to realize this, because with a large tech-affine workforce there are plenty of workarounds; those workarounds don't work in large traditional enterprises. Some of those have invested in workarounds anyway, but most haven't, think they couldn't, and thus won't.
2. A feature-parity gap in the file-save dialogs: Microsoft Excel does NOT pre-populate the filename suggestion from the Document Title metadata field, while Microsoft Word DOES. This makes semverdoc.org enforcement impossible in large organisations still stuck in email-as-version-control-for-Microsoft-Excel land, i.e. the majority of them, and there is no replacement for Microsoft Excel in most large enterprises.
Everything else seems like it just mostly comes down to sociopaths hiding behind cluelessly middle-managed misunderstandings.
And the really sad thing:
If said sociopaths really grokked all of this, they'd probably do their damnedest to undo this middle-management incompetency-compensation competence failure, because while it might seem like it ends up profiting /someone/, it actually appears to be one of those rather rare true loss–loss–loss scenarios.
Come to think of it, see also:
The subtle horror of IBM's Zachman Framework, as well as technical writing skills:
I personally find the "Living Standards" from WHATWG difficult to use, specifically for keeping track of what has changed, e.g. which new HTML elements have been added. With a versioned standard, I can work towards supporting that version, and I know the standard is in a stable state with known backward-compatibility guarantees.
How is CSS not versioned? CSS may be complex and have a lot of baggage, but that is what happens with any language that grows and evolves over time. I don't see how that relates to giving CSS editors job security, especially given that they are doing a lot to provide modern and advanced capabilities in a way that is implementable and addresses users' needs.
XML and RDF may be irrelevant to you, but they are not irrelevant to everyone.
I'm surprised the w3c wasn't already a non-profit. Didn't this come up 20 years ago, when serious things with HTML were being discussed and battled over with the various browser-war entities in the mix? Hm.
W3C is jointly hosted by the MIT Computer Science and Artificial Intelligence Laboratory (MIT CSAIL) in the United States, the European Research Consortium for Informatics and Mathematics (ERCIM) headquartered in France, Keio University in Japan and Beihang University in China.
For comparison, can we look at what portion of MIT's research grants are Defense related? MIT being the US host.
I wouldn't be surprised if it's similar, but I haven't found it lazily googling.
Most academic CS and IT research in the US is defense-funded, and has been all along. The same goes for most other countries. So any reasonable academic home of a web-related entity is likely to have its research budget be mostly defense-funded.
When you ask people about The Internet, the parts they describe were mostly funded by the NSF and venture capital. DARPA has been largely out of the picture for a long, long time.
Thanks for the details. The Department of Defense is listed at 18% MIT-wide, which is not close to 60%, true. It is the #1 source of research funds.
Curious what the Department of Energy grants are about, and what overlap it ends up having with defense purposes if it's nuclear. That's the second biggest source of research funding at 15%. 18+15==33%, still not 60%.
> That also is false. It's nowhere close to "most" across all US CS and IT research.
I'd be curious to see numbers for that too, for MIT CS department(s) specifically, and in general in the US. I have definitely known a lot of CS professors and projects with DARPA and other DoD funding, it is quite common.
Wow, the total amount of "research expenditures" "on campus" in the earlier chart was only $447 million.
And here we have a $9.6 billion contract (albeit not for just one year, I'm sure) that wasn't included... it does seem like that earlier chart was not very comprehensive.
Why is MIT OK? Do you not have an issue with them also getting funding from the American defense department? Or, if you're concerned with the open web, do you not remember when they persecuted Aaron Swartz for the heinous crime of downloading knowledge?
> You're going really hard in the paint to defend Chinese genocide of Muslim Turkic people.
Where did I do so? Please either provide a quote of me doing so, or retract your statement as it otherwise amounts to libel.
To clarify, if somehow you managed to misinterpret my comments: I am in no way defending any $bad_shit that China does or has done. I am pointing out that the other nations involved have likewise done so, and therefore focusing on just one nation is disingenuous because it gives all the others a pass.
The war on the free and open web has been waged just as much, if not more so, by Western powers as it has by Eastern ones.
> or retract your statement as it otherwise amounts to libel.
You may not be very familiar with Hacker News. The parent issued an opinion about your behavior in commenting. Commanding them to retract it or else, isn't going to work here.
More than the military budget, the reluctance to let the population access an unfiltered internet is what's problematic. The CCP is trying to create a network separate from the global one, which raises many questions.
18% of the world population, but what is relevant is the percentage of web users. This country denies its citizens the benefits of an open web and treats the ability to route around censorship as a bug to fix. Why give someone a seat on a committee steering something they openly want to sabotage?
https://www.eff.org/deeplinks/2017/09/open-letter-w3c-direct...