I know Google provided a free hardware decode block for VP8, and I assume also for VP9 (since there appears to have been some, though not universal, hardware acceleration for that according to the Wikipedia article). I'm hoping they'll soon (or already) have the same for AV1.
I see Apple, Nvidia, Intel, Allegro, AMD, and a bunch of other silicon vendors are also members of the Alliance for Open Media, so I expect to see hardware acceleration within the next 2-3 years. I'll be impressed if it's earlier.
Edit: maybe hardware acceleration will come earlier. Since the standard was released recently with the collaboration of the silicon industry, it's likely that current in-design/early-production silicon already implements the important parts of the codec, so we can expect consumer products with at least partial acceleration within 6 months.
For what it's worth, Intel shipped "hybrid" implementations of HEVC and VP9 for older products where the CPU does some work but the GPU's shader cores do shader-y bits (https://www.anandtech.com/show/10610/intel-announces-7th-gen...). I don't know how far that approach gets you, especially in mobile settings where you really care about power use. But you might see that before complete hardware decoders everywhere.
A use case that might be interesting before full hardware acceleration is to provide an improved still image format. The graph in the post is a great demonstration that improving video encoding and improving image encoding are closely related these days. AV1 has backing and went through IP review that may help it avoid being mired in a patent mess like previous proposals to replace JPEG. (Apple is using .heic, based on HEVC intra coding. Android'll support .heic in the next major version, but the HEVC patent situation will likely constrain how many external tools, platforms, etc. pick it up.) There's work on defining an AVIF format (https://github.com/AOMediaCodec/av1-avif) and tech demos look great (https://people.xiph.org/~tdaede/av1stilldemo/). So, maybe fewer ugly JPEG artifacts in our future, partly thanks to that deringing filter mentioned at the end of Monty's post.
(I said more or less the stuff about stills above, with a couple more details and links, at https://news.ycombinator.com/item?id=16699000)
A mobile device needs to be able to hardware-encode it as well - countless hours of 4k/60 video are taken on phones.
The bitrate used to encode real-time captures is generally two or more times greater than the bitrate you want to distribute.
Nobody edits at that bitrate; compression is concatenated multiple times. How a codec copes with real-world multi-generation compression is key.
Of course once mastered the video gets heavily compressed to be sent to the consumers.
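A toy numerical illustration of the generation-loss point above (purely illustrative: plain scalar quantization stands in for a real lossy codec, and the step sizes are made up):

```python
# Toy model of concatenated lossy compression: each "codec generation" is a
# quantizer with its own step size, as when capture, edit, and distribution
# codecs are chained. Real codecs are far more complex; this only shows how
# mismatched lossy steps compound error.
def quantize(samples, step):
    """Lossy round-trip: snap every sample to a grid of the given step."""
    return [round(s / step) * step for s in samples]

def max_err(a, b):
    return max(abs(x - y) for x, y in zip(a, b))

signal = [x * 0.1 for x in range(100)]          # stand-in source samples

once = quantize(signal, 0.7)                    # one generation
twice = quantize(once, 0.9)                     # second codec, different grid

print(max_err(signal, once))   # bounded by half the first step
print(max_err(signal, twice))  # larger: the two grids don't line up
```

The second pass can only add error on top of the first, which is the reason mastering formats stay as lightly compressed as possible until the final delivery encode.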
Even 12G-SDI is compressed: we consider 4:2:2 10-bit "uncompressed", but 4:4:4 is less compressed than 4:2:2. Once you go into interlacing or stupid things like 24p you're throwing away tons of data.
SMPTE 2022-6 seems to be the transmission method of choice so far; ST 2110 may well take over, but there are significant compatibility issues.
Personally I'd prefer everything originated as 4:4:4 10 bit 150fps (no drop frames), resolution I'm less concerned with.
I work in news though, broadcast quality is what we broadcast. 5mbit h264 PPP with a vbv of 200k is fine for a talking head internationally. 5 times that is plenty for an actual broadcast program with astons and stuff.
Interesting. Not even as a final rendering step after everything has been done in a compressed "preview mode"?
You can tell that they do if you read the scene standards.
They care much much more about this than most people do. Even a lot of people that work with video professionally don’t care as much about codecs and codec support as these guys do.
So no, I don’t believe that “real” scene groups would start using a new codec unless it was widely supported. The scene standards, then, would not include such codecs. Failure to follow scene standards will have your releases nuked in the scene.
As for the torrent sites, if someone puts up videos using codecs that cause the videos to stutter or plain fail to play then the torrent users will learn to associate whoever encoded those videos with videos that don’t work and they will avoid torrents with that person or group name in them.
avi and mkv can contain nearly any audio and video streams regardless of codec. mp4 can contain only a specific set of audio and video streams determined by a limited set of codecs.
You can trivially copy the audio and video streams directly over into an mp4 container provided that the codecs used are supported by mp4. As long as the codecs were supported also by the device you were looking to watch it on then you’d be able to watch it. So the codecs are the most important bit.
If I remember correctly, aac + h264 was commonly used for audio and video respectively, both of which are supported by mp4.
Of course it would still be an annoying extra step for someone who needed the file to be mp4, but at least being able to simply copy the streams is fast and cheap compared to streams that need to be transcoded. Transcoding could take a painfully long time and could also easily further degrade the quality of the streams.
An important thing to note about mp4 is that in some configurations, metadata that is needed for playback is placed at the end of the file. This means you can't play back this sort of file until you have downloaded it completely. Often there would be a separate sample file that you could download to check the quality anyway, but still, it is one reason you might want to go with avi or mkv instead, as I don't think those containers suffer from that problem at all. So if you download an mkv and preview it early in, let's say, VLC, and you know from the nfo that they are using mp4-compatible codecs, then you can both always preview it to ensure the quality is good (or even that it is the movie you are looking for at all), and then, after you've downloaded it, quickly copy the streams over into an mp4 container and consume the movie on your mp4 media device.
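That trailing-metadata layout is the MP4 "moov" box coming after the "mdat" box (tools like ffmpeg can move it to the front with `-movflags +faststart`). A minimal sketch of detecting the ordering, handling only the common 32-bit box-size form and using tiny synthetic files rather than real movies:

```python
import struct

def top_level_boxes(data):
    """Yield the type of each top-level MP4 box. Only the common 32-bit
    size form is handled; 64-bit sizes (size == 1) and to-end-of-file
    sizes (size == 0) are not handled in this sketch."""
    off = 0
    while off + 8 <= len(data):
        size, = struct.unpack(">I", data[off:off + 4])
        yield data[off + 4:off + 8]
        if size < 8:
            break  # unusual size form; stop rather than loop forever
        off += size

def is_faststart(data):
    """True if moov (playback metadata) precedes mdat (media data), i.e.
    playback can begin before the whole file has been downloaded."""
    order = list(top_level_boxes(data))
    return (b"moov" in order and b"mdat" in order
            and order.index(b"moov") < order.index(b"mdat"))

def box(btype, payload=b""):
    return struct.pack(">I", 8 + len(payload)) + btype + payload

# Tiny synthetic files (not playable, just structurally box-shaped).
ftyp = box(b"ftyp", b"isom\x00\x00\x02\x00")
streamable = ftyp + box(b"moov", b"\x00" * 16) + box(b"mdat", b"\x00" * 32)
tail_moov = ftyp + box(b"mdat", b"\x00" * 32) + box(b"moov", b"\x00" * 16)

print(is_faststart(streamable))  # True
print(is_faststart(tail_moov))   # False
```

Matroska avoids the problem by design, since its seek information can be placed at the front and players tolerate incremental data, which is why an mkv previews fine mid-download.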
MKV has built-in support for multiple language tracks, chapters, and subtitles (MP4's support is non-standard and incompatible), so if a release needed those, MKV was used instead of AVI.
Whenever I remux MKV->MP4, I lose chapters.
Technical superiority is more important than compatibility, especially when you have MPC-HC and VLC that are happy to play practically any format ever.
But yes, MKV supports virtually any codec, so that is a big plus. It comes at the expense of support on different platforms.
I've used MKV for lots of things, such as video production work, remuxing digital TV stream grabs so normal programs can play them, etc. It's the most reliable container format, from my point of view.
My MKVs typically have H264 video + AC3/DTS audio.
The data from YouTube or Netflix dwarfs what you'd get from a torrent distributor of a show or whatever.
Some pirate/niche markets may adopt a codec solely for some arbitrary reason, but those reasons (say, maximum compression, good open source tools) are normally orthogonal to what a streaming business usually needs (compression, but also hardware/software compatibility for all sorts of form factors, and commercial unencumberment).
It was far from clear 15 years ago, when Xiph released Theora (aka VP3, an ancestor of AV1), that success against mighty MPEG, with their enormous war chest from a decade of royalties and dozens of patents (and the money to defend them), was even possible, much less assured.
It's really a wonderful story of the triumph of open source philosophy over the IP licensing model.
The story here isn't that the Big Evil companies behind the MPEG codecs were beaten by the Good companies behind the Alliance for Open Media -- the story is that the pool of companies is kinda-sorta the same or comparable, with plenty of them playing both sides, a fact not lost on the founder of MPEG.
 http://blog.chiariglione.org/2018/01/28/  https://news.ycombinator.com/item?id=16261313
No, it's not monolithic inside, but the worst actors have solid control and it warps the behavior of the entire organization.
We saw that coming a long, long time ago and yet barely in time.
True, but if we are comparing models for getting stuff invented collaboratively (open source vs. licensed IP), then this is hardly misdirection. Indeed, the fact that the same sort of actors found it possible/useful/necessary to go down the open route makes TD-Linux's point all the more interesting.
This is important because when policymakers talk among themselves they often just assume that IP and such are necessary for fostering innovation.
Would AV1 have been possible in 1993 when MPEG-1 came out? How would you have bankrolled the standard at the time?
No. There weren't enough transistors in the world. I'm not being silly. Something like AV1 happens now because Moore/Dennard allow it to.
I think it's widely understood that the technical innovations in sixth-generation codecs like AV1 were simply not computationally feasible on circa-1993 devices, so I highly doubt that's what was meant. Therefore I'm not sure how transistor density relates to funding models for DSP innovation.
When Xiph.Org started (1994), even audio required high-end hardware. Yes, sound cards were available for PCs, but big enough hard disks were not.
When only the elites have access, the standards are made by elites.
So perhaps the cutoff year of 1993 is right before the threat of non-pooled patents was well-publicized.
There was a lot of video codec innovation back then too, but everything aside from the MPEG or ITU-T codecs was proprietary, and everyone took out patents. To Monty's point, the sheer number of endpoints capable of consuming digital video whose consumption somehow results in income for the publishers was just not quite there, making an alternate push for deriving revenue from DSP IP than patent licensing fees much less likely.
AV1 is basically funded by Google as a quasi-charitable endeavor, as far as I can tell. Whilst there may be several apparent funders like Mozilla, they in turn trace their funding to Google.
"Research funding by Google" is never going to be a tactic that policymakers take seriously, and rightly so, even if it happens to be doing a lot of good work in this particular time period.
Nothing charitable about it. They want to make money. So do a bunch of other companies. And they've realized the way to win that game is to relinquish control over the fundamental/infrastructural pieces.
That also has some beneficial aspects beyond 'a rising tide lifts all boats', but I feel more comfortable appealing to reliable motivations. The benefits to others aren't an accident, and they're important to e.g. us at Xiph, but let's not ascribe industry interest to anything more charitable than 'enlightened self-interest'.
It’s not an “open source philosophy over the IP licensing model.” It’s bankrolling a codec with content, device, and middleman revenues rather than content player revenues. The $_ dollars that Sony or Panasonic got for every DVD player through patent licensing has simply been replaced by the $_ that Google gets for every ad impression on YouTube or sale in the Google Play Store. It’s not a philosophical shift, merely a business model shift enabled by Internet video distribution.
They could have chosen to be much more focused on narrow self-interest, kept the technology proprietary, and played games with their competitors. I honestly believe they chose the high road because that's part of their culture.
And I don’t agree with your point about the “high road.” Google releases lots of open R&D “for free” that’s bankrolled by its enormous advertising profits. I think you have to view their R&D efforts through that lens, because none of it would be possible without the monetization models enabled by the advertising business. Is Android, for example, a “high road” compared to Symbian, just because Android is free and open? Not in my view.
(Incidentally, it’s a lot like Xerox. Xerox PARC invented a ton of stuff that it didn’t patent and people freely used. But it was all bankrolled by their patents on copiers. When Xerox was forced to license those patents to Japanese companies, the money printing press disappeared and so did PARC.)
Granted, companies like Google and Xerox don't do things out of an altruistic desire to save the world. It's enlightened self-interest, and they do develop business models to make money.
The difference between their model and the more typical patent protection / royalty approach is that in the open case, anyone else can use the research to do whatever they want. Rather than being siloed within a single company, and protected through legal means, the research is free for anyone else to leverage. This results in best of class open solutions rising to the top, and avoids the problem of everyone reinventing the wheel in their own little wheelhouse.
I care less about how much money Google is making, & more about how much access I have to the research they are sponsoring.
3-5 years will tell whether it's a triumph or a tragedy. It's certainly not possible to know today. Fans were quite sure Beta meant the end of VHS, but there are so many other (more important, in many cases) factors than the technology.
The fact that AV1 is explicitly not patent-encumbered is a very important feature.
It really is an interesting story, maybe someone will write about it some day.
Here's a video he created a few years ago, taking apart some popular misconceptions about how digital audio works: https://xiph.org/video/vid2.shtml
He does. I love his posts. Some say you don't really understand something until you can explain it to someone else. By that metric he's probably in the top 5 codec folks in the world today.
Any idea how many more articles are planned, so that can adjust my expectations a bit?
Can we somehow convince you to spend part of your time on more technical blogs and videos? Like others here said, you have a knack for writing really accessible technical articles without them feeling dumbed down.
I don't know if you're just more into doing the hands-on work of developing the codec (I can imagine that's exciting), or if it's a question of technically not being hired to do that.
In the latter case, maybe "The People" could vote with their wallet to convince the higher-ups otherwise?
Like, Ubuntu lets us decide what we want them to spend our donations on. If we had that with xiph.org and it had a section that said "more technical blogs!", or maybe even something like a Patreon set-up where you can pledge automated repeat donations for each blog entry, I wouldn't be surprised if there were enough nerds willing to send donations specifically to justify to the managers letting you spend more time on outreach.
Intel implemented VP9/h265 decoding in its processors just recently.
So we have to buy new processors.
Software encoder/decoder implementations will also be a thing in the interim, until the hardware is ready.
Even if AV1 and VP9 share structurally similar components, no VP9 decoder is going to work out of the box. It may mean that people who produce AV1-capable encoding/decoding hardware have a better starting point, though (provided they had VP9 hardware beforehand).
Sadly Netflix doesn't see it that way and demands that I update my perfectly functional i7-2500 system to a newer one to play back their 4k content.
(Sorry, slightly OT rant).
First, the reprogrammability of FPGAs means lots of unused gates and lower density, i.e. wasted space. With flash technology nowadays it doesn't waste as much power, but the footprint is just so large that it's not worth it.
Second, a lot of the things that make custom silicon fast can be found in GPUs, such as ALU, MAC, FFT, FIR, SIMD and other DSP slices. Sure, there is a whole additional layer of optimization that can be done with custom silicon, but the computational powerhouses already exist. It's mostly (not entirely) a matter of reprogramming the memory movements from block to block, delegating certain operations to the CPU, and general optimization. Most new codec algorithms can probably be done pretty well with GPUs on phones these days.
And unfortunately, cell phone companies aren't interested in keeping older HW relevant :( Other industries might, though. My friend said lots of military radar projects he worked on used FPGAs.
Dedicated, fixed silicon will outperform FPGA's in performance and energy use practically every time.
For rare/uncommon use cases, having an FPGA you can adapt to your algorithm is fantastic, but for a use case as common and day-to-day as decoding video, a dedicated chip is far more ideal.
>PureVideo occupies a considerable amount of a GPU's die area
MSU's comparison: http://www.streamingmedia.com/Articles/News/Online-Video-New...
x265 complained about the recent MSU test: http://www.x265.org/x265-incorrectly-represented-msus-2017-c...
But equally x265 likes to refer to MSU comparisons when the results make x265 look good: http://www.x265.org/
An x265 post referencing another paper which says HEVC outperforms AV1. The post insinuates AV1 might not be royalty-free. Reads like an attempt to spread fear, uncertainty, and doubt: http://www.x265.org/excited-av1-closing-doors/
But indeed it's a little bit ironic for someone advocating for HEVC to be concerned about patent royalties and licensing.
x264 and x265 were basically never tuned for PSNR, SSIM, or whatever other metrics; those are the values in which AV1 claims to be 30-40% better. And in that case the x265 developer isn't wrong: it is very much the picture, not the numbers, that matters.
VMAF should hopefully bring much more useful numbers. But we will have to see and put some time in since it is very new.
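For concreteness, here is what the PSNR metric under discussion actually computes; a minimal sketch assuming 8-bit samples (peak value 255):

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means lower mean squared
    error, which is not the same thing as looking better to a human."""
    diff = reference.astype(np.float64) - distorted.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4), dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] = 16               # one pixel off by 16 -> MSE = 256/16 = 16
print(round(psnr(ref, noisy), 2))  # 36.09
```

Because PSNR only sums squared pixel differences, an encoder tuned for it can trade visually important detail for numerically small errors, which is exactly the kind of mismatch VMAF tries to address.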
At Transloadit we've been testing the reference implementation, but encoding a video with it is still about 20x slower than h264 :) There will be many improvements to speed it up, but it's expected to remain a lot slower (up to 4x as slow), or so I've read.
In markdown that would put the 4 in italics where it's not supposed to be. It still works as emphasis, but it can be a bit confusing.
People like it because it has better licensing.
That is definitely going to change when real optimized encoders are written, especially since all the major CPU/GPU manufacturers are part of the Alliance for Open Media.
That said, I hate to criticize them too much for this detail, in fact, congratulations are in order. It’s been a great engineering effort so far, and hopefully this will continue with the reference implementation evolving into some highly optimized players and encoders.
Aside from the engineering accomplishment you can imagine that it took quite a bit of business effort and coordination to get all these players together and on the same page.
I’d bet that whoever first proposed the extension name has never been involved with user experience design. But again, it’s relative: an excellent chef made a great meal, and if just the garnish on top is not perfect, that’s not a lot to complain about.
I see “regular users” making decisions based on container file formats all the time, as to whether the file will be playable on their favorite device: DVD player, smartphone, portable music player etc.
It’s not very reliable or accurate, but if you see an .mkv it’s usually h.264 or HEVC, .webm is VP8/VP9, and .avi will play on your DVD player.
There’s really no technical reason why file extensions should reflect the container format.
If I rename “BunnyVideo.mkv” to “BunnyVideo.av1”, VLC will still play it just fine. But now I can tell at first glance that it will not play on my legacy smart TV or last year’s netbook with no hardware decoding.
Whether that AV1 video is actually in a MKV or WebM container doesn’t really matter at all to your average user. It only matters if it will play.
Maybe it would be even better to just have .av1.mkv instead, but double file extensions don’t work on Windows and people think it’s a trick to get you to run malware.
A container contains more than just a single video stream.
They can also contain audio streams and subtitles, with multiple of each, of different types.
What are you going to call a file containing an AV1 video stream, an MPEG2 video stream, a 2-channel MP3 audio stream, a 5.1-channel AC3 audio stream, SubRip (.srt) subtitles for one language, and VobSub (.sub+.idx) for another language?
The above also means that just knowing the video codec is not enough to know whether your device can play the file, since your proposed extension says nothing about e.g. the audio codec(s).
Trying to play a file with e.g. FLAC audio on a device that does not support it? Worst case it doesn't work at all, best case you get video but no audio.
All that is before you get to the actual video codec, which also complicates matters: For example, H.264 has profiles and levels, and not all devices/decoders support all profiles and levels. This means you also need to know the profile+level of the file, and the maximum supported by the device to tell whether it can play the file.
Of course you can make an “.h264” with Opus audio, but in reality no one does that. And anyway, even if that weren’t the case, once you decide to use that extension you agree not to do that.
Audio is also both easier to play on anything with software encoding and easy to reencode if needed.
Perfect is the enemy of the good.
What useful information does the “.mkv” or “.webm” extension offer exactly? It’s “correct,” but also completely useless. The only signal it provides is accidental and unreliable.
Might as well use “.video” and “.audio”.
People definitely do that. And even if they didn't, significantly more people create h264+flac releases, and flac is also supported by much fewer devices than e.g. AC3.
E.g. .mkv does convey useful information. A player needs to support the container format as well, not just the codecs for the streams it contains. Different containers also support different features (not all containers are equal), and there are tools that only work for some containers.
The signal provided is only accidental and unreliable as a proxy for audio or video codec, not in general.
.webm is also a bit of a bad example here, since it's just a subset of matroska (mkv), but more importantly it initially only supported a single video codec and a single audio codec (VP8 & Vorbis). Of course, that has changed since, with the addition of support for VP9, AV1, and Opus.
For example, a new extension could mandate minimum support for codecs or features. As Wikipedia explains, the Matroska file extensions are .MKV for video (with subtitles and audio), .MK3D for stereoscopic video, .MKA for audio-only files, and .MKS for subtitles. I’m not aware of any reason an extension couldn’t represent minimum package requirements, like audio/video/at least the AV1 codec, etc., that would work for most cases, while potentially still retaining extensibility where practical.
Instead you create a standard that mandates container X, support for video codecs Y[, ...], and support for audio codecs Z[, ...]. Then you can document a player/device as supporting that standard and also name video files after it.
The gist of the comment should have referred only to what I brought up, which, as you point out, is a simple minimum standard for packaging and contents as it could relate to a file extension.
How is a user to know he might need to install a special codec to play a video? This is why OS and hardware support across many devices is essential to the success of new formats, and why the Alliance for Open Media is working to ensure AV1 succeeds across the spectrum of available devices.
I don’t think anyone decided to do it like that. It’s just been the way it always was.
Anyway, early on container formats were actually correlated with the codec, or at least the multimedia stack you needed.
Let’s remember the early file extensions:
.avi .rmvb .wmv .mov .flv
Everyone knew what they needed to install to play one of these files. It’s only later, starting with .mkv really, that container formats stopped having anything to do with codecs.
Even so, for a long time .mkv just meant h.264 to most users. If you had a device that could play mkvs, it could play h.264 as well.
The confusion started full on with HEVC. To many users mkvs suddenly just stopped being playable.
I don’t see any reason to continue this trend. AV1 should just use “.av1”. Any device/program that can play av1 can also handle mkv/webm. And no one will be confused.
The only reason for this “I agree” comment is to underscore one last time there is absolutely no reason it has to specifically be .av1, and in fact several reasons it should be something else.
Whoever is involved please just walk right into the execs office and make the case to bring it up at the next meeting, it’s not too late.
I'm pretty tech savvy and I haven't downloaded a video file to my computer from the internet in years.
So there you have it from an "unbiased^, first impression" standpoint.
^ Unbiased in terms of any exposure / prior knowledge of this project.
For more history on this, see https://en.wikipedia.org/wiki/DivX#History
Source: worked at DivX
My question is, at what level do these encoders work at? Are they basically specialized SIMD instructions, or are they fully featured chips that take raw data as input and produce byte streams in the format of the protocol? Or somewhere in between?
Then you have APIs that try to disassemble all the typical stages of video codec processing, which applications can call and GPU drivers can implement; these serve as a bridge between hardware-assisted decode and application code. These are APIs like DXVA, or one of the several in use on Linux.
 https://en.wikipedia.org/wiki/Unified_Video_Decoder  https://en.wikipedia.org/wiki/Video_Coding_Engine  https://en.wikipedia.org/wiki/Nvidia_PureVideo  https://en.wikipedia.org/wiki/Nvidia_NVENC  https://github.com/hermanhermitage/videocoreiv  https://wiki.archlinux.org/index.php/Hardware_video_accelera...
For example, if I understand the Chroma-from-Luma prediction correctly, the maths involved is just 2-D linear regression (once for U vs. L, once for V vs. L). That's a pretty generic task. Even with the domain specialisation that it is done over the pixels in an encoding block, we are still talking about concepts common to all codecs.
So maybe the existing acceleration hardware can already do it. But even if it can, the required primitive needs to be exposed to software if new protocols are to benefit from it. So my question (and maybe Boxxed's too) is whether the hardware interface is low-level enough for such adaptability.
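The regression idea in the comment above can be sketched in a few lines. This is a hypothetical illustration, not AV1's actual CfL algorithm (in AV1 the scaling parameters are signaled in the bitstream rather than solved for in the decoder); the block size and all values below are made up:

```python
import numpy as np

# Stand-in 8x8 reconstructed luma block and a chroma block that is
# (by construction) roughly a linear function of it plus noise.
rng = np.random.default_rng(0)
luma = rng.uniform(0, 255, size=(8, 8))
true_alpha, true_beta = 0.4, 90.0
chroma = true_alpha * luma + true_beta + rng.normal(0, 1.0, size=(8, 8))

# Fit chroma ~= alpha * luma + beta by ordinary least squares over the block.
A = np.column_stack([luma.ravel(), np.ones(luma.size)])
(alpha, beta), *_ = np.linalg.lstsq(A, chroma.ravel(), rcond=None)

predicted = alpha * luma + beta
residual = chroma - predicted  # only this residual would need to be coded

print(round(alpha, 2), round(beta, 1))  # close to 0.4 and 90.0
```

The point stands that this is generic multiply-accumulate work, which is why it's plausible that existing DSP/GPU blocks could run it if the driver interface exposed primitives at that level.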
Does anyone have any evidence of this claim?
And six months ago they had improved on that to do live streaming with 32 cores: https://bitmovin.com/constantly-evolving-video-landscape-dis...
Twitch wants to use AV1 for live streaming. Here's a talk on the "switching frame" feature of AV1 for use in live streaming: https://www.youtube.com/watch?v=o5sJX6VA34o
In real time. If the encoder is too slow for that, it is the bottleneck.
1. the uploader would have to quickly send data to the server, which requires fast encoding in a different format (so lower-quality than AV1, and introducing artefacts from a different lossy encoding)
2. this low quality then gets recompressed with a different codec with different artefacts, and possibly worse compression due to the artefacts introduced by the first codec
When uploading to video sites that are not live, the idea is usually to use as high-quality an input as possible first, in which case this won't be much of a problem.
But I might be mistaken about how badly this affects live-streaming! Perhaps the first codec throws out information in a way that smooths out the video, which then makes re-encoding with AV1 faster and compress better with minimal extra loss of detail.
In any case, I wasn't referring to inter-frame parallelizability, I was referring to intra-frame parallelizability which doesn't require a delay.
Now, this works fine for large broadcasts where there is no two way communication between a streamer and their followers or in some type of competitive situations, but that is not the norm.
Case in point, a 40 second 1080p clip would take over 8 hours to encode on an i7-4800 - that's less than 2 frames per minute. You need a lot of horsepower to cut that down to 40s / 60 FPS.
When measured in, say, total sum of all CPU cycles and hence total energy spent, then yes: the decoder is the bigger deal. So your argument holds for non-live videos.
From the article, near the end.
The current AV1 is only royalty free.
When will all AAC patents expire? It has been over 20 years since AAC was introduced.
(although might not be up to spec which was published only recently: https://aomedia.org/the-alliance-for-open-media-kickstarts-v... )
I don't dispute that it is theoretically possible - of course it is, people are making this codec for actual use, and I'm looking forward to it - just that it doesn't seem doable right now.
Not sure how they can make this statement.
VP8 was patent-encumbered all over the place and eventually required licensing from MPEG-LA. Surely AV1 would be infringing on at least some of HEVC's expansive patent pool. Unless, of course, Google is willing to indemnify users against patent infringement lawsuits. Which would be really big news.
Matt Frost of Google is quoted as saying:
"Obviously, if we have an open source codec, we need to take very strong steps, and be very diligent in making sure that we are in fact producing something that's royalty free. So we have an extensive IP diligence process which involves diligence on both the contributor level – so when Google proposes a tool, we are doing our in-house IP diligence, using our in-house patent assets and outside advisors – that is then forwarded to the group, and is then again reviewed by an outside counsel that is engaged by the alliance. So that's a step that actually slows down innovation, but is obviously necessary to produce something that is open source and royalty free."
The original source is a video, but this quote transcript is on Wikipedia: https://en.wikipedia.org/wiki/AV1#cite_note-frost-sme2017-16
A patent for 'user status update suggestions' for example. Yuck.
Terrestrial broadcast standards have to move slowly. Internet-based streaming and storage can move much faster, and it's supremely beneficial to be able to do so. MPEG-2's now-expired patent portfolio isn't too useful for that, and hasn't been for quite a while now.
For decades, companies like Sony, Sharp, NTT, etc., bankrolled that R&D by licensing patents to the technology, and selling devices incorporating the technology.
Today, the Internet enables new monetization models. Companies like Apple and Google can directly monetize video content through iTunes and Google Play (or indirectly through Youtube). They can use that to bankroll their investment in new video codecs.
Was it? Can you name a single patent? MPEG-LA never did.
They never do. It's like the mob: "pay us or you'll be in trouble". You can either pay MPEG-LA (and by the way, you will never know who inside MPEG-LA decided to sue you), or face the consequences (and discover which patents are supposedly infringed).
Yes, it probably would, but if those patents are owned by members of AOM it is not an issue.
AOMedia Video 1 (AV1), is an open, royalty-free video coding format designed for video transmissions over the Internet. It is being developed by the Alliance for Open Media (AOMedia), a consortium of firms from the semiconductor industry, video on demand providers, and web browser developers, founded in 2015.
So far as I can see, that's the final state. Google ended up having to get a license from the MPEG-LA VP8 patent pool.
> Google's Serge Lachapelle notes that today's agreement is "not an acknowledgment" that VP8 infringes on any of the patents claimed by MPEG LA
All of that was covered in the article as well.
Something about the medium and the message here.