
Mozilla and Xiph's new Daala video codec to compete with h.265 - mtgx
https://xiph.org/daala/
======
y0ghur7_xxx
I'm a bit out of the loop, but what was wrong with Theora?

Daala's goal "is to provide a free to implement, use and distribute digital
media format and reference implementation with technical performance superior
to h.265."

Wasn't that exactly what Theora was as well?

~~~
Xcelerate
Theora was never very good. Not really the fault of its creators -- rather,
all the good methods of compression are being hoarded by patent trolls (MPEG
LA and friends), even though most of the methods are obvious. I believe this
new attempt at a codec will also face the same issues.

~~~
nullc
>rather, all the good methods of compression are being hoarded by patent
trolls

Nah, that is simply untrue. Theora was designed in the late _90s_ and targets
a different computational envelope than AVC— it's pretty darn good compared to
MPEG-2 (or even MPEG-4 part 2/DivX), which are more contemporary.

Today there is a much larger computational budget available, plus new
experience and understanding.

~~~
Xcelerate
>> rather, all the good methods of compression are being hoarded by patent
trolls

> Nah, that is simply untrue.

Could you provide something to back this up? I cannot find it now, but I seem
to recall an x264 developer discussing the difficulties in implementing an
open source codec equivalent to or better than H.264 and the problem was
mainly due to the fact that all the best algorithms had been patented.

Perhaps if you are a Theora developer (or know one), you can clarify.

~~~
nullc
> Perhaps if you are a Theora developer (or know one), you can clarify.

I am a Theora and Opus developer, although I'm not exactly sure what
clarification you'd like.

I can tell you that in my codec experience the patents have seldom (never?)
been a major direct barrier to progress by precluding an essential
technique... By their nature they tend to not absolutely foreclose anything
except outright copying a technique— and even then only in a specific context,
and video coding and signal processing are old enough and mathematical enough
that the basics are unpatentable. (Keep in mind— the patent office does not
believe it allows patenting pure mathematics; their definitions of "does not"
and "pure mathematics" and mine may not agree, but their evil is finite and
thus surmountable.)

The impediment from patents seems to most often take the form of having to
spend time and effort on patent research and negotiations, not being sure that
you can just implement some randomly discovered research, erroneously
discarding some useful techniques which could be used but aren't worth the
effort to clear, and the cost of spending time with attorneys teaching them
enough codec engineering— or, frankly, spending time correcting misconceptions
on the Internet— rather than coding.

Or in short, _a patent_ is almost never a big problem for a designer, but _the
patent system_ wastes a lot of effort by creating big, largely non-engineering
overheads that sap engineers' time and energy. The system also complicates or
discourages cooperation by creating odd business motivations and incentives to
be secretive (especially about defense strategies). But the non-free codecs
suffer from some of these costs and pressures too— plus additional ones, like
arguing over which winners and losers get their techniques in the format and
thus a share of the fees (and access to cross-licensing).

This mess also exists just as much outside of media codecs. But the
enforcement is less active— I suspect partially because codecs are unusually
attractive to monetize due to network effects (switching is MUCH more
expensive) and because they make nice attorney-understandable atomic units of
infringement that map to visible features. "I own h264, you have h264. Pay
up!" works better than "I own computing the absolute value by this series of
bit operations. You may or may not do this, I can't tell because of your
obfuscated binary. Pay up! Maybe?". The network effect also makes very narrow
patents more useful— it's much easier to write a patent that reads on
implementations of H.264 (a single format with a fairly exact specification)
than one that reads on any format similar to H.264. Very narrow patents are
less costly to obtain and enforce (Less risk of invalidation), and the network
effect says that you must implement H.264 not almost-H.264 so they are no less
good at extracting royalties. But no one really cares if their kernel uses
xor-linked-lists or not, and it's usually no compatibility problem to switch
if someone starts making threatening noises.

If you were thinking of Jason Garrett-Glaser's early technical analysis of
VP8, I don't think he was saying quite what you walked away with... but it's
also important to note that Jason didn't (at least at the time) have
substantial video coding experience outside of his work on x264 (and some
related things in ffmpeg), and didn't have substantial experience working with
patents, and had never been involved in a RF codec effort of any kind
(including the ones attempted in MPEG). He's "just" a particularly brilliant
guy that came in writing assembly code like a force of nature and made x264
much better. To the extent that he could have been emitting the impression
that everything in video is patented he would have been just repeating the
not-very-well-informed conventional wisdom.

Patent infringement is all about the fine details— so even a patent expert's
off-the-cuff comments are going to be somewhat useless. I'd take Jason's
thoughts on the latency of PAVGW as the word of God; his thoughts on codec
patents, when he'd not even looked at the patents involved? Meh. Later
revisions of his analysis substantially softened some of the remarks, but few
people went back to read
them. (I recall that I was especially amused by some of the things he derided
as being 'unnecessary', as I thought I had a reasonable guess which patents
Google/On2 had been specifically dodging).

------
ksec
Well, it is not very new, as it has been discussed for a while. But at least
now more people know about it.

But as the recent reference implementation has shown, H.265 is moving ahead
faster than H.264 did. Its quality is already quite good, and many
manufacturers are ready to show their encoders and decoders at CES.

So both software and hardware encoder vendors are waiting for the final
draft and final validation before putting them on sale.

VP9 is finally shaping up to be good. VP8 was never up to the x264 encoder's
standard. At least VP9 is making some quality improvements. But most of the
current VP9 material just reads like marketing BS.

While I would love to see Daala, or any codec from Mozilla rather than Google,
succeed, it is just not going to happen. Mozilla has never been known for
moving fast, and the amount of engineering required for a video codec is just
huge. Even Google couldn't pull it off, and I doubt Mozilla can. Daala is
still in the research stage, and it would take at least 2 years before
anything is done. By then H.265 will already be well ahead.

~~~
espadrine
> While I would love to see Daala, or any codec from Mozilla rather than
> Google, succeed, it is just not going to happen. Mozilla has never been
> known for moving fast, and the amount of engineering required for a video
> codec is just huge.

Clearly Mozilla and Xiph couldn't, in your world, have pulled off an audio
codec that blows everything away. Oh, wait, they did!

<http://www.opus-codec.org/comparison/>

~~~
zanny
I can't encode my video files or music with Opus because you can't arbitrarily
seek within the track without backing up around 80ms and playing forward to
the point I want. So it isn't available in WebM containers yet, for that
reason.

~~~
nullc
It won't be in "webm" because WebM is a very narrowly defined profile of
MKV+Vorbis+VP8 (which is important for compatibility), and Google already made
a decision to not use Opus in "WebM" to avoid confusion.

Perhaps you instead meant MKV? (now responding to the comment on frames/MP3)
Yes. That is an issue for MKV with Opus. The problem is that the MKV container
has no mechanism to signal a stream level or seeking level preroll other than
just having frame boundaries. With Opus you need to seek several frames back
and decode forward (not just a single frame) in order to converge. This isn't
unique to Opus: various video formats can be encoded with rolling intra (e.g.
H.264), a mode which is useful for conferencing because it avoids fat
keyframes. Since the movie 'backup' scene doesn't encode this way, the mkv
ecosystem's solution to these sorts of streams appears to be, so far, to fail
to seek in them. I'm sure it will get worked out eventually.

~~~
Mysterion
> It won't be in "webm" because WebM is a very narrowly defined profile of
> MKV+Vorbis+VP8 (which is important for compatibility), and Google already
> made a decision to not use Opus in "WebM" to avoid confusion.

The preceding comment is false:
<http://src.chromium.org/viewvc/chrome?view=rev&revision=173663>

------
tommyka
I don't see how they should have a better chance of adoption than WebM, which
I haven't seen much support or usage for since Google announced it.

~~~
mtgx
I think it can beat h.264 in the same way h.264 beat DivX and others. We will
soon have to switch to a _new_, more efficient codec anyway, and why not
switch to a free and open source one like Daala or VP9, especially if they end
up better than h.265?

h.264 won over VP8 because VP8 came too late, and it was unfinished and not
even that good when Google bought it. By the time Google made it on par with
h.264, h.264 already won the format war. But with a new format war coming up
soon, and with other codecs like Daala and VP9 being ready to compete
(hopefully) from day one, h.265 might not be the default option as h.264 was.

Also I think all new chips that came out this year had hardware acceleration
support for VP8, and most media players support it now, too, as well as
WebRTC. So even if h.265 wins again, hardware manufacturers may also support
at least one of these open source codecs in the future, alongside h.265.

------
kevingadd
More details: <http://wiki.xiph.org/Daala>

------
jfb
I'd be interested in some documentation. I wish them luck.

------
shmerl
How does it compare to VP8?

------
netvarun
Obligatory XKCD link: <http://xkcd.com/927/>

~~~
ldng
Well, it is different. The aim here is not a competing standard but rather an
unencumbered alternative, so to me that XKCD comic isn't very relevant.

~~~
aniket_ray
On the contrary, it is very relevant, and this joke is an old one used many
times in the codec community.

Theora was supposed to be the unencumbered alternative, then it was VP8 and
now it's Daala. Unfortunately, all these and even closed/patented codecs have
just muddied the waters rather than being the "alternative" that solves it
all.

Getting adoption for codecs is much harder in today's world than it ever was.
The online, mobile and browser worlds are so divided that no company would
ever want to give another an advantage by adopting the other company's codec.

