Daala: Painting Images for Fun (and Profit?) (xiph.org)
117 points by walrus on Sept 24, 2014 | 25 comments



This is so great to see. I know we're years away, but what does the HN community think it would take for Daala to reach Opus levels? Not just in technical achievement, but in widespread usage/hardware support? I would love to have a completely free codec dominate video.


I love these articles by Monty for the technical insight they give me into video compression, and I would love it if the end product of this work were a top-class codec that would at least rival HEVC/VPx.

Realistically, though, I don't think this will lead to anything competitive at all. I really wish I were wrong, but I just don't see it.

Just avoiding trampling on MPEG LA's huge patent chest would be a massive feat in itself. Finding/developing compression techniques that aren't patented, are capable of improving on the likes of HEVC, and are still performant enough to be realistically usable strikes me as 'mission impossible'.

Again, would be so happy to be dead wrong.


Both Vorbis and Opus were somewhat miraculous in terms of scrappy outsiders beating the industry. This points either to the Xiph team being mad geniuses, or to the audio codec field being somewhat warped by its obsession with standardizing on design-by-committee, patent-driven corporate codecs.

Either explanation gives Daala a fighting chance.


BTW this article is attributed to Jean-Marc Valin.


Ah thanks, missed that.


First off, it should probably be 2x as efficient as HEVC, and be out at least 2-3 years before HEVC's successor is out. Then it needs to convince at least the biggest chip makers to support it to try and get some momentum. There's no way around that. Unless Google decides to push it in Android as a replacement to VP9. But that's like asking for a miracle, so chances for that are even lower.


This is exactly right, and these are pretty much the same requirements for a new non-free codec. The bulk of the direction will be clear to those working on or studying the standard a couple of years before it is official, and hardware prototyping will occur over that period.


If it really does beat HEVC there should be lots of interest.


No, it is much too late if all it does is beat HEVC. Adoption and hardware availability are already there for HEVC, so it has almost certainly become the successor to AVC as the main compatible, widely deployed codec. 50% of the encoded size at the same quality, with similar computational complexity, would do the trick, but an 80% encoded file size wouldn't.

I guess AVC will fade out over the next 5-10 years, and HEVC will dominate from a couple of years from now until 2025 or so, unless something computationally feasible can encode to about 50% of the file size earlier. Xiph needs to be demonstrating, by about 2022, something that encodes to 50% of HEVC's size with a computational complexity that will be feasible in 2025, and that complexity has to be manageable both in (mobile) hardware and in software.


We are in agreement, Daala must indeed beat the top dog by 50%.


OK, sorry, I may have agreed disagreeably. I just wanted to be clear that it needs to be a whole generation ahead, and that being a little better and Free wasn't going to be enough, which, based on past WebM/AVC discussions, many people seemed to believe it would be.


No problem, you were right to correct me.


widespread usage/hardware support

A lot of bribes to implementers, probably.


I wonder if offering permissively licensed HDL implementations would help uptake on the hardware side.


I started a Verilog implementation of part of Daala with this kind of idea. Google also provides a VP9 HDL implementation which has been included by a couple of chip vendors.

However, such a fixed-purpose decoder would have to compete for die area with the other video decoders on the chip. I feel that a better solution is DSP-based video decoding; Broadcom's VideoCore is one example. This is actually how most video decoders already work, but you generally can't edit the code. These decoders give the speed and low power of a dedicated hardware decoder thanks to special-purpose function blocks, but are still flexible enough to decode different video formats and to take bug fixes after the hardware has shipped.


I'm thrilled to see people still thinking through an entirely different approach to image and video compression.


TLDR: far better quality than low-quality JPEG at the same file size. Not a replacement for high-quality JPEG, due to visible artifacts. The major drawback is slow processing, but tweaks and a well-parallelizable algorithm should resolve this, and be ideal for GPUs (and thus very light on mobile device power consumption).

Search page for 'as a com' to get the definitive jaw-dropping image.


No comment on the technical stuff, but the paint algorithm applied to the video looks really pretty.


I would love to see an entire animation or TV show done in that style.


The codec wars are almost over, thanks to Moore's Law. The decoder will soon be streamed with the content, taking up no more than a couple of seconds' worth of bandwidth. And, not surprisingly, the decoder will be written in JavaScript/WebCL. Codecs will no longer have to be installed on the system; they can be pushed alongside the content. Two to three years from now, on modern hardware, you will be able to use the codec of your choice.


In two to three years, let's assume typical phone hardware is in fact fast enough to do realtime decoding of video with a shipped-along codec like this.

How long do you estimate a typical phone battery will last while doing that?


The majority of the code will be running on the GPU, so it will actually do pretty well on battery life.


This would actually be a complete regression. I mean, what's the difference to you between Flash Player streamed per-page and Flash Player preinstalled and sandboxed?

And a custom codec would just be a browser plugin, only in bytecode.

What gives native playback its power and fan-noise advantage over Flash is ASIC-based hardware decoding, which is difficult to customize. Though I'd be interested to see what battery life you get playing video in a loop with libavcodec compiled to JS...


How about a JavaScript-to-FPGA transpiler? :) Go grab a Zynq eval board (http://www.xilinx.com/products/boards-and-kits/EK-Z7-ZC702-G...) and implement that, we'll stick around! :)

In all seriousness, that would be really really cool. Especially if FPGA blocks become more common in general purpose/desktop processors.

It should be possible to design some kind of signal-processing API/framework in JS that lends itself to FPGA synthesis behind the scenes, right?
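One way to make that concrete: keep the processing graph declarative, so a backend can either run it in software or, in principle, translate the same fixed structure to FPGA logic. A minimal JavaScript sketch of the idea (all names here are hypothetical, not any existing framework):

```javascript
// Each node carries its structure (the taps) plus a software "run" used
// here for simulation. An FPGA backend would inspect the structure, not
// execute the closures.
function fir(taps) {
  return {
    taps,
    run(samples) {
      const out = [];
      for (let n = 0; n < samples.length; n++) {
        let acc = 0;
        for (let k = 0; k < taps.length; k++) {
          if (n - k >= 0) acc += taps[k] * samples[n - k];
        }
        out.push(acc);
      }
      return out;
    },
  };
}

// Compose stages into a pipeline. Because the topology is fixed and the
// stages are pure, the same description could in principle be synthesized
// as a chain of hardware filter blocks.
function pipeline(...stages) {
  return {
    stages,
    run(samples) {
      return stages.reduce((s, stage) => stage.run(s), samples);
    },
  };
}

// Example: a 2-tap moving-average filter followed by a gain stage.
const graph = pipeline(fir([0.5, 0.5]), fir([2]));
console.log(graph.run([1, 1, 1, 1])); // → [ 1, 2, 2, 2 ]
```

The point is that the JS here never does anything a synthesis tool couldn't statically analyze: no dynamic topology, no data-dependent control flow outside the nodes.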


In theory, a WebCL-based codec could utilize the hardware too.



