High quality GIF with FFmpeg (pkh.me)
482 points by ux on Mar 16, 2015 | 124 comments

ffmpeg is downright magical - just don't get caught using the one in the Ubuntu/Debian repositories. Compile your own ffmpeg if you have any need to do serious work; the Libav fork just isn't as capable. The reasons are obvious if you look into the development philosophies of the two projects.


I think that ffmpeg is returning to Ubuntu 15: http://www.webupd8.org/2014/11/ffmpeg-returns-to-official-ub...

Actually in Debian as well.

> ffmpeg is downright magical

Actually, in my experience, ffmpeg is the Linux program that by far outshines any other program at... failing or crashing :/

I'm sorry to say this. And, unfortunately, there isn't any other open source program doing a better job in terms of file format support.

I would agree that ffmpeg has warts. It is pretty poorly documented. Also, filter pipelines are finicky and not commutative, which can be super confusing, but most of the time you don't need them. Aspect ratio syntax is a disaster, but aspect ratio specifications are a little bit of a disaster, so I don't know how much to blame ffmpeg. But simply failing or crashing? I can't say I've ever had a crash that wasn't the result of a misconfiguration on my part.

What's nice is that on Linux VLC is basically a Qt4 GUI wrapper around ffmpeg, so you can do pretty much whatever you need to do from within VLC.

I think ffmpeg deserves my support on this as well. I maintain a video system at work and use ffmpeg for conversions. ffmpeg has never given me trouble (we compile our own ffmpeg).

The video player "mpv" is also almost a wrapper around ffmpeg, it seems.

Let me guess: you are using Ubuntu?

Use the version at http://deb-multimedia.org

    echo "deb http://www.deb-multimedia.org stable main non-free" > /etc/apt/sources.list.d/deb-multimedia.org.list
    apt-get -q update; apt-get -qy dist-upgrade
    apt-get -qqy install ffmpeg

Debian unstable has official ffmpeg packages, and as soon as the next release completes, those packages will migrate to testing and future stable.

There are a number of useful codecs and filters that have non-free licenses. Debian packages will never have them enabled. Self compiles are the only way to get the ffmpeg kitchen sink.

Almost all of the questionably licensed and non-free codecs were removed from ffmpeg years ago. The only remaining bits left are those for AAC support via libfaac, and ffmpeg has its own reimplemented version of AAC that doesn't depend on libfaac.

You might be thinking of mplayer, which has extensive support for non-free codecs, such as the "w32codecs" bundle.

Seems unlikely that the security team would be happy having both ffmpeg and libav in Debian, so a GR would be needed to override them.

I always build my own as, afaik, it's the only way to get full support for the more interesting codecs e.g. fdkaac & h265.
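For anyone following along, a typical configure line for such a build might look something like this (a sketch, not the one true recipe; it assumes the libfdk-aac and libx265 development packages are already installed):

```shell
# Self-compiled ffmpeg with fdk-aac (AAC) and x265 (HEVC/H.265) enabled.
# --enable-nonfree is required when linking libfdk_aac, and it makes the
# resulting binary non-redistributable, which is exactly why distro
# packages can't ship it.
./configure --enable-gpl --enable-nonfree --enable-libfdk-aac --enable-libx265
make -j"$(nproc)"
```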

You can ask him to add them, he's pretty responsive to requests.

I used the http://www.deb-multimedia.org/ repository in a pinch for ffmpeg recently. Third party repo aside, what criticisms are there for this ffmpeg compared to a compiled copy?

Don't compile it just use the official binaries!

That's a good option if the official binaries support all the codecs you need and have all the correct hardware acceleration flags set for your particular CPU/GPU. That's ... theoretically possible, I suppose.

Programs like ffmpeg compile in code paths for all hardware acceleration flags (mmx, sse, avx), then use runtime detection to use the right code for your cpu.

ffmpeg binaries come with all possible codecs and all possible CPU features. The first is actually bad, because it will do far more things than you've actually tested and can secure. (In fact, nobody on earth has tested most possible ffmpeg command lines.)

The second is the way almost all sensible x86 software should behave.

So it's using a simple Median Cut quantization... which is okay, but you can get great results using an octree quantizer, and without dithering. Dithering kills your compression savings in GIF so you'd want to avoid it if possible. In any case, you can see the difference a better quantizer makes here:


Any method that makes axis-aligned subdivisions (which includes both standard median cut and octree), even in a space like HSV, is going to be pretty suboptimal. There's probably been more sophisticated things done since I looked at this in the 1990's, but Xiaolin Wu's 1992 approach [1] works quite a bit better than both (including the 1991 Wu v2 implementation of an octree method linked elsewhere in this thread). It's based on recursive PCA, dynamic programming to assign colors along the principal axis, and K-means refinement in CIE L* u* v* coordinates. Not the fastest algorithm, but the topic is "high quality".

And you can still dither the image without completely destroying compression. You just have to use an ordered dither [2].

[1] Xiaolin Wu, "Color Quantization by Dynamic Programming and Principal Analysis", ACM Transactions on Graphics 11(4): 348-372, 1992. https://dl.acm.org/citation.cfm?id=146475

[2] https://xiphmont.livejournal.com/35634.html
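For what it's worth, the paletteuse filter from the article exposes an ordered dither as well. A sketch, assuming a hypothetical input clip.mp4 and a pre-generated palette.png:

```shell
# Ordered (Bayer) dithering keeps the dither pattern stable from frame to
# frame, so it compresses far better in GIF than error diffusion does.
# bayer_scale trades crosshatch visibility against banding (0 = strongest).
ffmpeg -i clip.mp4 -i palette.png \
  -lavfi "[0:v][1:v] paletteuse=dither=bayer:bayer_scale=2" out.gif
```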

Aligned subdivisions are not that much of a problem actually.

I've tested median cut that can cut at various angles and variants that cut out spheres, and it didn't make much difference.

Choice of the box to cut and the location where you cut is more important. Wu's method is nice in this case because it exactly measures variance of sets after a split, while median cut just estimates.

However, Wu's method needs lookup tables for the whole color (hyper)cube, which requires either a large (or, for RGBA, still ridiculous) amount of RAM or reduced precision, and losing bits of input is way worse than having a suboptimal division.

The biggest wins I've found (and implemented in libimagequant) were:

• Voronoi iteration after generating palette (like a final step in LBG quantization). This shifts axis-aligned subdivisions to local optimum (I've tried applying that after every subdivision, but it didn't matter — once in postprocessing is enough!)

• Adjust weights/counts in histogram from quality feedback measured from remapped image. If you're aiming to minimize MSE, then that's not too expensive and it's similar to gradient descent.

> However, Wu's method needs lookup tables for the whole color (hyper)cube,

You're talking about the 1991 method. The 1992 paper I cited explicitly claims as one of its advantages that it does not require lookup tables (and thus avoids the usual step of pre-quantizing down to 5 bit color to get a reasonably-sized table).

Your two big wins sound an awful lot like "K-means". I'd believe they are responsible for the bulk of the improvement (they're also the bulk of the computation), but they mean you don't have axis-aligned partitioning anymore, which was my point.

The problem is that K-means is notoriously sensitive to the initial clustering. For unsupervised learning, I'd want to do as good a job there as possible, and applying PCA means you are picking the split direction that exactly minimizes variance in the orthogonal directions. Random restarts are also a nice way to escape local minima, but are even more expensive computationally.

It is similar to K-means, but with exactly the crucial difference that clusters are not random, but picked by a guided subdivision algorithm.

For apng2gif I modified Wu's method to use 64x64x64 cube (instead of original 32x32x32), and the results are much better. It doesn't require too much RAM, especially compared with what was available in 1992.

Ironically, that ACM paper is a scan of a printed-out paper, with half-toned images, which ruins the comparison images. Is there a purely digital version of that paper out there?

EDIT: Is it this color quantization algorithm that is linked on Wu's own webpage? http://www.ece.mcmaster.ca/~xwu/cq.c

The median cut quantization with a few tweaks (typically using the variance, see the post for more) actually gives very good results. According to your post, it seems Octree would not provide much better results, if any. That calls for testing though (I won't, but feel free to; the infrastructure is ready for you to hack on), as improvements are very welcome.

Concerning the dithering, you can disable it (dither=none). And indeed it kills the compression, mainly with animation; that's why I added the diff_mode option (see the end of the post).
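For reference, the full two-pass recipe with those options looks roughly like this (clip.mp4 and the filter values are illustrative, not prescriptive):

```shell
filters="fps=15,scale=320:-1:flags=lanczos"

# Pass 1: compute one optimized 256-color palette over the whole clip.
ffmpeg -v warning -i clip.mp4 -vf "$filters,palettegen" -y palette.png

# Pass 2: remap the clip through that palette. dither=none avoids the
# compression cost of dithering; diff_mode=rectangle re-encodes only the
# rectangle of each frame that actually changed.
ffmpeg -v warning -i clip.mp4 -i palette.png \
  -lavfi "$filters [x]; [x][1:v] paletteuse=dither=none:diff_mode=rectangle" \
  -y out.gif
```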

If you want high-quality palette try libimagequant: http://pngquant.org/lib

pngquant's method is superior to octree:

• it uses Voronoi iteration to get local minimum and minimize effects of axis-aligned subdivisions

• it uses sort-of gradient descent to optimize subdivisions

• to some degree edges and noise are included in histogram weights

• does not need to throw away any bits of precision (most implementations just naively discard 2-3 bits, because of RAM limits in '80-'90s)

pngquant undithered:


vs leptonica's:


> (most implementations just naively discard 2-3 bits, because of RAM limits in '80-'90s)

Might it also have something to do with earlier VGA DACs having 6 bits per color channel? I seem to remember that newer video cards required setting a separate register to enable eight-bit-per-channel palette entries.

Edit: never mind, the discussion of Wu's algorithm in another thread, specifically https://news.ycombinator.com/item?id=9216252, makes the RAM limitation clear.

> Dithering kills your compression savings in GIF

Not as much as error diffusion. If you use custom-palette ordered dithering, the ordering stays the same from frame to frame, so the transparent-pixel optimization still applies, unlike with error-diffusion dithering.

It's what Photoshop uses to create optimized GIFs. Although their particular approach is patented, alternative algorithms exist, e.g. http://bisqwit.iki.fi/story/howto/dither/jy/

yeah, error-diffusion dithering is a non-starter for animations. gotta use either pattern or ordered/Bayer dithering to eliminate jitter and compress well.

here's how my quantizer does: full-color, undithered, Floyd-dithered: http://imgur.com/a/lrLTd#0

repo: https://github.com/leeoniya/RgbQuant.js

demos: http://o-0.me/RgbQuant/

this image is actually a pretty poor example of quantization, it's too easy. full color gradients give a much better indicator of behavior, quality and performance.

if you really are interested in different algorithms, check this out: http://www.codeproject.com/Articles/66341/A-Simple-Yet-Quite...

error-diffusion dithering explained (and kernels): http://www.tannerhelland.com/4660/dithering-eleven-algorithm...

That is pretty amazing, thanks for sharing. I am interested in different quantization methods, if only for screenshot usage ( I am working on a remote control product, and the quantization used to be quite poor before I added in octree ). Of course, most of my code is in C# but I can at least take a look at the algorithms, and if they are MIT licensed, I can see what it would take to convert them.

I once won a code golf "popularity contest" competition that was about dithering a grayscale image. The algorithm was perhaps interesting. More so my language of choice: Fortran.


nQuant is a quantizer in C#, Apache licensed


Wu v2 is probably the best hands-off quantizer that I have seen. It's not the fastest and probably too resource intensive for embedded apps, but it is remarkably good. I would recommend it (or anything based on it) without hesitation.

I'm actually in the process of porting it to js using typed arrays and possibly WebGL/GLSL. You can compile it pretty easily to asm.js for max pure-js perf via Emscripten, but I don't know how clean its API/interface will be and if it will leak globals based on how the code is structured [1]

[1] http://www.ece.mcmaster.ca/~xwu/cq.c

Interestingly, both of the quantized versions appear slightly brighter in all colors than the original. It's easier to see in the dithered image, but also visible in the undithered image.

>As you probably know, GIF is limited to a palette of 256 colors.

No, it's not. OK, the current, wrong implementations do limit it to 256 colors, but that's not a limitation of GIF. Multiple frames without a delay allow multiple palettes, and multiple times 256 colors.

But this limitation is just another reason not to use it.

Can I use Webm?


Yes, you can, and you should. Or h.264:
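Something along these lines (file names hypothetical; yuv420p is there because many players choke on other pixel formats):

```shell
# Convert an animated GIF to an H.264 MP4.
# -movflags +faststart moves the index to the front for web playback;
# the scale expression rounds dimensions to even numbers, which yuv420p
# requires.
ffmpeg -i animation.gif \
  -c:v libx264 -pix_fmt yuv420p -movflags +faststart \
  -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" animation.mp4
```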


Can you show an example of an animation with more than 256 colors?

As you said, the hack to get more than 256 colors is to redefine a sub rectangle with a new palette. But that's also the mechanism used to make animations. Are you able to get fluid playback with that?

Note: I'm not talking about new frame being just a new sub rectangle, more about N sub rectangles for the same frame, then a delay event, then M sub rectangles for the new frame, etc

None that works with current browser implementations, because they erroneously ignore the lack of delay between frames and wait anyway. But that's a bug in the GIF decoder, not a limitation of GIF. OK, this is hairsplitting, but it illustrates how broken the state of GIF is.

Actually, regarding the delay, the spec reads

    "vii) Delay Time - If not 0, this field specifies the number of hundredths (1/100) of a second to wait before continuing with the processing of the Data Stream. "
A zero delay is most certainly not covered by the spec, and anyone using one is creating GIFs that do not meet the spec.

[0] http://www.w3.org/Graphics/GIF/spec-gif89a.txt

You can achieve no delay without using a zero delay. Graphic Control Extension blocks may be omitted between images and so multiple images can come in a row without anything defining a delay between them.

See the last instance of the phrase "graphic control" in the document to see that multiple images can come with no intervening GCE blocks and see the paragraph containing the 5th instance of that phrase to observe that a GCE block only affects the first image following the GCE.

Maybe someone who cares about this should submit patches? All of the major browser engines are open-source.

This would probably cause regressions with other GIFs that neglect to specify a delay but are supposed to have one. It's hard to imagine this universal behavior is accidental. It's probably like everything else on the Web — a hack to keep things that shouldn't work from actually not working.

I'm fairly sure that the reason for this universal behaviour is that there are plenty of broken, looping gifs out there with inter-frame delays of 0. I used a version of Konqueror which would spin at 100% of a core when it encountered one of these.

A better hack (but it's still a hack) might be to wait one frame (or longer) before returning to the beginning of a looping gif. Given that this would be significantly harder than the current approach (which is apparently either delay = max(delay, 16ms) or if(delay < 16) delay = 100) and the only reason for it would be to support this pretty-niche multi-palette composited-frame thing, it seems like it would be hard to argue for.

Are there GIFs that specify a delay on some frames but not others, and expect all frames to have a delay? Because it should be easy to detect animated GIFs that specify no delay for any frame, and treat them all as having a delay.

A workaround for the workaround: for a high-color gif, add a blank "animation" frame at the end that has a delay, after all the no-delay buildup elements.

However, that's unlikely to be necessary because if you are just doing a high-color image with no animation, you may as well use JPEG or PNG.

As I understand it, that would still block this "large-palette GIF" trick, because those would also consist entirely of frames with no delay.

I assumed that the "large-palette GIF" would have animation. Otherwise, why are you bothering with tricky GIFs at all, why not just use a PNG (or JPEG, depending on the image type)?

> All of the major browser engines are open-source.

In a world where Internet Explorer is not a major browser.

Hah, I can't believe I actually forgot Internet Explorer was still around. Good point.

While GIF is surely inferior to short looped videos, it's still ahead in terms of usability/portability: Drag & drop the file to the desktop and re-upload elsewhere (or embed in your own website). AFAIK this experience is unmatched by web video.

Global support for h.264 video in the browser: over 90%. And on the desktop it's close to 100%. That's IMO just good enough.

That's not what I was referring to. Yes, technically videos are superior, no doubt. The nice thing about GIFs is though that they are treated like images by the browser and the OS. That makes them easier to handle by non-technical users (think imgur, Tumblr, etc.).

I just realised that imgur does not support webm. I thought it did. That's kind of stupid. For other websites it'd be as simple as adding a MIME type to the list of accepted uploads. Unfortunately you are right; right now webm is still exotic.

WebM is dying – since Microsoft and Apple aren't implementing it, you're basically asking whether it's worth doubling your file storage to support a format available in maybe 60% of browsers versus one supported by 90%:

http://caniuse.com/#feat=webm http://caniuse.com/#feat=mpeg4

That's even more compelling when you remember that the only browser which had releases with support for WebM but not H.264 is Firefox, and that's been phasing out for a while: support for H.264 shipped in FF21 on Windows, FF26 on Linux and FF35 on OS X.

Unless you have a lot of Mac users who don't upgrade, it's probably not worth the hassle, particularly since WebM doesn't compress as well as H.264. If VP9 ships, that story could change, provided it delivers an advantage over H.265 compelling enough to get Microsoft or Apple to integrate it, or the larger video sites to add it to their toolchain. For most places it'd have to be really compelling to be worth nearly doubling their storage costs.

I've seen WebMs on Imgur. When saved and opened to check the encoding some are VPx, while others are h.264 encoded in a WebM container.

I can probably find some links later. As I see no option to upload video directly I'm wondering if it requires an account, or whether it was their new(ish) GIF conversion process (IIRC some look like original videos though).

You can't upload webms though.

The silly thing is if you have a gif and change the URL on imgur to read ".gifv" you'll get WebM!

.mp4 is also well supported by modern operating systems. The only place where there's a problem would be a website which allows you to upload images but not videos.

I'm inclined to say that the enormous compression wins are worth getting the remaining stragglers to upgrade (both of the sites you mentioned already do this, for example, as do other common targets like Twitter, Facebook, etc.).

GIF autoplays inline in the page. Video doesn't (it animates on tap w/ all other content obscured). (On the iPhone, anyway.)

Makes it completely unsuitable for many uses of GIF.

Would love to be wrong or out of date on this, so feel free to let me know if I am ;)

Aside: if you disagree with this philosophy and do not want GIFs to auto-play, there's a very handy Firefox extension to accomplish just that: https://addons.mozilla.org/en-US/firefox/addon/toggle-animat...

I find it makes reading the web a much more pleasant experience.

A large proportion of browsers won't auto-play video, but will auto-play GIFs, so that rules out many uses.

Also, I'd bet that video tags eat memory. A web page with ten animated GIFs might make the user puke, but a web page with ten auto-play videos will likely make the web browser puke :)

> I'd bet

No, you're wrong. You seem to have this preconception that videos are necessarily 300 megabytes or something.

There's virtually no reason for in-hardware decoding of a smaller file to make the browser struggle more than with gifs, which are much less optimized and never in-hardware.

Yes, but the cost is tremendous. First there is the bandwidth. Next is the hardware: virtually all mobile systems-on-a-chip have hardware support for JPEG, PNG, H264, etc. decoding. Power-wise, animated GIFs are terrible.

Don't forget Ogg/Theora.


I use this shell script for generating gifs out of .mov files: https://gist.github.com/artursapek/5b3d15ecac5ff75593c4

I can probably improve it further by using some of what's in this article, but it's already faster and better than any online tool I've found.

You can pipe the frames from FFmpeg to convert to avoid temporary files. http://superuser.com/a/730389
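Per that link, the piped version looks roughly like this (input name and frame rate are placeholders, matching the gist's style):

```shell
# Decode frames with ffmpeg and stream them as PPM straight into
# ImageMagick's convert, skipping the temporary PNG files entirely.
ffmpeg -i in.mov -vf "scale=320:-1" -r 10 -f image2pipe -vcodec ppm - \
  | convert -delay 5 -loop 0 - out.gif
```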

Neat! Unix pipes are amazing. Thanks.

> (convert -delay 5 -loop 0 $tmp_dir/ffout*.png $2) >& /dev/null

So using ImageMagick for this... How is this memory wise BTW? (all frames are in memory at once or not?)

Yes, all frames will be in memory prior to making the GIF. Probably fine for most GIFs, but if you want to go FFmpeg | ImageMagick | FFmpeg for processing a video, or maybe for a longer GIF, you can use this script to keep only a frame at a time in memory: http://www.imagemagick.org/Usage/scripts/process_ppm_pipelin...

Not sure, actually. Never had it be enough of a resource hog to even think about it.

ffmpeg doesn't get nearly enough credit. As long as you can grok the various settings, it makes you feel like MacGyver with a paperclip. We often work with media from unusual sources, and after the first "can we do this with ffmpeg" filter there's very little left to handle.

This is pretty cool. I use GIFs for marketing for my SaaS startup[1] because it has the best support for autoplaying across devices, but the file sizes are a bit high (690 KB and 270 KB for screencasts of a few minutes each). I use licecap[2] for recording and Gifsicle for optimizing, but perhaps I should give ffmpeg a go and compare the results.

[1] https://zapla.co

[2] http://www.cockos.com/licecap/

This is an interesting discussion, thank you. It would be great to see it compared with the parallel ImageMagick methods: http://www.imagemagick.org/Usage/anim_opt/

ImageMagick is very popular, but apparently the algorithms in pngquant gives excellent results. Don't forget to include it if you do comparisons with the different solutions.

Also keep in mind that the compression efficiency and quality depends a lot on the content and how you play with the encoding options (how surprising...). I tried to cover most of the cases in my post, but make sure you are "fluent" with the other tools as well to make an objective comparison.

Overall, I think FFmpeg's trump card is that it includes the whole stack: timing, decoding, filtering, encoding, everything is in.

There is no reason to create new content in .gif and not webm as of now. FFmpeg is really nice though.

I'm not going to tell you anything new here, but there are reasons to use GIF instead of video. WebM isn't supported on Safari & IE. H.264 isn't officially supported on Firefox (though I believe OS X and Windows may now provide their own decoders that Firefox uses). To get a similar experience to GIF, you need two formats and it needs to be embedded in an HTML file using a <video> tag. Compare that to the GIF experience, where you link to the image and it works the same way everywhere.

There simply is not a video-based standard that implies "short, silent, autoplayed, looped video" in the way that animated GIFs do. There are ways to approximate the behavior in HTML, but none of them are perfect in the way GIFs are.

Actually, Firefox supports h264 nowadays. Cisco bought a license to the patent, and gave it to Mozilla for free, so they can freely distribute an h264 decoder.

I wanted to use FFmpeg in an iOS app, but I ran into licensing issues—the FFmpeg wrapper [1] and FFmpeg itself [2] are both LGPL 2.1, which I didn't think you could use in an iOS app.

Since this project does exactly that, I took a second look and found a discussion on that specific use case [3]. It didn't quite clarify things, but it looks like it _might_ be legal to use FFmpeg in an app.

[1] https://github.com/OpenWatch/FFmpegWrapper [2] https://www.ffmpeg.org/legal.html [3] https://trac.ffmpeg.org/ticket/1229

Oh hey, I wrote that! There are plenty of apps that are distributing LGPL code on the App Store, for example, VLC for iOS [1] includes FFmpeg. The static linking requirement on iOS 7 and earlier prompted some proprietary app developers (like Sparrow [2]) to distribute their object files to allow re-linking.

Now that iOS 8 offers dynamic frameworks, I'm not sure if this situation has changed. Even though they are dynamically linked, it is impossible for the end user to replace a library due to the code signing requirements.

1. http://www.videolan.org/vlc/download-ios.html 2. https://web.archive.org/web/20131013023029/http://sprw.me/lg...

Gifsicle is very good at making highly optimized GIFs, but slightly more cumbersome as you have to convert each frame to a separate image first.


What would be really cool is if ffmpeg/avconv gained the ability to create lossy LZW GIFs, like in this tool:


Nice article and good job implementing it. I would have preferred a small explanation of the filter graph option (-lavfi), but I guess that's outside the scope of the post.

Yeah, but the article is already quite long, so I omitted the "details" on that...

Basically, it's just to specify a complex filtergraph: in the case of -vf you're just passing one video stream which gets filtered by successive filters. In the case of -lavfi (or -filter_complex if you prefer) you can feed multiple inputs (and get multiple outputs). In this case, the second input is the palette image.

You can find more information on http://ffmpeg.org/ffmpeg-filters.html#Filtering-Introduction

The improved screenshots do look much better, with less of a screen-door effect. Nice.

Very nice of you to share this with the community! Examples are just awesome.

I like your domain.

And to complete the circle, run the gif through Gfycat and check out the mindblowing bandwidth savings:



GIF: 1 MB

Video: 73k

I wish I had an NSA-level overview of the internets to see how much bandwidth is wasted on poorly compressed file formats and protocols. It's weird we're talking about GIF in the age of h.264 and png. So much for the "Burn All GIFs" patent protest from a few years ago.

Seems to me social media sites like Facebook and Reddit have revitalized the GIF for cheesy animations and memes. It's only fairly recently that hosts like imgur are converting them down to h.264 or webm for massive bandwidth savings.

We have an older guy here at work who thinks the alternative to Flash is just hosting 20 MB to 50 MB GIFs on our websites. I think people like this are still fairly common because they don't understand how terrible GIFs are. It's funny the ways the web doesn't move forward sometimes.

Imgur, popular on Reddit, actually recognized the problem and invented GIFV (which is HTML-wrapped mp4/webm)


Unfortunately Facebook doesn't allow easy video embeds.

> invented

I think 4chan introduced silent, looping, control-less webms for the same purpose (gif replacements) before imgur.

I hate the "invention" of Gifv with every ounce of my soul. If it's a WebM or an MP4 file then say so. There are HTML5 tags for them and their looping. The last thing the Internet needs is a redundant file extension for what has been done by Gfycat and 4chan/8chan for years now.

Unfortunately a VERY large chunk of the web still can't see webm. http://caniuse.com/#feat=webm

Doesn't IE use directshow filters for playback? As long as anything like ffdshow or lavfilters is installed it could just use those.

Afaik firefox uses system-provided h.264 decoders to support that since they don't want to ship one with firefox.

Firefox ships their own decoder now, Cisco donated a license of the MPEG patents to Mozilla.

I have been told that's only used for webrtc. playing <video> tags uses ffmpeg on linux or media foundation on windows.

>playing <video> tags uses ffmpeg on linux

If only. It uses that broken pile of abstraction called GStreamer.

Yes please. Let's leave GIF in the 20th Century, where it belongs.

If only iOS (and Android?) allowed you to play inline videos the way you can on the desktop, and the way they play animated GIFs inline. Besides allowing us to adopt video throughout, save on bandwidth and get better video quality, this would be a huge productivity boost for developers: we could finally stop having the GIF vs. JIF pronunciation debate.

Firefox on Android plays inline videos just fine. Sites like GFYcat work as expected, and loop appropriately.

"Works" with the Android stock browser, but there the controls stay in the way. Same for Chrome. Does not work with Opera Mobile.

Much to my annoyance. I desire to disable all autoplay of all inline videos across all browsers on my phone. Quite difficult to achieve.

I think this should definitely be an option. This is a really personal preference and there is no reason you should suffer through autoplay videos if you don't want to.

I would want it on, at least for some sites, like Gyfcat and Imgur because otherwise my phone slows down to a crawl, trying to load and render a GIF.

Unless somebody's producing ads, why would he insist on having the video "inline like on the desktop" on the small screen of the mobile phone?

To replace GIFs. I don't have a problem with users being able to opt out, etc. But GIF's are still useful, yet a huge drain on resources. If for nothing, I want my cat videos still available inline without having to click them.

Hardware video decoders mostly only handle one video at a time and if you could multiplex the one decoder the chip has, to keep up with multiple videos playing at once it just might take a milliwatt or two.

Yes, what kind of crazy designer would do that: https://news.ycombinator.com/item?id=4531088

Because gifs already work that way on phones, and that's how users want it.

I never wanted any movable ads while trying to read something on the phone. Even less the movable cat gifs. If something is not worth the click, it's certainly not worth being inline in the text.

live screenshot demos?

AFAIK the Apple Watch doesn't support video of any sort, so if you want moving pictures on your watch, GIF is the only way. GIF is about to make a comeback if anything.

is it possible to have pixel-perfect animations (avoid compression artifacts completely) while using h.264 or VP8 and still get good filesize?

also, what about alpha transparency? video formats do not have this. GIF is shitty for video clips, but it's not without uses. it just needs to be replaced by APNG [1].

[1] http://en.wikipedia.org/wiki/APNG

VP9 has lossless encoding support.
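Assuming your ffmpeg build has libvpx-vp9 compiled in, a sketch of a lossless encode (input name hypothetical):

```shell
# Lossless VP9 encode, e.g. for pixel-art clips where compression
# artifacts are unacceptable. -lossless 1 bypasses the quantizer
# entirely, so rate-control options like -crf are ignored.
ffmpeg -i clip.mp4 -c:v libvpx-vp9 -lossless 1 out.webm
```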


apparently there's also hints of alpha channel, but basically no support for it outside of chrome's experimental flags:


Indeed. While this is a nice hack, GIF really needs to just die now.

I like to imagine going back in time to 2005-06 and telling myself "You know what's going to come back in a big way? Animated GIFs!"

I wouldn't believe myself whatsoever.

I would like to go back in time to 1993 and make Tim Berners-Lee add video support to the <img> element so that we could do things like this today:

  <img src="video.mp4" width="800" height="450" volume="0" autoplay loop>
If this had happened no one would need to use GIFs. :(

Marc Andreessen came up with <img>[1], not TimBL.

[1] http://1997.webhistory.org/www.lists/www-talk.1993q1/0182.ht...

Besides, it's easy today to frown upon the choices made at a time where common image formats included XBM, a plain text subset of C!

One day, people will be sad that we didn't anticipate 3d models in <img>. It makes no sense to add support with the current state of technology.

> GIF: 1 MB

> Video: 73k

Yes, but with some loss of quality http://imgur.com/1qkJYEA

(not as much as I thought, though)

You can, say, nearly double the quality, get a 100kb file or so, and have it be much sharper. The settings he's using are fairly compressed. Generally, you should be able to go down to 1/10th the file size without too much of a quality drop.

It is technically possible to do both directions of the GIF<->video conversion on-the-fly at all points of the network. Web servers could deliver/generate video files for clients that request GIFs, transparent proxies and caches could grab the video files and translate back to GIFs for dumb clients, and browser plugins could seamlessly replace GIFs on a page with the video files.

In fact, behind the scenes this might be happening now - technically your ISP could be doing this and the 1MB GIF you download has been reconverted from a video file back into a GIF at your local proxy cache!
