Actually, in my experience, ffmpeg is the Linux program that by far outshines any other program in... failing or crashing :/
I'm sorry to say this. And, unfortunately, there isn't any other open source program doing a better job in terms of file format support.
What's nice is that on Linux, VLC is basically a Qt4 GUI wrapper around ffmpeg, so you can do pretty much whatever you need from within VLC.
echo "deb http://www.deb-multimedia.org stable main non-free" > /etc/apt/sources.list.d/deb-multimedia.org.list
apt-get -q update; apt-get -qy dist-upgrade
apt-get -qqy install ffmpeg
You might be thinking of mplayer, which has extensive support for non-free codecs, such as the "w32codecs" bundle.
The second is the way almost all sensible x86 software should behave.
And you can still dither the image without completely destroying compression. You just have to use an ordered dither.
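For example, ffmpeg's paletteuse filter can do a Bayer ordered dither instead of error diffusion; something like this should keep the dither pattern stable across frames (assuming you've already generated palette.png; bayer_scale controls the pattern size):
ffmpeg -i input.mp4 -i palette.png -lavfi "paletteuse=dither=bayer:bayer_scale=2" output.gif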
 Xiaolin Wu, "Color Quantization by Dynamic Programming and Principal Analysis", ACM Transactions on Graphics 11(4): 348-372, 1992. https://dl.acm.org/citation.cfm?id=146475
I've tested median cut variants that can cut at various angles, and variants that cut out spheres, and it didn't make much difference.
The choice of which box to cut, and where to cut it, matters more. Wu's method is nice here because it exactly measures the variance of the sets after a split, while median cut just estimates it.
However, Wu's method needs lookup tables for the whole color (hyper)cube, which requires either a large (and for RGBA downright ridiculous) amount of RAM, or reduced precision, and losing bits of the input is far worse than a suboptimal division.
The biggest wins I've found (and implemented in libimagequant) were:
• Voronoi iteration after generating the palette (like the final step in LBG quantization). This shifts the axis-aligned subdivisions to a local optimum. (I've tried applying it after every subdivision, but it didn't matter; once in postprocessing is enough!)
• Adjusting histogram weights/counts based on quality feedback measured from the remapped image. If you're aiming to minimize MSE, that's not too expensive, and it's similar to gradient descent.
You're talking about the 1991 method. The 1992 paper I cited explicitly claims as one of its advantages that it does not require lookup tables (and thus avoids the usual step of pre-quantizing down to 5 bit color to get a reasonably-sized table).
Your two big wins sound an awful lot like "K-means". I'd believe they are responsible for the bulk of the improvement (they're also the bulk of the computation), but they mean you don't have axis-aligned partitioning anymore, which was my point.
The problem is that K-means is notoriously sensitive to the initial clustering. For unsupervised learning, I'd want to do as good a job there as possible, and applying PCA means you are picking the split direction that exactly minimizes variance in the orthogonal directions. Random restarts are also a nice way to escape local minima, but are even more expensive computationally.
EDIT: Is it this color quantization algorithm that is linked on Wu's own webpage? http://www.ece.mcmaster.ca/~xwu/cq.c
Concerning the dithering, you can disable it (dither=none). And indeed, dithering kills the compression, especially for animation; that's why I added the diff_mode option (see the end of the post).
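For instance, to turn dithering off and only re-encode the moving rectangle, something like this should work (input/palette names are placeholders):
ffmpeg -i input.mp4 -i palette.png -lavfi "paletteuse=dither=none:diff_mode=rectangle" output.gif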
pngquant's method is superior to octree:
• it uses Voronoi iteration to get local minimum and minimize effects of axis-aligned subdivisions
• it uses sort-of gradient descent to optimize subdivisions
• to some degree edges and noise are included in histogram weights
• it does not need to throw away any bits of precision (most implementations just naively discard 2-3 bits because of the RAM limits of the '80s-'90s)
Might it also have something to do with earlier VGA DACs having 6 bits per color channel? I seem to remember that newer video cards required setting a separate register to enable eight-bit-per-channel palette entries.
Edit: never mind, the discussion of Wu's algorithm in another thread, specifically https://news.ycombinator.com/item?id=9216252, makes the RAM limitation clear.
Not as much as error diffusion, if you use custom-palette ordered dithering: since the ordering stays the same from frame to frame, the transparent-pixel optimization still applies, unlike with error-diffusion dithering.
It's what Photoshop uses to create optimized GIFs. Although their particular approach is patented, alternative algorithms exist, e.g. http://bisqwit.iki.fi/story/howto/dither/jy/
This image is actually a pretty poor example for quantization; it's too easy. Full-color gradients give a much better indication of behavior, quality, and performance.
If you really are interested in different algorithms, check this out: http://www.codeproject.com/Articles/66341/A-Simple-Yet-Quite...
Error-diffusion dithering explained (with kernels): http://www.tannerhelland.com/4660/dithering-eleven-algorithm...
I'm actually in the process of porting it to JS using typed arrays and possibly WebGL/GLSL. You can compile it pretty easily to asm.js for max pure-JS perf via Emscripten, but I don't know how clean its API/interface will be, or whether it will leak globals, given how the code is structured.
No, it's not. OK, the current, wrong, implementations do limit it to 256 colors, but that's not a limitation of GIF. Multiple frames without a delay allow multiple palettes, and multiple times 256 colors.
But this limitation is just another reason not to use it.
Can I use WebM?
Yes, you can, and you should. Or h.264:
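Something along these lines, for example (filenames and rates are just placeholders; the scale filter only forces even dimensions, which libx264 needs with yuv420p):
ffmpeg -i input.gif -c:v libvpx -crf 12 -b:v 500K output.webm
ffmpeg -i input.gif -c:v libx264 -pix_fmt yuv420p -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" -movflags +faststart output.mp4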
As you said, the hack to get more than 256 colors is to redefine a sub rectangle with a new palette. But that's also the mechanism used to make animations. Are you able to get fluid playback with that?
Note: I'm not talking about a new frame being just a new sub-rectangle, but about N sub-rectangles for the same frame, then a delay event, then M sub-rectangles for the new frame, etc.
"vii) Delay Time - If not 0, this field specifies the number of hundredths (1/100) of a second to wait before continuing with the processing of the Data Stream. "
See the last instance of the phrase "graphic control" in the document to see that multiple images can come with no intervening GCE blocks and see the paragraph containing the 5th instance of that phrase to observe that a GCE block only affects the first image following the GCE.
A better hack (but it's still a hack) might be to wait one frame (or longer) before returning to the beginning of a looping gif. Given that this would be significantly harder than the current approach (which is apparently either delay = max(delay, 16ms) or if(delay < 16) delay = 100) and the only reason for it would be to support this pretty-niche multi-palette composited-frame thing, it seems like it would be hard to argue for.
However, that's unlikely to be necessary because if you are just doing a high-color image with no animation, you may as well use JPEG or PNG.
In a world where Internet Explorer is not a major browser.
That's even more compelling when you remember that the only browser which had releases supporting WebM but not H.264 is Firefox, and that situation has been going away for a while: support for H.264 shipped in FF21 on Windows, FF26 on Linux, and FF35 on OS X.
Unless you have a lot of Mac users who don't upgrade, it's probably not worth the hassle, particularly since WebM doesn't compress as well as H.264. If VP9 ships, that story could change, provided it delivers an advantage over H.265 compelling enough to get Microsoft or Apple to integrate it, or the larger video sites to add it to their toolchains. For most places it'd have to be really compelling to be worth nearly doubling their storage costs.
I can probably find some links later. As I see no option to upload video directly I'm wondering if it requires an account, or whether it was their new(ish) GIF conversion process (IIRC some look like original videos though).
I'm inclined to say that the enormous compression wins are worth getting the remaining stragglers to upgrade (both of the sites you mentioned already do this, for example, as do other common targets like Twitter, Facebook, etc.).
Makes it completely unsuitable for many uses of GIF.
Would love to be wrong or out of date on this, so feel free to let me know if I am ;)
I find it makes reading the web a much more pleasant experience.
Also, I'd bet that video tags eat memory. A web page with ten animated GIFs might make the user puke, but a web page with ten auto-play videos will likely make the web browser puke :)
No, you're wrong. You have this preconception that videos are obligatorily 300 megabytes or something.
There's virtually no reason for in-hardware decoding of a smaller file to make the browser struggle more than GIFs do; GIFs are much less optimized and are never decoded in hardware.
I can probably improve it further by using some of what's in this article, but it's already faster and better than any online tool I've found.
So, using ImageMagick for this... how is it memory-wise, BTW? (Are all frames in memory at once, or not?)
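For concreteness, I mean an invocation roughly like this (frame names are placeholders):
convert -delay 4 -loop 0 frames_*.png -layers Optimize output.gif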
Also keep in mind that the compression efficiency and quality depends a lot on the content and how you play with the encoding options (how surprising...). I tried to cover most of the cases in my post, but make sure you are "fluent" with the other tools as well to make an objective comparison.
Overall, I think FFmpeg's trump card is that it includes the whole stack: timing, decoding, filtering, encoding... everything is in.
There simply is not a video-based standard that implies "short, silent, autoplayed, looped video" in the way that animated GIFs do. There are ways to approximate the behavior in HTML, but none of them are perfect in the way GIFs are.
Since this project does exactly that, I took a second look and found a discussion on that specific use case. It didn't quite clarify things, but it looks like it _might_ be legal to use FFmpeg in an app.
Now that iOS 8 offers dynamic frameworks, I'm not sure if this situation has changed. Even though they are dynamically linked, it is impossible for the end user to replace a library due to the code signing requirements.
Basically, it's just a way to specify a complex filtergraph: with -vf you're passing a single video stream, which gets filtered by successive filters. With -lavfi (or -filter_complex if you prefer) you can feed multiple inputs (and get multiple outputs). In this case, the second input is the palette image.
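Concretely, the two-pass pattern looks roughly like this (the fps/scale values are only examples):
ffmpeg -i input.flv -vf "fps=15,scale=320:-1:flags=lanczos,palettegen" palette.png
ffmpeg -i input.flv -i palette.png -lavfi "fps=15,scale=320:-1:flags=lanczos [x]; [x][1:v] paletteuse" output.gif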
You can find more information at http://ffmpeg.org/ffmpeg-filters.html#Filtering-Introduction
GIF: 1 MB
Seems to me, social media sites like Facebook and Reddit have revitalized the GIF for cheesy animations and memes. It's only fairly recently that hosts like imgur have been converting them down to H.264 or WebM for massive bandwidth savings.
We have an older guy here at work who thinks the alternative to Flash is just hosting 20 MB to 50 MB GIFs on our websites. I think people like this are still fairly common because they don't understand how terrible GIFs are. It's funny the ways the web doesn't move forward sometimes.
Unfortunately Facebook doesn't allow easy video embeds.
I think 4chan introduced silent, looping, control-less webms for the same purpose (gif replacements) before imgur.
AFAIK Firefox uses system-provided H.264 decoders to support that, since they don't want to ship one with Firefox.
If only. It uses that broken pile of abstraction called GStreamer.
I would want it on, at least for some sites like Gfycat and Imgur, because otherwise my phone slows to a crawl trying to load and render a GIF.
Also, what about alpha transparency? Video formats do not have this. GIF is shitty for video clips, but it's not without uses. It just needs to be replaced by APNG.
I wouldn't believe myself whatsoever.
<img src="video.mp4" width="800" height="450" volume="0" autoplay loop>
One day, people will be sad that we didn't anticipate 3D models in <img>. It makes no sense to add support with the current state of technology.
> Video: 73k
Yes, but with some loss of quality
(not as much as I thought, though)
In fact, behind the scenes this might already be happening: technically your ISP could be doing this, and the 1 MB GIF you download may have been reconverted from a video file back into a GIF at your local proxy cache!