Hacker News
Mozilla Advances JPEG Encoding with Mozjpeg 2.0 (blog.mozilla.org)
309 points by dochtman on July 15, 2014 | 113 comments



(Note: I used to be employed by Mozilla, and in that capacity I was the owner of Mozilla's image decoders. I've been disconnected from all decisions for almost a year, though.)

The main take-home here is that while Google's numbers all show WebP as being objectively better, the metrics they chose for comparison were relatively bad (i.e., some of them didn't take into account colour or didn't model colour correctly), and once you accounted for that the numbers were not nearly as good a story for WebP; in some cases, JPEG outperformed it.
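To make the metric complaint concrete: a single-channel PSNR, computed only on the luma plane, is blind to chroma errors by construction. A minimal sketch (hypothetical helper, not the study's actual code):

```python
import math

def psnr(a, b, peak=255):
    # PSNR over a single channel of 8-bit samples. If a comparison only
    # ever feeds this the luma plane, chroma distortion never affects
    # the score -- exactly the "didn't model colour" flaw noted above.
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return float("inf") if mse == 0 else 10 * math.log10(peak * peak / mse)

print(psnr([52, 55, 61], [52, 55, 61]))  # identical samples -> inf
```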

The facts that (1) WebP was not terribly compelling technically, (2) JPEG is already supported by everything on the web, not to mention devices and mobile phones etc, and (3) there's still headroom to improve JPEG in a backwards-compatible way, meant that WebP was (and, it seems, remains) a non-starter.


Until JPEG supports transparency, it leaves a vast hole where a good lossy, alpha-enabled format is needed: namely icons. With the high resolution of mobile devices, using PNG for this use case is a huge waste. Regardless of the quality differences, WebP addresses a major pain point for mobile and web developers. I really think Mozilla should just support it.


There are a few ways of making fully backwards-compatible "lossy" PNG: http://pngmini.com/lossypng.html

You can have icon files 3-4 times smaller, and large photorealistic images 2 times smaller than the regular PNG.
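For intuition, the simplest possible "lossy PNG" trick is to reduce the number of distinct colors before the (lossless) PNG encode; real tools like the one linked use palettes and dithering, but a crude posterization sketch (hypothetical helper) shows the idea:

```python
def posterize(pixels, bits=5):
    # Keep only the top `bits` of each 8-bit channel. The result still
    # encodes as a fully backwards-compatible PNG; it just compresses
    # much better because the pixel data is more repetitive.
    mask = 0x100 - (1 << (8 - bits))
    return [tuple(c & mask for c in px) for px in pixels]

print(posterize([(255, 128, 7)]))  # -> [(248, 128, 0)]
```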


Also, you can use SVG, which is supported by the current versions of all browsers today, and the files will likely be smaller than WebP, JPEG or PNG.


Rendering icons as SVGs is pretty horrible in terms of performance on mobile devices.


The hover effect on your demo images confused me.

I assumed it would switch between the original and the compressed version. Instead it's swapping out the background to demonstrate transparency, which could probably be made more obvious.


Yeah, depending on the image, putting a BMP into a zip is actually significantly smaller than a PNG. Granted, BMP has no alpha channel, but this is just a comment on the size of PNGs. PNG even uses the zip algorithm in a way that is supposed to be optimized for images, but apparently it is not. E.g. the tiles here are a lot smaller as BMPs in a zip: http://panzi.github.io/mandelbrot/

Ok, I put them into one single zip and can't remember if it was solid or not, so it might be the cross-file compression that makes the major difference here.


> Well, bmp has no alpha channel

BMP has variants which support 1-bit and 8-bit alpha channels: http://en.wikipedia.org/wiki/BMP_file_format. Just open any file in Photoshop, save as BMP, then click Advanced. Not sure of browser support.


What? I thought zip and png use the same compression algorithm.


They do indeed.

If you're seeing BMP+ZIP being smaller than PNG it only means your PNG encoder is poor. This can be easily fixed with a PNG optimizer like Zopfli, AdvPNG or OptiPNG.
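Since PNG's IDAT stream is plain DEFLATE (the zip algorithm), what a good PNG encoder actually tunes is the per-row filter pass that runs before compression. A minimal grayscale PNG writer (a hypothetical, stdlib-only sketch) makes the effect of filtering visible:

```python
import struct
import zlib

def chunk(tag, data):
    # Each PNG chunk: big-endian length, tag, payload, CRC over tag+payload.
    return struct.pack(">I", len(data)) + tag + data + struct.pack(">I", zlib.crc32(tag + data))

def encode_png(width, height, rows, filter_type=0):
    # rows: one bytes object of `width` 8-bit grayscale samples per scanline.
    raw = bytearray()
    prev = bytes(width)
    for row in rows:
        raw.append(filter_type)
        if filter_type == 0:    # filter "None": samples stored as-is
            raw += row
        elif filter_type == 2:  # filter "Up": store difference from the row above
            raw += bytes((row[i] - prev[i]) & 0xFF for i in range(width))
        prev = row
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 0, 0, 0, 0)  # 8-bit grayscale
    return (b"\x89PNG\r\n\x1a\n"
            + chunk(b"IHDR", ihdr)
            + chunk(b"IDAT", zlib.compress(bytes(raw), 9))  # IDAT is just DEFLATE
            + chunk(b"IEND", b""))

# Rows that each differ from the one above by a constant: the Up filter
# turns the whole image into near-constant data that DEFLATE crushes.
rows = [bytes([(x * x + y) & 0xFF for x in range(256)]) for y in range(256)]
plain = encode_png(256, 256, rows, filter_type=0)
filtered = encode_png(256, 256, rows, filter_type=2)
print(len(plain), len(filtered))  # filtered is much smaller; same decoded pixels
```

Both outputs decode to identical pixels; only the pre-compression filtering differs, which is the knob optimizers like OptiPNG and zopflipng turn.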


I've had the best luck using optipng and then pngout.

https://github.com/ajslater/picopt ...will do this automatically for you.

Picopt: A multi-format, recursive, multiprocessor-aware, command-line image optimizer utility that uses external tools to do the optimizing.

For an all-in-one GUI approach on the Mac, try ImageOptim.


PNGOut is definitely better than OptiPNG most of the time:

http://www.olegkikin.com/png_optimizers/


zopflipng (with -m switch) is close to or better than the best on those five images:

    1_zopflipng.png  8.466
    2_zopflipng.png 18.739
    3_zopflipng.png  8.012
    4_zopflipng.png 91.879
    5_zopflipng.png  1.117


zopfli: not a png optimizer.



Yes, I even mentioned that.


Plus, with Chrome already supporting it, WebP has ~50% market share in most countries.

Now we'll have two similar but competing technologies and web developers will simply resort to the older formats.


One browser is not enough. Image formats have massive network effects: people want to share, save and remix images. You need support for the format in image viewers, editors, file browsers, mobile apps, and websites.

As an exercise, try having your avatar only in the WebP format and see how useful it'll be.


WebP is mostly handled via transparent proxies - meaning people with proper browsers and Android mobile devices get WebP (large market share), everyone else gets (larger) PNGs.

That way you don't really care either way on the backend.
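The negotiation itself is simple: WebP-capable clients advertise it in the Accept header, so a proxy can pick the payload per request. A sketch (hypothetical function, not any particular proxy's code):

```python
def pick_format(accept_header, has_webp=True):
    # Chrome and Android browsers send "image/webp" in their Accept
    # header; everyone else falls back to the universally supported PNG.
    if has_webp and "image/webp" in accept_header:
        return "image/webp"
    return "image/png"

print(pick_format("image/webp,image/*,*/*;q=0.8"))  # -> image/webp
print(pick_format("image/png,image/*;q=0.8"))       # -> image/png
```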


Saying 'proper browsers' gives away your bias there, since only the Blink engine (Chrome/new Opera) supports it, which is under 20% of the desktop market worldwide.


Saying 'desktop market' gives away your bias there too.


Chrome has approximately the same worldwide market share as Firefox on the desktop. Both put together are eclipsed by IE. The metrics that disagree with this stat measure 'hits' (aka pre-2000 web metrics) instead of 'visitors'.


How much of a gain do you get in WebP versus a properly optimized PNG for icons? I can't imagine it's a compelling difference.


Simple 2 or 3 color icons might not win a lot, but there is also a whole genre of photo-realistic icons which would have a lot to gain from WebP like formats.


I've been trying to pull up some data but I keep coming back to JPG v. WebP. Got a link to anything by chance?


http://geeks.everything.me/2013/04/24/why-we-like-webp/

Suggests a 5x benefit over PNG for mobile app icons.


Google's own marketing materials claim 25-33% improvement at the same quality, so 5x improvement suggests they didn't compare apples to apples.

Comparing formats in a fair way is hard. "Looks almost the same" is a common fallacy: a small change in quality can make a dramatic change in file size, e.g. JPEG 80 and JPEG 90 look almost the same, but one is 2x larger than the other!

For example lossy WebP doesn't support full color resolution, but JPEG by default does. If you don't correct for that you're telling JPEG to save 4x as much color, and the difference is usually imperceptible, because that's the whole point of chroma subsampling.
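The raw-sample arithmetic behind that: at 8 bits per sample, 4:4:4 carries four times the chroma data of 4:2:0, and twice the total. A quick sketch:

```python
def ycbcr_bytes(w, h, subsampling):
    # Bytes per uncompressed Y'CbCr frame: one full-resolution luma
    # plane plus two chroma planes whose resolution depends on the
    # subsampling mode (4:2:0 halves chroma in both dimensions).
    luma = w * h
    chroma = {"4:4:4": w * h, "4:2:0": (w // 2) * (h // 2)}[subsampling]
    return luma + 2 * chroma

full = ycbcr_bytes(1024, 768, "4:4:4")
sub = ycbcr_bytes(1024, 768, "4:2:0")
print(full / sub)  # -> 2.0 overall; the chroma planes alone differ by 4x
```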


Read the doc - the suggested 5x improvement is compared to PNG, which is the only other way to currently solve the problem of an icon with transparent edges (let's leave GIF aside), not compared to JPEG. The gain compared to JPEG was about 50%.


If they compared PNG with lossless WebP the difference would be small (lossless WebP is still using gzip, just with smarter preprocessing).

When you compare with lossy WebP, then the right thing is to use lossy PNG as well.

I can make lossy PNG 4 times smaller than what you get from Photoshop (http://pngmini.com), so that'd make the comparison more fair, and the difference less dramatic.


The person you're replying to wrote a library which does lossy PNG compression, which would be ideal for these use cases:

https://github.com/pornel/pngquant


I appreciate the information, thank you!


I had read in the past that HEVC was probably the best for still image lossy compression. [1]

Does anyone have further information on this?

[1] http://en.m.wikipedia.org/wiki/High_Efficiency_Video_Coding#...


Best from a compression perspective, perhaps. There's the minor detail of it being patent-encumbered. There's no license pool available yet, so you couldn't pay for it if you wanted to. The proposed terms of the pool that is organizing (with some of the known patent holders refusing to join) would require up to $25mln/year for the video codec. No idea if you'd be able to get a better deal for still-images only.

Meanwhile, JPEG is free.


According to the study linked in OP, it's still true. https://people.mozilla.org/~josh/lossy_compressed_image_stud...


Cheers!


But is there headroom for JPEG to replace animated gifs? If I look at mobile social apps these days, animated GIFs eat up enormous amounts of data. JPEG doesn't seem to have an answer for this, and no one has yet, it appears, made the <video> element work for this use case.


> no one has yet, it appears, made the <video> element work for this use case.

Gfycat seems to be getting more and more popular or at least I'm personally starting to see a lot more of their links in place of imgur gif links.

http://www.gfycat.com/


There's also a similar site[0] which is open source, so you could deploy it yourself if you wanted to.

[0]https://mediacru.sh/


I really wish people on HN stopped linking to gfycat and linked to mediacrush instead. The former seems like a very scummy "lowest common denominator" type of site, while the latter is open source, allows self-hosting, has deterministic hashes, uses strong https encryption, ...


I looked over mediacru.sh and unless I'm missing something it doesn't serve the same purpose as gfycat. The point of gfycat is to upload a gif, it converts it to webm then it gives you a bit of embed code that will try to use the webm variant but will fall back to the gif when unsupported. I didn't see anything in mediacru.sh's code that would allow gif <-> video conversion, I tried uploading a few gifs on their site as well and didn't get any video options back from the API.


Mediacru.sh should give you a "view as html 5" option. It did for me when I uploaded a gif a few minutes ago.


It would be better if the iPhone would play <video> tags without user interaction. I get the reason why they disabled that back when the iPhone first came out, but GIFs use up more bandwidth than videos, so they're actually punishing users.


Disagree. On the consumer side, I care more about auto-playing audio than bandwidth. On the content owner side, I only want my videos to autoplay (in most circumstances) if it were accompanied by the audio. If I didn't care about audio accompaniment I can already use GIF.


Perhaps the iPhone should autoplay videos, but muted until you hit a speaker icon (or something). Either way, forcing everyone to use GIF is horrible because of how inefficient it is.


Agreed, I think of this as the "Vine" model of interaction and it's what I'd prefer. Plus leave off the icon if the webm video has no audio track.


I'd like a global option and also some simple per site UI.

This is how click-to-play in Firefox works; it's nice. One thing I haven't figured out how to do is trust embeds from a site (for example, YouTube and Vimeo both have well-behaved embeds, so I'd like to whitelist them, but not the many third-party sites I see them on).


That's only a UI issue. You don't need compression to be horribly inefficient to achieve that :)

For example <img src=vid.mp4> could autoplay looped video without audio, just like GIF, except using tiny fraction of bandwidth and CPU (thanks to HW acceleration) than GIF.


This will autoplay no problem on iPhone:

http://phoboslab.org/log/2013/05/mpeg1-video-decoder-in-java...

:)


Expanding on Ray's comment above, this is really the compelling use case for WebP: a better container that does it all. Right now we have PNG, GIF and JPEG for three different use cases: transparency, animation and lossy encoding.

I can't easily pick and choose which of these I want, which means that we end up with 20MB GIFs that could easily be 1/5 the size, and crazy hacks to get lossy images working with transparency.

This is particularly painful for HTML5 gaming in my previous experience. For one of my projects that involved a giant, high-res game board with transparency (Nick's online Pai Sho), I ended up manually slicing the board up, converting center pieces to JPEG and leaving the edges in PNG. The images are all pasted together to make the final, seamless experience. What a PITA!


> no one has yet, it appears, made the <video> element work for this use case.

4chan has, and it's significant not only because they have a lot of traffic, but also because their implementation has to be pretty solid -- 4chan users would love nothing more than to troll the administrators by breaking this.

http://blog.4chan.org/post/81896300203/webm-support-on-4chan


That and 4chan is a big content generator for the internet. Everything further down the line (reddit, imgur, 9gag, tumblr) should support it. Especially tumblr, which can consume over a gig of RAM if you scroll down long enough.


>4chan users would love nothing more than to troll the administrators by breaking this

Not really worth it. You'd post something once, then you'd get banned for a week. Unless you have a fleet of proxies or can keep changing ip address somehow, and can keep posting while they try to fix whatever exploit it is you found. Kind of hard to target admins when they're all anonymous, too.


If you have a dynamic IP address (most people do), getting a new IP is as simple as changing a byte in your router's MAC address.


It's probably the modem's MAC that matters for cable, and that is probably tied to the account.


I have a cable internet connection and know the difference between a modem and a router. You'll have to take my word for it that every time I change my router's MAC address, my public IP does as well.


Interesting. I've always had to activate a specific modem MAC with my cable provider.


Yes, no one except for Twitter: http://mashable.com/2014/06/20/twitter-gifs-mp4/


And MediaCrush: https://mediacru.sh/


Honestly, animated GIF (like all other image formats) is a terrible format for video-style stuff. It's fine for line art (well, sort of fine, anyway), but if you're actually using it for video you really should be using <video>, because video formats take temporal information into account. (gfycat is a service based around this already.)

It might be possible to do an APNG-style backwards-compatible animated JPEG, but it'll still be worse at it than video formats will be.


Thanks to the Acid3 test, all modern browsers support animation in SVG and SVG in an img tag. Which means you can do extremely small vector-based animations, or write an encoder that embeds JPEGs and animates them like a film strip. You can have alpha transparency using a mask image via SVG filters. You can get really fancy and animate with deltas. The efficiency you lose by the base64 encoding is regained by gzipping.


But only animation via script, not declarative animation.


Yes, declarative animation. That is a specific test in Acid3, and you need declarative animation in SVG to score 100 on that test.

Script animation is actually not supported when SVG is used from an img tag; only declarative animation is.


Actually SMIL (the declarative way to do animation with SVG, for those who don't know) has always been controversial (as far as I know, IE never passed that part of Acid3), is more or less on the chopping block as far as browsers are concerned, wasn't extensively tested in the original Acid3, and, in any case, was removed from Acid3 as of 2011. [1]

[1] https://plus.google.com/+IanHickson/posts/JdHnqpuUER4


And yet, in spite of this: http://caniuse.com/#search=smil

Where have you heard or read that it is on the chopping block?


It was removed from Acid3 (or at least made optional), letting IE pass the test without implementing it. Furthermore SMIL is fairly complex, performs worse than script animation in today's browsers and (iirc) doesn't play well with CSS animation. I think I've read on the mailing list that there is effort done to harmonise those two in the future.


To review: what you have is a blog post from Ian Hickson making a passing reference to deprecating SMIL, and indeed he followed up by removing the feature from the Acid3 test. But set against that:

evidence of actual standards activity and implementations from as recently as June 2014 shows an active merging of CSS animation and SVG animation into a single consistent model called "Web Animations" [1];

the implementation of this merging effort in Chrome's Blink rendering engine [2]; and a rationale for this Web Animations effort, advocated by both Google and Mozilla, that promotes not a removal of the SVG animation features (which are already implemented in everything but IE) but an expansion of their powers [3].

Just because IE is lagging behind doesn't mean this feature is cut. Just because Ian Hickson says the feature is deprecated doesn't mean that it is, or that browser devs will follow whatever he says.

What it does mean though, is they are attempting to cut the feature's dependence on "SMIL", while retaining the currently implemented features. so I think that whatever works in browsers now, you can more or less rely on that working in future browsers as well. So yes, my suggestion is still useful and it's still awesome. And if you use css animations inside the svg (which totally works) instead of svg-smil animations, you can even get it working in IE10

[1]: http://dev.w3.org/fxtf/web-animations/

[2]: http://updates.html5rocks.com/2013/12/New-Web-Animations-eng...

[3]:http://people.mozilla.org/~bbirtles/web-animations/presentat...


APNG (https://en.wikipedia.org/wiki/Apng) is the answer, but Chrome doesn't support it natively.


Webm's definitely been getting some traction as a replacement, if 4chan's support is anything to go by.


How trivial/fast is it to encode gif to webm?

I see many "big" webm videos on 4chan, much bigger than gifs (in dimensions, not in bytes), and they are still smaller (in bytes) than the (dimensionally) smaller gifs.

Wouldn't it bring a huge traffic reduction if every uploaded gif was converted to a, probably, tiny webm?


That's what twitter does.


Twitter now routinely auto-converts animated GIFs into embedded web video.


You can use a JavaScript MPEG-1 decoder instead of <video>; you get auto-playing animated MPEG files instead of GIFs. MPEG-1 is pretty much a JPEG stream with bonus I-frames.

https://github.com/phoboslab/jsmpeg

example:

http://phoboslab.org/log/2013/05/mpeg1-video-decoder-in-java...


>If I look at mobile social apps these days, animated GIFs eat up enormous amounts of data.

In mobile apps? Are they really animated GIFs?


mjpeg? It's used in many DC/DV.

We need a copy-pastable, mute by default <video> format to replace GIFs.


Motion JPEG is arguably even worse than GIF. It's literally just a sequence of JPEG images -- there's no extra compression; 100 frames of MJPEG are 100 times larger than a single image. GIF is a bad format for videos, but at least it can try to be more efficient than this.
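The cost of ignoring temporal redundancy is easy to demonstrate. Here zlib is a crude stand-in for a video codec's inter-frame coding (real codecs predict between frames far more cleverly), applied to the degenerate best case of identical frames:

```python
import random
import zlib

# 100 byte-identical "frames" of incompressible data: the best case
# for temporal prediction, the worst case for per-frame compression.
frame = random.Random(0).randbytes(4096)
frames = [frame] * 100

mjpeg_style = sum(len(zlib.compress(f)) for f in frames)  # each frame alone
video_style = len(zlib.compress(b"".join(frames)))        # cross-frame matches allowed
print(mjpeg_style, video_style)  # per-frame is vastly larger
```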


Could you write up a detailed article and publish it, please? I think it would be useful for people who are interested in this but don't have the background on JPEG or WebP internals.

NVM I found what I was looking for:

http://people.mozilla.org/~josh/lossy_compressed_image_study...


It should be noted, since it is nowhere to be seen in this post, that it breaks API and ABI while still presenting itself as libjpeg version 6 to the system, which is very evil.

Open Issues:

https://github.com/mozilla/mozjpeg/issues/67 https://github.com/mozilla/mozjpeg/issues/21


It is sad that Mozilla won't implement the WebP format in addition to GIF/APNG due to political reasons (I see no valid technical reasons not to include this code, especially after the news about including code for DRM). They've even disabled comments on the WebP bug [1]. And that bug was already the second; they closed the first one [2]. You can still write your comments on this bug (about APNG removal) - it is not closed yet [3].

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=856375

[2] https://bugzilla.mozilla.org/show_bug.cgi?id=600919

[3] https://bugzilla.mozilla.org/show_bug.cgi?id=935466


We (as in browser vendors as a whole, of which I'm no longer directly a part, though I'm still around enough!) don't want to add stuff to browsers just because it's technically sound to include.

People propose plenty of things with technically sound code, but there's far more to developing a platform than including everything that's technically sound. You have to make decisions as to what you wish to include. Once something becomes near-universal (in terms of market share, not in terms of number of browsers; see IE's peak for the clear example there!) it becomes treated as a baseline; once it's there it's incredibly hard to remove. Every time something else gets added, that's more code to maintain, more code to ensure the correctness (and security!) of, and a higher barrier to entry to the browser market. As a result, the default answer to adding anything should be "no".


This just goes to show how broken the browser model is as an app platform. The fact you (browser vendors) have to decide which image formats to support should be a huge red flag demonstrating that something is fundamentally wrong with your platform.

Many things of interest developers might want to do currently require co-operation from the browser vendors, and their default position is 'no' unless it is in their commercial interest. Also, any one of the browser vendor vetoing the proposal is enough to kill it. Apple/Google/Microsoft are all huge companies with many products, so it's quite likely that you are a direct competitor to at least one product from one of the browser vendors, giving them extra incentive to veto most of the time.

Until the web is a platform where the vendors don't have the power to veto basic functionality like decoding an image it will remain the plaything of the browser vendors, used to bludgeon their competition.


Did you not read the section in the linked article about how WebP is not measurably better than JPEG?


So... having to serve huge 5x-larger PNGs due to transparency is not measurably better?

Also, our empirical measurements have shown that WebP degrades more gracefully as you lower quality (compression artifacts aren't as visible), which allowed us to save about 30% of total bandwidth by serving WebP to Android and Chrome clients.


I've read it. Yep, looks like they're comparable for static images. But what about animation? And there are plenty of websites using WebP already. It's not JPEG or WebP - why not just support them both!? (Or APNG and WebP.)


I imagine there's some amount of frustration on the part of people being told they should spend time implementing WebP, when APNG was a real working alternative circa 2004 and nobody else was willing to implement it. (Admittedly, WebP does things APNG isn't capable of.)

Reminds me of MNG, where we had an animation format with support for alpha and either PNG or JPEG style compression of frames, but it never got a pervasive browser implementation so it died. (I get the impression there were strong technical reasons for that, of course.)

Coincidentally, MNG also indirectly solved this problem. JNG was a subset-format of MNG that provided JPEG-compressed static images with optional support for alpha, so implementing MNG implied implementing JNG. But we never got it on the web.

At the very least, instead of just saying 'WebP is bad', they're putting their money where their mouth is and advancing the state-of-the-art in JPEG encoding so that people won't need WebP (other than the alpha channel use case, that is.)


What's the advantage of animated WebP over WebM?

Are there many websites using WebP that aren't Google properties?

I'll be interested to see if a VP9- or Daala-based version of WebP is more compelling.


>But what about animation?

Try gif2webp and gif2apng converters on this example, and you will know:

https://www.google.com/logos/doodles/2014/world-cup-2014-54-...


I wouldn't take the fact that a bug has been filed about removing APNG as evidence that it's going to happen. It definitely won't in the near future.


It's especially amusing since Firefox supports WebM, which is VP8 video in a Matroska container. WebP is a VP8 frame in a simple container.


WebP is not just a VP8 frame. It now has its own separate library to implement decoding, and its animation features are far different from VP8.


How does this MozJPEG 2.0 encoding compare to JPEGMini? I've been using JPEGMini for all client projects; it produces optimized JPEGs that work on all platforms with minute differences, rarely visible to the naked eye, only when I subtract pre/post in Photoshop. Very rarely does it save less than 5%, and for larger images (saved with Photoshop's Save for Web) often 15-30% or even more.

Is the test set of images available?


JPEGMini works by compressing an image as much as possible while keeping the distortion, as calculated by their particular metric, below a threshold. To compare against mozjpeg you'd need to compress a bunch of images with JPEGMini and then compress those same images with mozjpeg so that they were the same size. You could then compare the quality of the resulting images.
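One way to run that same-size comparison is to binary-search each encoder's quality knob until the output sizes match. A sketch, assuming a hypothetical `encode(quality) -> bytes` callable and monotonic size growth:

```python
def quality_for_size(encode, target_bytes, lo=1, hi=100):
    # Find the highest integer quality whose encoded output still fits
    # within target_bytes; returns None if even the lowest quality is
    # too large. Assumes size is non-decreasing in quality.
    best = None
    while lo <= hi:
        q = (lo + hi) // 2
        if len(encode(q)) <= target_bytes:
            best, lo = q, q + 1
        else:
            hi = q - 1
    return best

# Toy stand-in encoder whose output size is simply 100*q bytes:
print(quality_for_size(lambda q: b"x" * (q * 100), 5000))  # -> 50
```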

The test set of images is here: https://github.com/bdaehlie/web_image_formats/tree/master/im...

If you're interested, https://github.com/dwbuiten/smallfry is a program similar to JPEGMini that uses mozjpeg as its backend.


How much extra saving does JPEGMini or smallfry offer over a sanely but conservatively encoded JPEG?


http://people.mozilla.org/~josh/lossy_compressed_image_study...

mozjpeg does trellis quantization; there is no detailed explanation of how JPEGMini works, but a guess is that it optimizes the DCT coefficients similarly to how it is described in this paper:

http://vision.arc.nasa.gov/publications/sid93/sid93.pdf


There are some papers by the JPEGMini people that give a pretty good hint as to how it works:

http://spie.org/Publications/Proceedings/Paper/10.1117/12.87... http://spie.org/Publications/Proceedings/Paper/10.1117/12.87...


jpegrescan AFAIK is the gold standard today. https://github.com/kud/jpegrescan


This is all fine and dandy, but whatever happened to JPEG-2000 and wavelet-based compression? Still licensing/patent issues blocking wide adoption?


It's still competitive performance-wise, but adoption has been lacking. Patents were an issue but there were other problems too: uncertainty about rights and the high complexity of the spec (which for a while cost, IIRC, $1500 simply to get a copy) meant that there wasn't much traction in the open-source world, which in turn meant that many image-processing apps either didn't support it at all or used a slow library with limited compatibility and incomplete spec support. There are several commercial libraries available, but they also had compatibility issues (largely resolved by now) and required negotiating licenses.

Mike Shaver had a good comment in the Mozilla feature-request ticket which is no doubt representative of many other projects' concerns:

https://bugzilla.mozilla.org/show_bug.cgi?id=36351#c120

None of this is insurmountable, of course, but it made a lot of people hesitant to invest heavily in it. I wrote about the long-term risks awhile ago:

http://blogs.loc.gov/digitalpreservation/2013/01/is-jpeg-200...

The situation is getting better: OpenJPEG is now actively developed (see http://www.openjpeg.org), supports many of the more valuable features (e.g. tiled decoding, so you can avoid decoding an entire large image to extract a fragment), and is becoming an official reference implementation:

http://comments.gmane.org/gmane.comp.graphics.openjpeg/773

Recently ImageMagick and Pillow (the maintained fork of the Python Imaging Library) added support for JPEG-2000 using OpenJPEG and the same trend seems to be happening in other places. The big remaining challenge is browser support for non-Apple browsers but that's now possible, if somewhat grotesque, using Emscripten on OpenJPEG to render into a canvas tag.


It's very sad that Mozilla does not even mention their own codec, Daala, and only compares to HEVC etc. I suspect it severely lacks funding.


Daala development is proceeding at a rapid pace. We're just not ready to bring Daala into this conversation quite yet.


There's a progress report for Daala that makes a similar comparison. They claim to beat JPEG, VP8 and x264 but not HEVC (yet!) on still images.


Not sure how fair a comparison it is, but FWIW I built it and ran it (./cjpeg) over a few photos and got ~18% savings in file size vs. ImageMagick convert at quality 50%.


5%? I don't know, but I hoped for significantly more.


IMO the linked study is pretty much useless because of the enforced 4:2:0 chroma subsampling. I guess this is because some of the newer codecs were originally video codecs, where 4:2:0 is standard, but for still images I expect much better. Photoshop, which is obviously widely used on the desktop to make JPEGs, only uses 4:2:0 for quality <= 50 by default.

This also makes me question the use of metrics to measure image quality.


I don't think it would make sense to compare some codecs at 4:2:0 and some at 4:4:4. Unless they can all do 4:4:4, you have to pick a subsampling that they can all do and compare them at that so it's like-for-like.

If your goal is to demonstrate 'JPEG can deliver better quality than this format by using 4:4:4, because that format can't do 4:4:4', then that's great, but it's a different observation than the one they're trying to make. (Also, I would be pretty curious about whether the different subsampling produces superior file sizes.)


Facebook's $60,000 donation is interesting. The article says that the average file size reduction was 5%. Given that Facebook has built datacenters capable of storing exabytes of photo data in "cold storage", I wonder how much money Mozilla just saved Facebook in storage space costs in relation to how much they donated.


It would be interesting to learn what the CPU overhead costs vs. the additional cost of 5% more cold storage - but I bet the FB folks did that calculation.


Seems to be source-only. Has anyone built win64 binaries for this that they'd like to share?


On that subject; say what you want about how clunky CMake is, but it's really nice to be able to build projects like this with VC++ or plain MinGW-w64.


Is there a command line tool available? I'd like to include it in my website build process.


Yes, building the source will produce (among other things) a command line application called "cjpeg", which can encode or re-encode.


- Alpha channel
- Animation
- Proper metadata
- File size

The list of things that WebP offers that JPEG, PNG, or GIF alone will never address.


[deleted]


My guess is that the $60,000 is for initial exploratory work, and if they find a way to improve it substantially that requires more development effort a bigger donation will come.

And the saying about gift horses' mouths applies to FB's horses too ;)


If Google had done this, people would be in here complaining it wasn't open even though it was released under an open source license.



