Guetzli: A New Open-Source JPEG Encoder (googleblog.com)
600 points by ashishgandhi 37 days ago | hide | past | web | 128 comments | favorite

This seems to be optimizing for a "perceptual loss function" over in https://github.com/google/butteraugli/blob/master/butteraugl...

Looking at the code for that, it looks like 1,500 lines of this:

    double MaskDcB(double delta) {
      static const double extmul = 0.349376011816;
      static const double extoff = -0.894711072781;
      static const double offset = 0.901647926679;
      static const double scaler = 0.380086095024;
      static const double mul = 18.0373825149;
      static const std::array<double, 512> lut =
          MakeMask(extmul, extoff, mul, offset, scaler);
      return InterpolateClampNegative(lut.data(), lut.size(), delta);
    }
The code has hundreds of high-precision constants. Some even seem to be set to nonsensical values (like kGamma set to 0.38). Where did all of them come from? The real science here seems to be the method by which those constants were chosen, and I see no details on how it was done.

Upon more investigation, these numbers are certainly machine generated. Here is an example:

A constant lookup table is used for determining the importance of a change vs. distance. Separate tables are used for vertical and horizontal distances (I guess eyes might be slightly more sensitive to vertical edges than horizontal ones?).

Those tables are wildly different in magnitude:

    static const double off = 1.4103373714040413;  // First value of Y lookup table
    static const double off = 11.38708334481672;   // First value of X lookup table

However, later on, when those tables are used, another scale factor is used (simplified code):

    static const double xmul = 0.758304045695;
    static const double ymul = 2.28148649801;
The two constant scale factors directly multiply together, so there is no need for both. No human would manually calculate to 10 decimal places a number which had no effect. Hence, my theory is that these numbers were auto-generated by some kind of hill-climbing algorithm.

Yeah, this looks like an optimizer wrote the program. I presume the code was tested against natural images... so it might not be appropriate for all image types.

See fig. 2.1 here:


and also read here:


That is my quick guess at how to roughly derive the constants (because it is new, there are probably some fancy modifications, though :) )

This kind of constant appears naturally when you approximate some computation, a famous example being Gaussian quadrature (look at x_i values depending on the "precision" you want : https://en.m.wikipedia.org/wiki/Gaussian_quadrature )

I don't know if this code is related to that, but just pointing out that seemingly nonsensical constants may appear more often than one would think.
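As a concrete illustration (generic, unrelated to Guetzli's code), numpy can regenerate the Gauss-Legendre nodes and weights mentioned above, and they look exactly like this kind of inscrutable high-precision constant:

```python
import numpy as np

# Gauss-Legendre nodes/weights for a 3-point rule: high-precision "magic"
# constants that come out of a numerical root-finding computation,
# not from anyone typing them in by hand.
nodes, weights = np.polynomial.legendre.leggauss(3)
print(nodes)    # roots of the degree-3 Legendre polynomial
print(weights)

# The 3-point rule integrates polynomials up to degree 5 exactly over
# [-1, 1]; e.g. the integral of x^4 over [-1, 1] is 2/5.
approx = np.sum(weights * nodes ** 4)
print(approx)
```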

So... machine learning? (Sorry for buzz-wording)

It is old school: 100,000+ CPU hours of the Nelder-Mead method (plus common tricks) to match butteraugli to a set of 4,000 human-rated image pairs created with an earlier version of Guetzli and specially built image-distortion algorithms.
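For anyone curious what this style of constant-fitting looks like in miniature, here is a toy sketch using scipy's Nelder-Mead; the loss function, data, and constants are invented for illustration and have nothing to do with butteraugli's actual model:

```python
import numpy as np
from scipy.optimize import minimize

# Invented stand-in data: "human ratings" as a function of a distortion
# measure, generated from a hidden logistic model we then try to recover.
rng = np.random.default_rng(0)
distortion = rng.uniform(0.0, 4.0, size=200)
human_score = 1.0 / (1.0 + np.exp(-(1.7 * distortion - 2.3)))

def loss(params):
    """Mean squared error between the model's predictions and the 'ratings'."""
    mul, off = params
    predicted = 1.0 / (1.0 + np.exp(-(mul * distortion - off)))
    return np.mean((predicted - human_score) ** 2)

# Nelder-Mead is derivative-free, which is handy when the objective
# (a perceptual metric scored against human raters) is too messy
# to differentiate.
result = minimize(loss, x0=[1.0, 1.0], method="Nelder-Mead")
print(result.x)  # converges near [1.7, 2.3]
```

Note how the optimizer happily emits constants to full float precision, which would explain the 12-digit literals seen in the butteraugli source.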

How did you protect against overfitting? How about local maxima? Some of your constants look surprising to say the least.

Most notably:

* The gamma value of 0.38 (when most studies suggest 1.5 - 2.5 for human eye gamma)

* The significant difference in the vertical and horizontal constants (when, as far as I know, human eyes are equally sensitive to most distortions independent of angle).

There was a large variety of regularization and optimization techniques used. An embarrassingly large amount of manual work went into both.

In this use the gamma is the inverse of something close to 2.6. Butteraugli needs both gamma correction and inverse gamma correction.

The FFT coefficients only look weird; they should actually lead to a symmetric result if our math is correct. In a future version we will move away from the FFT, so I don't encourage anyone to debug that too much.

> In a future version we move away from the FFT

Do you have any idea when that will land in the open? Say, before 2018? Or maybe a little sooner?

Very confused as to why this was downvoted?

Nothing to be embarrassed about, you got results. Results are what matters in the end.

This is even more the case with software. Whether this cost 20 or 20 million man-hours is irrelevant in the long run, because eventually it will be used enough to offset the cost, if it is as good as everyone says. Of course the short-term costs hurt now, but it sounds like you still had computers do much of the heavy lifting.

To those who didn't notice: that's one of the authors, who also took a large part in previous related work.

Very nice.

One question: as the top-level comment in this thread noted, this algorithm may be specialized to certain kinds of images.

Can you release info about the kinds of images/datasets this approach will pathologically fail on? That would be really, really awesome.

I've seen this done in the signal-processing domain -- someone goes to Matlab, creates a filter or other transformation there, then presses a button and it spits out a bunch of C code with constants looking like that. So they probably did the same thing.

As the author of the original libjpeg (back in 1991), I think this has been a long time coming! More power to Google.

Thank you for giving such a present for all of us! JPEG was in my opinion really ahead of its time, is still impressive, and many of the engineering compromises between simplicity and efficiency are just brilliant.

>More power to Google.

Careful what you wish for!

Thanks so much for libjpeg. You totally rock.

Thank you for writing such beautiful software.

Impressive :)

What are you working on these days? Image codecs still?

No -- computer security -- which is a fascinating field. There are arms races here between the good people and the bad people. It isn't clear that the forces of good are winning. It is a Red Queen problem.

The original libjpeg code was written to try and change the Usenet News binary pictures groups over from GIF to JPEG (so that more images would fit down the rather narrow transatlantic pipe that I had at the time). The choice of license turned out to be a good one (it predated the GPL V2) -- who knows what would have happened if we (the precursor to the IJG) had chosen that one.

Were you already security savvy before, or did you learn on the spot? It's as exciting as it is scary, and I wouldn't touch it with a ten-foot pole :)

(thanks for the answer btw)

I'm working on a similar thing (http://getoptimage.com). While Guetzli is still visually better and a bit smaller in file size, it's terribly slow and requires a lot of memory. But it's a great experiment. So much knowledge has been put into it.

I believe using a full blown FFT and complex IQA metrics is too much. I have great results with custom quantization matrices, Mozjpeg trellis quantization, and a modification of PSNR-HVS-M, and there's still a lot of room for improvement.

> it's terribly slow and requires a lot of memory.

...and generates a solution that uses far less bandwidth, especially after thousands or millions of hits, which is the real point of the exercise.

Cloud computing companies love this. They've got a lot of bored hardware to put to use. It's absolutely no surprise to see solutions like this coming from Google. Spending a dollar of compute time to save $1000 in bandwidth is a no-brainer win for a company with a million servers.

At the image upload rate nowadays, there's a place for a practical solution, and for any external company, that kind of cloud computing will cost a fortune.

Have you noticed that a lot of image results from a Google search are actually .webp? Probably also for that reason.

Would it be possible to accelerate Guetzli on a GPU?

It's block-based so definitely yes.

It appears that the FFT can be GPU-accelerated. Nvidia has cuFFT, which claims to be 10x faster.

I'd expect this to behave quite differently to cuFFT: the transforms are likely to be small (either length 8 1D FFTs, or 8x8 2D FFTs) and thus synchronisation overhead is likely to dominate if one was to try to parallelise within a transform (other than via SIMD). However, this small size does mean that the transforms can be written out to have "perfect" data transfer and branching behaviour, so that they parallelise well at JPEG's natural parallelisation granularity (the 8x8 pixel blocks).
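A rough numpy sketch of that parallelisation granularity, using scipy's DCT as a stand-in for the small per-block transforms (this is illustrative, not Guetzli's or cuFFT's code):

```python
import numpy as np
from scipy.fft import dctn

def blockwise_dct(image):
    """Apply a 2-D DCT independently to each 8x8 block.

    Each transform is tiny, so the natural GPU mapping is one block
    (or a batch of blocks) per thread group, rather than trying to
    parallelise inside a single transform.
    """
    h, w = image.shape
    # Rearrange into a (h//8, w//8, 8, 8) grid of 8x8 blocks.
    blocks = image.reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2)
    # Transform all blocks in one batched call over the last two axes.
    return dctn(blocks, axes=(-2, -1), norm="ortho")

img = np.random.default_rng(1).random((64, 64))
coeffs = blockwise_dct(img)
print(coeffs.shape)  # (8, 8, 8, 8)
```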

Any plans for a WordPress plugin like TinyPNG has? We use that currently, but TinyPNG's JPG output leads to visible pixelation.

Try out the https://kraken.io plugin. Kraken.io's optimization is more about fidelity to the original than maximising file size savings.

Can vouch for Kraken.io -- I've just begun using it to optimize ~50k+ interior architectural images so fidelity is important to me and my users. I'm about a third of the way done with the project and am saving just about 40% in size on average.

I'm still thinking on the best way to implement it. Right now you can mount an FTP/SFTP folder and process from there. But I might try to bypass the mounting part. Server version is also in the works.

Google's implementation may be slower, but it's open source.

I'll run some of my own experiments on this today, but I'm initially concerned about color muting.

Specifically, looking at the cat's-eye example, there's a bit of green (a reflection?) in the lower pupil. In the original it is #293623 (green); in the libjpeg output it is #2E3230 (still green, slightly muted). But in the Guetzli-encoded image it is #362C35 -- still slightly green, but quite close to grey.

In my experience people love to see colors "pop" in photos (and photography is where JPEG excels) - hopefully this is just an outlier and the majority of compressions with this tool don't lose color like this.
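The drift described above is easy to quantify from the sampled hex values; here is a quick sanity check of the per-channel deltas (raw RGB differences, not a perceptual metric):

```python
def hex_to_rgb(h):
    """'#293623' -> (41, 54, 35)."""
    h = h.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

original = hex_to_rgb('#293623')  # sampled pixel in the original
guetzli = hex_to_rgb('#362C35')   # same pixel in the Guetzli output

# Green drops while red and blue rise, pulling the pixel toward grey.
deltas = [g - o for o, g in zip(original, guetzli)]
print(deltas)  # [13, -10, 18]
```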

In general if you want to avoid any color changes in blobs a few pixels in size, you’ll want to take it easy on the compression, and take the hit of a larger file size in trade.

I suspect that if you give this algorithm twice the file size as a budget, that green color will come back.

I agree, giving more file size may get us our colors back. And after some experimentation I'd like to be able to confirm something a bit abstract like, "Guetzli is good for reducing artifacts by sacrificing color" or some such snippet.

It would definitely have its uses as such. Or maybe it's great all around and I just found one bad example?

Guetzli sacrifices some chromaticity information, but tries to keep that in balance with the intensity loss. Guetzli sacrifices colors much less than the commonly used YUV420 mode -- the common default mode in JPEG encoding.

Agree, that difference is very striking side by side to me at least. Maybe not to everyone, but I hope they have some people with very good color perception on their team so they'll be able to see the difference.

Some comparison with the mozjpeg encoder here: https://github.com/google/guetzli/issues/10


> We didn't do a full human rater study between guetzli and mozjpeg, but a few samples indicated that mozjpeg is closer to libjpeg than guetzli in human viewing.

Sort of related, but what's the story with fractal image compression? When I was at university (~20 years ago) there was a lot of research going into it, with great promises heralded for web-based image transfer. There was a Netscape plugin that handled them. They seemed to just disappear in the early 2000s.

I think this paper was part of it, which showed that the advantages of fractal image compression could equivalently be achieved with smooth wavelets and efficient representation of zerotrees, except with a lot more speed and flexibility (sorry no PDF): https://www.researchgate.net/publication/5585498_A_wavelet-b...

> sorry no PDF

No directly-linkable PDF :P


Seems to have been patent-encumbered and largely abandoned in the commercial world, but there is a FOSS library called fiasco (.wfa files) that is included with netpbm, available on most *NIX systems.



New formats are generally a huge pain in the media world... Huge companies like Google have been trying to get webp adopted for years and it's still not there, and that's also why they're still putting so much effort into png/jpg.

Wavelets have horrific local texture artifacts: think a random patch of grass having detail and the rest not.

It's been a while since I read about fractal image compression, but way back when, I read quite a bit about it. The impression I got about the algorithms I read about was that a) the term "fractal" seemed like a bit of a stretch, and b) they were a neat hack for building and exploiting a statistical model of an image, but the promises that were getting thrown around were pretty overblown. My guess is that these techniques would be blown out of the water by modern CNN-based methods.

My impression was that a fractured landscape of patents and some encoding performance issues doomed fractal image compression. Add rapidly expanding bandwidth and the need for better image compression dried up. DCT and related techniques were good enough to make it a tough market to enter and compete with. I could be way off base though.

Cool, but neither the article nor the paper (https://arxiv.org/pdf/1703.04416.pdf) mention just how much slower it is.

It's about 1 MPixel / minute on a typical desktop. The other paper mentions that it's extremely slow, but truly we did forget to give actual numbers there.

Worth noting that a great deal of lossy image compression methods currently in use were developed in the mid to late 1990s, when a 90 MHz Pentium was an expensive and high end CPU. Spending CPU time to do the one time compression of a lossless image to lossy is not as expensive in terms of CPU resources as it used to be.

To share my experience, I tried today with a 50-megapixel image: it took a full hour and constantly used 10% of my CPU. But the quality was great (even sharper?)!

The Github README says, "Guetzli generates only sequential (nonprogressive) JPEGs due to faster decompression speeds they offer." What's the current thinking on progressive JPEGs? Although I haven't noticed them recently, I don't know whether they're still widely used.

As a user, I dislike progressive images because of the ambiguity regarding when they've finished loading.

As a user, I love progressive images because I can see the whole image very fast. (I don't have to wait for it)

I use progressive JPEGs if they have a smaller filesize, which is true most of the time.

mozjpeg also generates progressive JPEGs by default, for the same reason.

I don't really understand why they would be slower to decode. It's really just the same data ordered differently in the file.

I can see that if you try to render an incomplete file you might end up "wasting" effort blitting it to the screen and such before the rest of the data is decoded. But if that's a concern, can't one simply rearrange the data back to scanline order and decode as normal?

They are slower to decode mostly due to decreased cache locality. In sequential JPEGs you read one block's worth of data, make pixels out of it, and write the pixels out. In progressive encoding, you need to write the pieces of coefficients back to memory at every scan -- the whole image won't fit into the cache -- so there's one more memory round trip for every pixel. Also, there are just more symbols to decode in total.

Wow, "webmasters creating webpages" is something that I haven't heard for a very long time! I'm nostalgic.

Lots of Swiss German coming from Google lately. Zöpfli, Brötli and now Guetzli.

I'm still hoping for a Google Now that understands Swiss German :)

There's a huge Google lab in Zurich [1], probably that's why.

[1] https://careers.google.com/locations/zurich/

I'm a bit sad but I guess I could have seen it coming ;-).


Although Zurich's software engineers are as expensive as those in the Bay area, Google employs 1500 of them and they bought a new building next to the main train station to hire hundreds of new ML / AI experts.

(Full disclosure: I am a programmer and I try to match programmers with Zurich's startups for a living.)

So, if you want to move to Zurich, you'll find my e-mail address in my HN handle. Read more about Switzerland in my semi-famous blog post "8 reasons why I moved to Switzerland to work in tech": https://medium.com/@iwaninzurich/eight-reasons-why-i-moved-t...

I'm Swiss German and don't understand half of the Swiss German dialects. Swiss German must be the ultimate hard case for machine translation.

I'm assuming it's because the work originated in their R&D labs in Zurich?

I wonder, does google's blog pick up that I can't read their web page due to javascript blocking? Do they evaluate how many readers are turned away due to such issues?

Larry Page doesn't care, but it does keep Sergey up at night.

I wonder how Dropbox's Lepton[1] compresses JPEGs encoded using Guetzli. Since they already pack more info per byte, would there be noticeable compression?

Someone out there must have tried this.


I'd expect it to still save >20%. Lepton uses arithmetic encoding instead of Huffman (-10%), and predicts each 8x8 block based on its neighbors (-20%). Guetzli shouldn't interfere with either of these.

When we pack guetzlified JPEGs we see slightly smaller wins than with stock JPEGs. Think -14% for guetzlified JPEGs vs. -20% libjpeg JPEGs.

I use ImageOptim (https://imageoptim.com) for small tasks. For larger tasks, https://kraken.io is nuts.

Why spend a lot of time improving JPEG instead of spending it promoting an HEVC-based standard like this one? http://bellard.org/bpg/

Install base. JPEG has already been promoted and is everywhere. If it's not a pain to better use what's already out there, why would one want to support another format (with all its code bloat, security, and legal implications) indefinitely?

If anything, they would spend the time to make it AV1-based, which apparently was expected to come out this month:



Nope, they pushed the expected release date to the end of this year[1]. I was really hoping it would come out this month too, but then I realized we won't see much adoption till 2019. 2018 will be spent on a couple of releases of software decoders and some adoption, and 2019 is when the hardware decoders will be released, which is when we can expect everyone to move to AV1.

But I'm still skeptical, because HEVC might be more widespread, with hardware decoders everywhere, and people might just not care enough to move to the new standard. If MPEG-LA exploits its dominance really badly with license fees, then we can expect MPEG codecs to die off. Although I think x264 will still live.

[1]: https://fosdem.org/2017/schedule/event/om_av1/attachments/sl...

(4th slide. This codec will be standardized after the experiments are removed and it is frozen to test software.)

The problem with HEVC is HEVC Advance. A second patent pool that appeared 2 years ago.

Not just HEVC Advance. There are also some patent holders who aren't members of any pool, like Technicolour.

I would've preferred it if AV1 were 2x better than HEVC, but I don't think even HEVC was 2x better than H.264. So if it can achieve at least a 50% improvement in all tests against HEVC, that may be a good enough improvement for companies like Netflix, Amazon, Facebook, and Twitter to adopt it, as well as TV show and movie torrent uploads (which have their own impact on codec adoption). Plus, YouTube is a given, and it's nice to hear that even Apple may adopt it and is actually going back on promoting HEVC support.

I agree that HEVC isn't a game changer.

I actually prefer x264 encoded video even though it results in much larger file sizes. Although HEVC has lower bitrate for supposedly same quality, my needs aren't constrained enough where I have to go for HEVC.

I hope AV1 doesn't lower the quality aspect since it is focused on being a primarily streaming codec with most of the companies being streaming focused (Google, Netflix, Amazon and Vidyo) and them focusing on better compression rate.

If I recall correctly, HEVC was also meant to be a streaming codec, and I feel like that led to the lower quality compared to H.264. It just doesn't feel that sharp, although it is supposed to be 1080p. The blurring aspect is especially bad.

I don't think AOMedia guys will blow it though. I feel like they have a lot more expertise since there are people from multiple codecs (VP9, Daala, etc.) contributing to this.

Because patents.

And slowness of cross-browser deployment. Both for stupid and very valid reasons.

As an example of the latter:

I think Opera Mini (which I ran the development of for its first decade) still has somewhere around 150-200 million monthly unique users, down from a peak of 250M. Pretty much all of those users would be quite happy to receive this image quality improvement for free, I think. (Assuming the incremental encoding CPU cost isn't prohibitive for the server farms.) Opera Mini was a "launch user" of webp (smartphone clients only) for this particular reason.

Many of those users' devices are Nokia/Sony Ericsson/etc. J2ME devices with no realistic way of ever getting system-level software updates. They are still running some circa-2004 libjpeg version to actually decode the images. It's still safe, because the transcoding step in Opera Mini means they aren't exposed to modern bitstream-level JPEG exploits from the current web, but it underscores why any improvement targeting formats like JPEG is still quite useful.

Opera Mini for J2ME actually includes a very tiny (like 10k, iirc) Java-based JPEG decoder, since quite a few devices back then didn't support JPEG decoding inside the J2ME environment. It's better than having to use PNG for everything, but because it's typically 5x-10x slower than the native/C version even in a great JVM of the time, it really only makes sense as a fallback.

Google already has a video-codec-based image format: WebP.

bpg will never get anywhere due to being patent encumbered.

There is no way we will start paying royalties to show images on the web.

This looks really great. I wish the author(s) could provide a detailed overview of the human-vision-model algorithm being implemented -- what it is doing and why -- so we could reproduce an implementation, maybe even provide improvements. Otherwise, amazing work.

This Swiss naming of algorithms really gets old, especially if you speak (Swiss) German...

Nice Makefile, Jyrki. I really have to look into premake, which generated it.

But I get a gflags linking error with 2.1 and 2.2 with -DGFLAGS_NAMESPACE=google. This is atrocious. "google::SetUsageMessage(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)" Without this it works fine.

Guess I still have some old incompat gflags headers around.

EDIT: The fix on MacPorts, with 2.2 installed by default into /usr/local, is: make CXX=g++-mp-6 verbose=0 LDFLAGS="-L/usr/local/lib -L/opt/local/lib -v" CFLAGS=-I/usr/local/include

i.e. enforce gflags 2.2 over the system 2.1

Great tech, shame about the name

Is JPEG2000 with progressive/resolution-responsive transcoding still a thing, or is HTML <picture> the way to go for responsive images (or maybe WebP)?

JPEG2000 died a bitter patent death.

Actually, it's alive and well in the form of embedded images in PDFs, where it's known as the JPXDecode filter. Most of the ebooks (scans) I've downloaded from archive.org use it. If it didn't have any huge advantage, I doubt they would've chosen it over standard JPEG.

The real problem, as far as I can see, is that JPEG2000 is really slow to decode due to its complexity.

Modern implementations are decent but there's no open source implementation in that class. OpenJPEG is improving but it's much slower than Kakadu (which is what CoreImage uses) or Aware.

Shameless plug for my JPEG 2000 codec: https://github.com/GrokImageCompression/grok . Performance is currently around 1/3 of Kakadu's.

Heh, yes - I've been following that since you announced it. It'd be really nice if we could start getting the OSS toolchain onto a JP2 implementation with decent performance — I think jasper really soured the reputation for many people.

The PDF part is pretty interesting, because don't modern browsers ship with PDF readers? To decode the embedded JPEG2000s, the browser has to be able to decode them, right?

PDFs are packaged Postscript. Postscript is a Turing complete language and the output of a script is a page.

PDFs with JPEG2000 images contain a Postscript library to decode these images. That is the code that gets to show them.

Any time you watch a movie at the cinema, JPEG 2000 is in use, decoding the film.

Doesn't Safari support JPEG2000?

So far the results have been useful for me. It's been able to reduce size on some tougher images that other optimizers would ruin quality-wise.

How relevant to web pages is this?

The blog makes it sound like that's the target but the paper has this line:

"Our results are only valid for high-bitrate compression, which is useful for long-term photo storage."

Do the authors think the size/quality benefits still show up when targeting the lower bitrates/qualities that are more common on the web? Do they intend to try to prove it?

Quality-wise, Guetzli is applicable to about 50% of the JPEG images on the internet. The other half are stored at quality lower than 85, and Guetzli declines to attempt to compress to those qualities.

Another limitation is that Guetzli runs very slowly. This gives a further limiting axis: Guetzli in its current form cannot be applied to a huge corpus of images. Perhaps this covers half of the images on the internet.

So, let's say that Guetzli is 25% relevant to the web pages.

If it may benefit anyone: I used this simple batch file to test out the lower bound (-quality 84) with drag-and-drop of one or multiple images on Windows x86-64:

  echo Drag and drop one or multiple jpg or png files onto this batch file, to compress with google guetzli using a psychovisual model
  :loop
  if [%1]==[] goto :done
  echo compressing %1...
  guetzli_windows_x86-64.exe -quality 84 %1 "%~dpn1_guetzlicompressed%~x1"
  shift
  goto :loop
  :done
  echo DONE
I'm concerned about the color changes that are clearly visible throughout the whole image.

Guetzli runs the whole image at the same specified quality -- when artefacts start to emerge, they are everywhere, not just in one sensitive area. Guetzli is more likely to be worth its CPU cost at a higher quality, around 92-96.

Very cool. I'm not an expert, but does JPEG generally have a ton of flexibility in compression? Why so much difference in sizes?

Three main methods:

1) YUV420 vs YUV444. Guetzli practically always goes for YUV444.

2) Choosing quantization matrices.

3) After normal quantization, choose even more zeros. JPEG encodes zeros very efficiently.

When doing the above, increase the errors where they matter least (certain RGB values hide errors in certain components, and certain types of visual noise hide other kinds of noise).
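A minimal sketch of steps 2) and 3): quantize the DCT coefficients of a block, then force extra small coefficients to zero. The quantization matrix and the zeroing rule here are invented placeholders; Guetzli's actual search is far more elaborate:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(2)
block = rng.integers(0, 256, size=(8, 8)).astype(float) - 128.0

# Step 2: quantize the DCT coefficients with a (made-up, flat) matrix.
quant = np.full((8, 8), 16.0)
coeffs = np.round(dctn(block, norm="ortho") / quant)

# Step 3: zero out additional small coefficients -- zeros are nearly
# free in JPEG's entropy coding, so this shrinks the file further.
small = np.abs(coeffs) <= 1
coeffs[small] = 0.0

# The cost is reconstruction error, which a perceptual metric would
# steer toward the places where it is least visible.
reconstructed = idctn(coeffs * quant, norm="ortho")
print(int(small.sum()), round(float(np.abs(reconstructed - block).mean()), 2))
```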

Step 3), choosing more zeros, can be generalized with trellis quantization, which does a search for the best values to encode for each block for the best distortion-per-rate score, where distortion can be any metric (edit: apparently guetzli does some sort of whole frame search for this). mozjpeg does trellis with effectively the PSNR-HVS metric. Because the other two steps are only one setting that affects the entire picture, I do wonder how Guetzli would perform if it was just a wrapper around mozjpeg.

Yes, JPEG encoding has a ton of flexibility. You rearrange each block of pixels using the discrete cosine transform, which tends to pack more significant values towards one corner, and then you have lots of freedom over how to quantize those values. See https://en.wikipedia.org/wiki/JPEG#Quantization

On top of that, you could tweak the quantized values themselves to make them more compressible.

There's less flexibility than you might think - you get only one choice of quantizer and quantization matrix for the entire frame. So pretty much your only option is to twiddle the values themselves. This is usually done with trellis quantization, such as in mozjpeg. Guetzli seems to implement something simpler that just sets increasing numbers of coefficients to zero (based on my cursory reading of the source code).

I'm afraid Guetzli is quite a lot more complex. It does a global search on this, i.e., quantization decisions in neighboring blocks may impact the quantization decisions on this block. Also, quantization decisions have cross-channel impact between YUV channels.

There is no block to block prediction other than DC prediction, so is this effect due to your distortion function spanning multiple blocks? Same for cross YUV channels, because your metric is in RGB space?

edit: second read-through I found the paper [1] which explains it. The answer is basically "yes", where the large scale distortion function is basically activity masking. Normally this would be implemented with delta-QPs, but because JPEG doesn't have that, Guetzli uses runs of zeroes instead.

[1] https://arxiv.org/pdf/1703.04421

This comes through the internal use of butteraugli -- basing the quantization decisions on butteraugli.

Butteraugli uses an 8x8 FFT, but computes it at every 3x3-pixel step, creating coverage at block boundaries. In later stages of the butteraugli calculation, values are aggregated from an even larger area. Block-boundary artefacts are taken into account this way and impact the quantization decisions.

Butteraugli operates neither in RGB nor YUV. It has a new color space that is a hybrid of tri-chromatic and opponent colors. Black-to-yellow and red-to-green are opponent, but blue is modeled closer to tri-chromatic. In simpler terms, it is possible to think of it as follows: first apply inverse gamma correction, second apply a 3x4 transform to RGB, third apply gamma correction, fourth calculate r - g and r + g, and keep blue separate.
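The four steps as described can be sketched like this; note that the gamma value and the 3x4 matrix below are placeholders for illustration, NOT butteraugli's actual constants:

```python
import numpy as np

GAMMA = 2.2  # placeholder; butteraugli's actual value differs
MIX = np.array([  # placeholder 3x4: three outputs from r, g, b plus a bias
    [0.35, 0.55, 0.10, 0.0],
    [0.30, 0.60, 0.10, 0.0],
    [0.10, 0.20, 0.70, 0.0],
])

def to_opponent(rgb):
    """rgb in [0, 1] -> (red-to-green, black-to-yellow, blue-ish)."""
    linear = rgb ** GAMMA                    # 1) inverse gamma correction
    mixed = MIX[:, :3] @ linear + MIX[:, 3]  # 2) 3x4 transform of r, g, b
    r, g, b = mixed ** (1.0 / GAMMA)         # 3) gamma correction
    return np.array([r - g, r + g, b])       # 4) opponent pair; blue kept apart

# For mid-grey, the r - g opponent channel cancels to ~0.
print(to_opponent(np.array([0.5, 0.5, 0.5])))
```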

Do you have / plan a paper describing butteraugli itself?

It seems like that's where most of the magic lies. Also peculiarities of human vision are one of my oddball interests, after compression of course. :)

+1 I would be very interested in reading about butteraugli, if there is anything documented.

The more bits you're willing to lose during quantization, the more zeroes the resulting bitstream will have, the better it will compress.

The more bits you lose during quantization, the more ringing and artifacts you can expect after the IDCT process.

So the tradeoff is quite literally artifacts for smaller size.

This compressor seems to be cleverer about where to lose data than libjpeg.

Are there any plans to make Guetzli available as a library for iOS and Android? Would be great to process images right on device with this level of compression.

Just tried the precompiled binary from:


and I'm getting "Invalid input JPEG file" from a lot of images unfortunately.

Sorry for that. See the following for a likely reason and workaround:


Thanks, I'll give it a try :)

Nice work. And yet google images still has horribly compressed low resolution thumbnails...

How does it compare to mozjpeg?

Does it support 12-bit jpeg?

I am not an expert, but AFAIK JPEG is a lossy format. The comparison is purely based on size, and I couldn't find anything in the paper about data loss compared to other encoders. Can someone please explain why this is a fair comparison?

We did an experiment with comparisons of Guetzli and (slightly larger) libjpeg output: https://arxiv.org/abs/1703.04416 Turns out that 75% of the 614 ratings are in favor of the Guetzli version.
