Hacker News
Jpegli: A new JPEG coding library (googleblog.com)
353 points by todsacerdoti 10 months ago | 249 comments



JPEGLI = A small JPEG

The suffix -li is used in Swiss German dialects. It forms a diminutive by appending -li to the root word, conveying the smallness of the object and a sense of intimacy or endearment.

This obviously comes out of Google Zürich.

Other notable Google projects using Swiss German:

https://github.com/google/gipfeli high-speed compression

Gipfeli = Croissant

https://github.com/google/guetzli perceptual JPEG encoder

Guetzli = Cookie

https://github.com/weggli-rs/weggli semantic search tool

Weggli = Bread roll

https://github.com/google/brotli lossless compression

Brötli = Small bread


Google Zürich also did Zopfli, a DEFLATE-compliant compressor that gets better ratios than gzip by taking longer to compress.

Apparently Zopfli = small sweet bread

https://en.wikipedia.org/wiki/Zopfli


Zopf means “braid” and it also denotes a medium-size bread type, made with some milk and glazed with yolk, shaped like a braid, traditionally eaten on Sunday.


They should do XZli next :D

And write it in Rust


All of the data-transformation (codecs, compression, etc.) libraries should be in WUFFS. That's exactly what it's for, and unlike the C++ this was written in, or indeed Rust, it's able to provide real compile-time safety guarantees for the very affordable price of a loss of generality (that is, you can't use WUFFS to write your video game, web browser, word processor, operating system or whatever).

For example, in C++ array[index] has Undefined Behaviour on a bounds miss. Rust's array[index] will panic at runtime on a bounds miss; at least we know what will happen, but what happens isn't great... WUFFS' array[index] will not compile if it could incur a bounds miss: you must show the compiler why index will always be in bounds at the point where the indexing occurs.


It appears that XZ actually is in WUFFS!

https://github.com/google/wuffs/tree/main/std/xz


Yeah, it's just a coincidence (†), but I started working on Wuffs' LZMA and XZ decoders last December. It works well enough to decode the Linux source code tarball correctly (producing the same output as /usr/bin/xz).

    $ git clone --quiet --depth=1 https://github.com/google/wuffs.git
    $ gcc -O3 wuffs/example/mzcat/mzcat.c -o my-mzcat
    $ ./my-mzcat     < linux-6.8.2.tar.xz | sha256sum 
    d53c712611ea6cb5acaf6627a84d5226692ae90ce41ee599fcc3203e7f8aa359  -
    $ /usr/bin/xz -d < linux-6.8.2.tar.xz | sha256sum 
    d53c712611ea6cb5acaf6627a84d5226692ae90ce41ee599fcc3203e7f8aa359  -
(†) Also, I'm not "Jia Tan"! You're just going to have to trust me on both of those claims. :-/


> Also, I'm not "Jia Tan"! You're just going to have to trust me on both of those claims. :-/

No need to trust – it's actually easily verified :) Your activity pattern (blue) is entirely different from Jia Tan's (orange): https://i.k8r.eu/vRRvVQ.png

(Each day is a row, each column is an hour in UTC. A pixel is filled if a user made a commit, wrote a comment, etc during that hour)
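A minimal sketch of how such a bitmap could be built from a list of UTC event timestamps (the function name and sample data here are invented for illustration):

```python
from datetime import datetime

def activity_bitmap(timestamps):
    """Day-row / hour-column grid: a cell is 1 if any activity
    (commit, comment, etc.) happened during that UTC hour."""
    days = sorted({t.date() for t in timestamps})
    row_of = {d: i for i, d in enumerate(days)}
    grid = [[0] * 24 for _ in days]
    for t in timestamps:
        grid[row_of[t.date()]][t.hour] = 1
    return grid

# Invented sample events: two on day one, one the next morning.
events = [datetime(2024, 3, 1, 9, 15), datetime(2024, 3, 1, 14, 2),
          datetime(2024, 3, 2, 9, 40)]
grid = activity_bitmap(events)
```

Rendering each row of such a grid as a line of pixels gives exactly the kind of image linked above, where sleep shows up as a consistent empty band of columns.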


So, if one person were to log in to one account for a certain time, and then switch accounts for a few hours... Hmmm :o)


Then they'd still need to sleep at some time ;)


Complete a bunch of activities over a duration of 8 hours. Have a piece of software that relays all of those activities at a slower rate over the span of the next 24 hours.


I've already written about just that over in the xz thread a few days ago: https://news.ycombinator.com/item?id=39893388

> Is it possible? Definitely. But that's extremely rare, especially if you want to keep a relatively natural pattern for the commits and replies.

> You'd basically have to have a team of devs working at really odd times and a queuing system that automatically queues all emails, github interactions, commits, etc to dispatch them at correctly distributed timestamps.

> And you'd need a source pattern to base your distribution on, which is hard to correctly model as well.

Note that while what you suggest is possible, it'd become visible if you look at issues, questions, emails, etc sent to the project author and how long it took for the author to reply to them. If you plot this reply delay by the hour of day that the message arrived, a pattern emerges.


The xz backdoor was not about safety. Nor was it really about compilation or compile time checks -- they slipped an extra object file to the linker.


You're right that Wuffs' memory-safety isn't relevant for this attack.

Still, Wuffs doesn't use autotools, and if you're pulling the library from the https://github.com/google/wuffs-mirror-release-c repository then that repo doesn't even contain any binary-data test files.


Brotli:11 gets within 0.6% of LZMA density but decodes 3-5x faster.


Yeah. But it seems to be most widely used in web browsers.

I’ve never seen a .tar.br file, but I frequently download .tar.xz files.

And therefore, a Rust implementation by Google of xz compression and decompression would be most welcome :)


> The suffix -li is used in Swiss German dialects

Seems similar to -let in English.

JPEGlet

Or -ito/-ita in Spanish.

JPEGito

(Joint Photographers Experts Grupito)

Or perhaps, if you want to go full Spanish

GEFCito

(Grupito de Expertos en Fotografía Conjunta)


https://en.wikipedia.org/wiki/List_of_diminutives_by_languag... lists many more that “could be seen as diminutives”, at least some of which were fairly recently used in forming new words (examples: disk ⇒ diskette, computer ⇒ minicomputer)


Or JPEGchen in high German


Or JPEGle in Swabian German. -le as in left, not as in Pebble


Or JPEGito in Spanish


We already said that one :D


JPEGinen or JPEGli in Finnish


Or JPEGino in Italian


Interesting, I was expecting there to be some connection to the deblocking jpeg decoder knusperli.


That would give additional savings.


Their claims about Jpegli seem to make WebP obsolete regarding lossy encoding? Similar compression estimates as WebP versus JPEG are brought up.

Hell, I question if AVIF is even worth it with Jpegli.

It's obviously "better" (higher compression) but wait! It's 1) a crappy, limited image format for anything but basic use, with obvious video-keyframe roots, 2) terribly slow to encode AND 3) slow to decode, due to not having any streaming decoders. To decode, you first need to download the entire AVIF before you can even begin decoding it, which makes it worse than even JPEG/MozJPEG in many cases despite their larger sizes. Yes, this has been benchmarked.

JPEG XL would've still been worth it though because it's just covering so much more ground than JPEG/Jpegli and it has a streaming decoder like a sensible format geared for Internet use, as well as progressive decoding support for mobile networks.

But without that one? Why not just stick with JPEG's then.


> Their claims about Jpegli seem to make WebP obsolete regarding lossy encoding? Similar compression estimates as WebP versus JPEG are brought up.

I believe Jpegli beats WebP for medium to high quality compression. I would guess that more than half of all WebP images on the net would definitely be smaller as Jpegli-encoded JPEGs of similar quality. And note that Jpegli is actually worse than MozJPEG and libjpeg-turbo at medium-low qualities. Something like libjpeg-turbo q75 is the crossover point I believe.

> Hell, I question if AVIF is even worth it with Jpegli.

According to another test [1], for large (like 10+ Mpix) photographs compressed with high quality, Jpegli wins over AVIF. But AVIF seems to win for "web size" images. Though, as for point 2 in your next paragraph, Jpegli is indeed much faster than AVIF.

> JPEG XL would've still been worth it though because it's just covering so much more ground than JPEG/Jpegli and it has a streaming decoder like a sensible format geared for Internet use, as well as progressive decoding support for mobile networks.

Indeed. At a minimum, JXL gives you another 20% size reduction just from the better entropy coding.

[1] https://cloudinary.com/blog/jpeg-xl-and-the-pareto-front


> I would guess that more than half of all WebP images on the net would definitely be smaller as Jpegli-encoded JPEGs of similar quality.

That was what I expected a long time ago, but it turns out to be a false assumption. According to Google, with data from Chrome, 80%+ of images on the web are at 1.0+ bpp.


> And note that Jpegli is actually worse than MozJPEG and libjpeg-turbo at medium-low qualities. Something like libjpeg-turbo q75 is the crossover point I believe.

May I ask how you came to this conclusion?

The Cloudinary article appears to show jpegli beating mozjpeg and turbojpeg even at the "Medium" setting (less bits per pixel).


MozJPEG not being more precise than Jpegli at mid or lower qualities also matches our experiments, both internal uncontrolled experiments and the rater study we published.

There was an earlier version which was not very good at medium or low quality, but Zoltan fixed that about six months ago.


I share a similar view. I'd even go as far as to say jpegli (and the potential with the XYB ICC profile) makes JPEG XL just not quite good enough to be worth the effort.

The good thing is that the author of XL (Jyrki) claims there is potential for 20-30% bitrate savings at the low end. So I hope the JPEG XL encoder continues to improve.


You can always use JPEG XL lossless JPEG1 recompression to get some savings in the high end quality, too — if you trust the quality decision heuristics in jpegli/guetzli/other jpeg encoder more than the JPEG XL encoder itself.

We also provide a ~7000 lines-of-code libjxl-tiny that is more similar to jpeg encoders in complexity and coding approach, and a great starting point for building a hardware encoder.


>JPEG XL lossless JPEG1 recompression

This reminded me of something. I so wish iOS 18 supported JPEG XL out of the box rather than in Safari only. I have 80GB of photos on my iPhone. The vast majority of them were sent over WhatsApp (JPEG). If iOS could simply recompress those into JPEG XL I would instantly gain 10GB+ of storage.


The Photos app supports JPEG XL as far as I’ve been able to tell.


What happens if you recompress them losslessly manually to JPEG XL?


It's not just about the compression ratio. JPEG XL improvements in generational loss are reason enough that it should be the default format for the web.


Yes, I agree and I think there is a hurdle in mucking with file formats alone because it always affects interoperability somewhere in the end. I think this also needs to be accounted for - the advantages need to outweigh this downside because it is a downside. I still kind of want JPEG XL but I'm starting to question how much of it is simply due to me being a geek that want tech as good as possible rather than a pragmatic view on this, and I didn't question this as much before Jpegli.


It can be a question when your uncle's/daughter's/etc. phone is full of photos and they ask for advice on how to make more space.

It can be a question of if the photo fits as an email attachment etc.

'Zillions' of seconds of aggregate latency are spent each day waiting for web sites to load. Back-of-the-envelope calculations can suggest that the value of reducing that waiting time could be in the hundreds of billions over the whole lifetime of the deployment. Bandwidth cost to users and energy use may also be significant factors.


> In order to quantify Jpegli's image quality improvement we enlisted the help of crowdsourcing raters to compare pairs of images from Cloudinary Image Dataset '22, encoded using three codecs: Jpegli, libjpeg-turbo and MozJPEG, at several bitrates.

Looking further [1]:

> It consists in requiring a choice between two different distortions of the same image, and computes an Elo ranking (an estimate of the probability of each method being considered higher quality by the raters) of distortions based on that. Compared to traditional Opinion Score methods, it avoids requiring test subjects to calibrate their scores.

This seems like a bad way to evaluate image quality. Humans can tend towards liking more highly saturated colours, which would be a distortion of the original image. If it was just a simple kernel that turned any image into a GIF cartoon, and then I had it rated by cartoon enthusiasts, I'm sure I could prove GIF is better than JPEG.

I think that to produce something more fair, it would need to be "Given the following raw image, which of the following two images appears to better represent the above image?" The allowed answers should be "A", "B" and "unsure".

ELO would likely be less appropriate. I would also like to see an analysis regarding which images were most influential in deciding which approach is better and why. Is it colour related, artefact related, information frequency related? I'm sure they could gain some deeper insight into why one method is favoured over the other.

[1] https://github.com/google-research/google-research/blob/mast...
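For reference, the core of an Elo-style update from pairwise choices looks roughly like this (the K-factor, starting ratings, and outcomes below are illustrative assumptions, not the study's actual parameters):

```python
def elo_update(r_winner, r_loser, k=32):
    """Standard Elo update: shift ratings by the surprise of the outcome."""
    expected_w = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    delta = k * (1 - expected_w)
    return r_winner + delta, r_loser - delta

ratings = {"jpegli": 1500.0, "mozjpeg": 1500.0}
# Each tuple: (method the rater preferred, the other method) -- invented data.
for win, lose in [("jpegli", "mozjpeg"), ("jpegli", "mozjpeg"),
                  ("mozjpeg", "jpegli")]:
    ratings[win], ratings[lose] = elo_update(ratings[win], ratings[lose])
```

The resulting rating gap maps back to an estimated probability of one method being preferred over the other, which is what the rater study reports.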


The next sentence says "The test subject is able to flip between the two distortions, and has the original image available on the side for comparison at all times.", which indicates that the subjects weren't shown only the distortions.


The change was literally just made: https://github.com/google-research/google-research/commit/4a...

It appears this was in response to Hacker News comments.


One other thing to control for is the subpixel layout of their display which is almost always forgotten in these studies.


I did think about this - but then I thought the variation in displays/monitors and people would enhance the experiment.


> Humans can tend towards liking more highly saturated colours, which would be a distortion of the original image.

android with google photos did/does this whereas apple went with enhanced contrast.

as far as i can tell, they're both wrong but one mostly notices the 'distortion' if used to the other.


This is the kind of realm I'm fascinated by: taking an existing chunk of something, respecting the established interfaces (ie. not asking everyone to support yet another format), and seeing if you can squeeze out an objectively better implementation. It's such a delicious kind of performance gain because it's pretty much a "free lunch" with a very comfortable upgrade story.


I agree, this is a very exciting direction. We shouldn’t let existing formats stifle innovation, but there is a lot of value in back porting modern techniques to existing encoders.


Looks like it’s not very competitive at low bitrates. I have a project that currently encodes images with MozJPEG at quality 60 and just tried switching it to Jpegli. When tuned to produce comparable file sizes (--distance=4.0) the Jpegli images are consistently worse.


What is your use case for degrading image quality that much? At quality level 80 the artifacts are already significant.


Thumbnails at a high pixel density. I just want them up fast. Any quality that can be squeezed out of it is a bonus.


JPEG has a fixed macroblock size (16x16 pixels), which negatively affects high resolution low bitrate images.

If you must use JPEG, I suspect you might get better visual quality by halving the resolution and upsampling on the client.

By doing so, you are effectively setting the lower and right halves of the DCT to zero (losing all high-frequency info), but you get, in effect, 32x32-pixel macroblocks, which lets you better make use of low-frequency spatial patterns.
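As a toy illustration of the serve-small, upsample-on-the-client idea (a pure-Python sketch on a list-of-lists "raster" with invented helper names; a real pipeline would use an image library and a better filter than pixel replication):

```python
def downsample_2x(img):
    """Halve width and height by averaging each 2x2 block."""
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) // 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def upsample_2x(img):
    """Double width and height by replicating each pixel (client-side step)."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]
        out.extend([list(wide), list(wide)])
    return out
```

The encoder then only ever sees the half-resolution image, so each of its 16x16 macroblocks covers a 32x32 region of the final displayed image.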


Oh, that's interesting. I typically serve thumbnails at 2x resolution and heavily compressed. Should I try to instead compress them less but serve at 0.5x resolution?


I'd say it's worth a try.


I recently noticed that all the thumbnails on my computer are PNG, which I thought was funny.


Thumbnails. I typically serve them at 2x resolution but extremely heavily compressed. Still looks good enough in browser when scaled down.


I apologize that this will seem like, well it IS frankly, more reaction than is really justified, sorry for that. But this question is an example of a thing people commonly do that I think is not good and I want to point it out once in a while when I see it:

There are infinite use-cases for everything beside one's own tiny personal experience and imagination. It's not remarkable that someone tested for the best version of something you personally don't have a use for.

Pretend they hadn't answered the question. The answer is it doesn't matter.

They stated a goal of x, and compared present-x against testing-x and found present-x was the better-x.

"Why do they want x when I only care about y?" is irrelevant.

I mean you may be idly curious and that's not illegal, but you also stated a reason for the question which makes the question not idle but a challenge (the "when I only care about y" part).

What I mean by "doesn't matter" is, whatever their use-case is, it's automatically always valid, and so it doesn't change anything, and so it doesn't matter.

Their answer happened to be something you probably agree is a valid use-case, but that's just happenstance. They don't have to have a use-case you happen to approve of or even understand.


I believe they should produce roughly the same density on a photography corpus at quality 60. Consider filing an issue if some image is worse with jpegli.


> Jpegli can be encoded with 10+ bits per component.

How are the extra bits encoded?

Is this the JPEG_R/"Ultra HDR" format, or has Google come up with yet another metadata solution? Something else altogether?

Ultra HDR: https://developer.android.com/media/platform/hdr-image-forma...


It's regular old JPEG1. I don't know the details, but it turns out that "8 bit" JPEG actually has enough precision in the format to squeeze out another 2.5 bits, as long as both the encoder and the decoder use high precision math.


Wow, this is the first time I heard about that. I wonder if Lightroom uses high precision math.


This has nothing to do with Ultra HDR. It's "simply" a better JPEG encoder.

Ultra HDR is a standard SDR JPEG + a gain map that allows the construction of an HDR version. Specifically it's an implementation of Adobe's Gain Map specification, with some extra (seemingly pointless) Google bits. Adobe Gain Map: https://helpx.adobe.com/camera-raw/using/gain-map.html


Thanks, I was on the team that did Ultra HDR at Google so I was curious if it was being used here. Didn't see anything in the code though so that makes sense.


Something I've been wondering about with the Ultra HDR format, is why did you add the Google GContainer? As far as I can tell, it doesn't do anything that the MPF part doesn't already.


I didn't actually work on the format, just happened to be on the same team. My guess would be that it's related to compatibility with the places the format need to render (android, chrome, photos) or possibly necessary to preserve edit metadata.


Ultra HDR can have two jpegs inside, one for the usual image and another for the gain-map.

Hypothetically, both jpegs can be created with jpegli.

Hypothetically, both Ultra HDR jpegs can be decoded with jpegli.

In theory jpegli would remove the 8 bit striping that would otherwise be present in Ultra HDR.

I am not aware of jpegli-based Ultra HDR implementations.

A personal preference for me would be a single Jpegli JPEG plus very fast, high-quality local tone mapping (HDR source, tone mapped to SDR). Some industry experts are excited about Ultra HDR, but I consider it likely too complicated to get right in editing software and automated image processing pipelines.


What is the point of that complexity if JPEG XL can store HDR images ?


The main idea why Ultra HDR is done like that is that the content creator (photographer) can control the local tone mapping. I think.


> High quality results. When images are compressed or decompressed through Jpegli, more precise and psychovisually effective computations are performed and images will look clearer and have fewer observable artifacts.

Does anyone have a link to any example images that illustrate this improvement? I guess the examples would need to be encoded in some other lossless image format so I can reliably view them on my computer.


You can find them in the mucped23.zip file linked here (encoded as PNG): https://github.com/google-research/google-research/tree/mast...


Thanks - I downloaded that zip file (460MB!) and extracted one of the examples into a Gist: https://gist.github.com/simonw/5a8054f18f9ea3c560b628b16b00f...

Here's an original: https://gist.githubusercontent.com/simonw/5a8054f18f9ea3c560...

And the jpegli-q95- version: https://gist.githubusercontent.com/simonw/5a8054f18f9ea3c560...

And the same thing with mozjpeg-a95 https://gist.githubusercontent.com/simonw/5a8054f18f9ea3c560...


You shouldn't compare the same quality setting across encoders as it's not standardized. You have to compare based on file size.


They're far too high quality to tell anything. There's no point comparing visually lossless images (inb4 "I am amazing and can easily tell...").


Right? I had all 3 open, quickly flipped between them, and saw no difference. Maybe I'm just uncultured.


Perhaps try quality settings in the 70 range, and comparable output file sizes. 95 will be high-quality by definition.



What are the file sizes for those two?


The zip file doesn't have the originals, just the PNGs.


Edit: I'm dumb.


I would hope the jpegs compress better than png does.


You said linked but like a fool I went looking for a zip in the repository. This is the link:

https://cloudinary.com/labs/cid22/mucped23.zip (460MB)


I can't blame you, my comment originally didn't have the word "linked", I edited that in after I realized the potential misunderstanding. Maybe you saw it before the edit. My bad.


Ha ha! No worries. I thought it had changed but I frequently skim read and miss things so I wasn't sure.


https://twitter.com/jyzg/status/1622890389068718080

Some earlier results. Perhaps these were with XYB color space, I don't remember ...


As an aside, jpeg is lossless on decode -- once encoded, all decoders will render the same pixels. Since this library produces a valid jpeg file, it should be possible to directly compare the two jpegs.


> all decoders will render the same pixels

Not true. Even just within libjpeg, there are three different IDCT implementations (jidctflt.c, jidctfst.c, jidctint.c) and they produce different pixels (it's a classic speed vs quality trade-off). It's spec-compliant to choose any of those.

A few years ago, in libjpeg-turbo, they changed the smoothing kernel used for decoding (incomplete) progressive JPEGs, from a 3x3 window to 5x5. This meant the decoder produced different pixels, but again, that's still valid:

https://github.com/libjpeg-turbo/libjpeg-turbo/commit/6d91e9...
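To make the float-vs-integer point concrete, here is a toy 1-D 8-point IDCT computed in double precision and again in crude 13-bit fixed point (an invented illustration, not libjpeg's actual, heavily factored kernels). Both are "correct", yet the final rounded pixels can legitimately differ:

```python
import math

def idct_float(coeffs):
    """Naive 1-D 8-point IDCT in double precision, rounded to integers."""
    out = []
    for x in range(8):
        s = sum((math.sqrt(0.5) if u == 0 else 1.0) * coeffs[u]
                * math.cos((2 * x + 1) * u * math.pi / 16)
                for u in range(8))
        out.append(round(s / 2))
    return out

def idct_fixed(coeffs, bits=13):
    """Same transform with basis values pre-scaled to 13-bit integers."""
    one = 1 << bits
    tbl = [[round((math.sqrt(0.5) if u == 0 else 1.0)
                  * math.cos((2 * x + 1) * u * math.pi / 16) * one)
            for u in range(8)] for x in range(8)]
    # (s >> 1) approximates s/2; adding one//2 before the final shift rounds.
    return [((sum(tbl[x][u] * coeffs[u] for u in range(8)) >> 1) + one // 2) >> bits
            for x in range(8)]
```

For typical coefficient magnitudes the two outputs agree to within one level, but that one-level wiggle is exactly the decoder-to-decoder variation being discussed.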


Moritz, the author of that improvement, implemented the same for jpegli.

I believe the standard does not specify what the intermediate progressive renderings should look like.

I developed that interpolation mechanism originally for Pik, and Moritz was able to formulate it directly in DCT space, so we don't need to go to pixels for the smoothing to happen; he computed it using a few of the low-frequency DCT coefficients.


> I believe the standard does not specify what the intermediate progressive renderings should look like.

This is possibly getting too academic, but IIUC for a progressive JPEG, e.g. encoded by cjpeg to have 10 0xDA Start Of Scan markers, it's actually legitimate to post-process the file, truncating to fewer scans (but re-appending the 0xD9 End Of Image marker). The shorter file is still a valid JPEG, and so still relevant for discussing whether all decoders will render the same pixels.

I might be wrong about validity, though. It's been a while since I've studied the JPEG spec.
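The truncation described above can be sketched as follows (a simplification: it scans for raw 0xFF 0xDA bytes, relying on byte stuffing to keep 0xFF 0xDA out of entropy-coded data, and ignores the possibility of that sequence occurring inside an APPn segment's payload; the toy input below is not a real JPEG):

```python
def truncate_scans(data: bytes, keep: int) -> bytes:
    """Keep only the first `keep` SOS (0xFF 0xDA) scans of a progressive
    JPEG byte stream, re-appending the EOI marker (0xFF 0xD9)."""
    positions = [i for i in range(len(data) - 1)
                 if data[i] == 0xFF and data[i + 1] == 0xDA]
    if len(positions) <= keep:
        return data  # nothing to cut
    return data[:positions[keep]] + b"\xff\xd9"
```

The shorter file then decodes as a lower-fidelity but still spec-valid JPEG, which is why intermediate progressive renderings matter for the "same pixels everywhere" question.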


I was not aware of that; I thought that it was pretty deterministic.

Nonetheless, for this particular case, comparing jpegs decoded into lossless formats is unnecessary -- you can simply compare the two jpegs directly based on the default renderer in your browser.


And nowadays, for subsampled images, libjpeg after classic version 6 insists on doing the chroma upscaling using the DCT where possible. So for classic 4:2:0 subsampled images (i.e. chroma resolution is half the luma resolution both horizontally and vertically), each subsampled 8x8 chroma block is now upscaled individually to 16x16 in the final image, which can and does introduce additional artefacts at the boundaries between each 16x16 px block. But the current libjpeg maintainer insists on the new algorithm because it is mathematically more beautiful…

Granted, the introduced artefacts aren't massive, but under certain circumstances they are noticeable, which is how I stumbled across that topic in the first place.

Thankfully, most software that isn't still stuck on libjpeg 6 has switched to libjpeg-turbo or some other library instead which continues using a more sensible algorithm for chroma upscaling.


It is approximately correct. The rendering is standards-compliant without being pixel-perfect, and most decoders make different compromises and render slightly different pixels.


It would help if the authors explained how exactly they used the Elo rating system to evaluate quality, since this seems like a non-standard and rather unusual use case for this. I am guessing that if an image is rated better than another that counts as a "win"?

Finally, writing "ELO" instead of "Elo" is incorrect (this is just one of my pet peeves but indulge me nevertheless). This is some guy's name not an abbreviation, nor a prog rock band from the 70's! You would not write "ELO's" rating system for the same reason you wouldn't write "DIJKSTRA's" algorithm.



When using Jpegli as a drop-in replacement for libjpeg-turbo (i.e., with the same input bitmap and quality setting), will the output produced by Jpegli be smaller, more beautiful, or both? Are the space savings the result of the Jpegli encoder being able to generate comparable or better-looking images at lower quality settings? I'd like to understand whether capitalizing on the space efficiency requires any modification to the caller code.


The output will be smaller after replacing libjpeg-turbo or mozjpeg with jpegli. You don't need to do any code changes.


I think the main benefit is a better decorrelation transform, so the compression is higher at the same quality parameter. So you could choose better accuracy at the same quality parameter, or lower the quality parameter and still get better fidelity than you would have otherwise. To get both most of the time, just use JPEG XL.


oh, probably we will get it soon in ImageOptim then https://imageoptim.com/


Thanks in advance!


They'll do literally anything rather than implementing JPEG XL over AVIF in Chrome, huh?

I mean, of course this is still valuable (JPEG-only consumers will probably be around for decades, just like MP3-only players), and I realize Google is a large company, but man, the optics on this...


I have no love for Google, at all.

It's really hard to say this in public, because people are treating it like a divisive "us or them" issue that's obvious, but the JPEG-XL stuff is _weird_.

I've been in codecs for 15 years, and have never seen behavior as unconstructive as the JPEG-XL work. If I had infinite time and money and it came across my plate, we'd have a year or two of constructive work to do, so we didn't just rush something in with obvious issues and missed opportunities.

It turned into "just figure out how to merge it in, and if you don't like it, that's malfeasance!" Bread and circus for commentators, maybe, but, actively preventing even foundational elements of a successful effort.


To be honest, at Google scale, if there's an objectively good new codec with some early signs of excitement and plausible industry traction, and even Apple managed to deploy it to virtually all of their devices (and Apple isn't exactly known as an "open codec forward" company), not integrating it does seem like either malfeasance or gross organizational dysfunction to me.


Completely hypothetical scenario: what if the technical reaction was so hostile they invested in it themselves to fix the issues and make it sustainable?

In light of the recent security incident, I'd see that completely hypothetical situation as more admirable.


Your posts here seem of the “just asking questions” variety—no substance other than being counterculture. Do you have any proof or semblance of a logical reason to think this?


It's a gentle joke, it happened, that's TFA. (ex. see the other threads re: it started from the JPEG XL repo).

I use asking questions as a way to keep contentious discussions on track without being boorish. And you're right, it can easily be smarmy instead of Socratic without tone, a la the classic internet sarcasm problem.

Gentle note: I only asked one question, and only in the post you replied to.


Hm, are you suggesting they're currently in the process of reimplementing it in a safer and/or more maintainable way as part of Chrome?

In that case, that would just be extremely bad messaging (which I also wouldn't put past Google). Why agitate half of the people on here and in other tech-affine parts of the Internet when they could have just publicly stated that they're working on it and to please have some patience?

Public support by Google, even if it's just in the form of a vague "intent to implement", would be so important for a nascent JPEG successor.


See comment on peer (TL;DR: I agree, and that's the substance of the post we're commenting on)


Whatever are you referring to? JPEG XL had already been merged into Chromium, prior to being removed again (without a proper reason ever given). As far as I know, the JPEG XL developers have offered to do whatever work was necessary for Chromium specifically, but were never taken up on the offer.

Same thing with Firefox, which has had basic support merged into Nightly, and a couple more patches gathering dust due to lack of involvement from the side of Firefox. Mozilla has since decided to take a neutral stance on JPEG XL, seemingly without doing any kind of proper evaluation. Many other programs (like GIMP, Krita, Safari, Affinity, darktable) already support JPEG XL.

People are not getting upset because projects don’t invest their resources into supporting JPEG XL. People are getting upset because Google (most notably), which has a decisive say in format interoperability, is flat out refusing to give JPEG XL a fair consideration. If they came up with a list of fair conditions JPEG XL has to meet to earn their support, people could work towards that goal, and if JPEG XL failed to meet them, people would easily come to terms with it. Instead, Google has chosen to apply double standards, present vague requirements, and refuse to elaborate. If anyone is ‘preventing even foundational elements of a successful effort’, it’s Google, or more specifically, the part that’s responsible for Chromium.


It also doesn't help that the Chrome team doesn't seem to apply the same standard to other formats - webp, avif, brotli etc. were all rushed through while providing a much more questionable benefit and having only very limited support outside the web.


I don't think that is accurate. Brotli was added first to Firefox. It took 9 months longer for Chrome. Brotli has many other uses than content encoding in the web.


> has had basic support merged

I read the parent post as saying that this is the problem, i.e. that "complete" support is a mess, because AFAIK even the reference implementation is incomplete and buggy, and that getting angry at the consumers of it is then beside the point and won't lead anywhere (which is what we see in practice).

Browsers supporting a format "a little" is almost worse than not supporting it at all, because it makes the compatibility and interoperability problems worse.


>"just figure out how to merge it in, and if you don't like it, that's malfeasance!"

It isn't that they're merely not accepting it, or staying neutral. That framing is completely untrue.

They actively push against JPEG XL, despite all the data, even prior to 1.0, suggesting it is or could be better than AVIF in many cases. To the point where they even made up false benchmarks to downplay JPEG XL.

Companies were even willing to pay (won't name them) and put resources into getting JPEG XL adopted, because they see it as so good. But Google still refused.

It is at this point that people thought something dodgy was going on. And then not only did Google not explain themselves, they became even more hostile.

So why the extra hate? Well, partly because this is the company that gave us an overpromised and underdelivered WebP.


If you do creative work countless tools just don’t support webp, AVIF or HEIF.

Running into files you can’t open in your tools is so common that I have a right-click "convert to PNG" context menu.


They don’t support it because Chromium doesn’t.

Because Chromium doesn’t support it, Electron doesn’t.

Because Electron doesn’t, Teams and other modern web apps and web sites don’t either, etc…

If Google just added JPEG XL support instead then it would be… a supported alternative to JPEG.

You’re saying working on that is a waste of time because… it’s not supported.


There's a lot more to format support than Chromium. There's a pretty strong meme out there on the Internet that webp is evil despite being supported in all browsers for years because there's still a lot of software out there that never added support and people get annoyed when an image fails to open.


I don't think it's evil, but I just don't think it's very good either.

And a graphics format better be damn good (i.e. much, not just a little bit, better than what it's hoping to replace) if it aspires to become widely supported across applications, operating systems, libraries etc.


Format quality has fuck all to do with widespread support. Widespread support comes from either widespread use (forcing companies to implement support) or from enabling new things.


At least now with Jpegli, this will surely be the nail in the coffin for WebP?

The article has 35% compression improvements over JPEG mentioned and that's at least as much as usually thrown around when discussing WebP.


And let's not forget that the advantages of WebP are already often way overstated compared to how things really match up when you don't just rely on shitty metrics to pretend that images have the same quality.


I was very careful not to overstate WebP lossless quality. I compressed the originals with zopflipng and pngcrush before stating figures (-26 %).

If I had just used the internet quality images, WebP lossless would have improved size by -42 %.

Yet another 'marketing textbook' way to overstate -42 % is to turn it into 'loading speed': 1/(1-0.42) = 72 % faster.

None of this was done for WebP lossless; the most conservative estimate was shown. I didn't do any hacking or cherry-picking to produce the number, and I had several internal approaches to verify it was correct.
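That 'marketing textbook' conversion from a size reduction into a "loading speed" claim is easy to reproduce (a quick sketch of the arithmetic in the comment, nothing more):

```python
def claimed_speedup(size_reduction):
    """Turn a fractional size reduction into the 'loading speed' figure
    marketing would quote: old_size / new_size - 1."""
    return 1.0 / (1.0 - size_reduction) - 1.0

# The -42 % size reduction above becomes a "72 % faster" claim.
print(round(claimed_speedup(0.42) * 100))  # → 72
```

The same trick turns the stated conservative -26 % into "35 % faster".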


maybe proprietary software just isn't so good?


Chromium does support WebP and AVIF, yet parent's tools don't.


Maybe because WebP and AVIF are actually not that great image formats. WebP has been critiqued as a very mediocre improvement over JPEG all the way since its introduction.

These formats are in Chromium because of Google politics, not because of their technical merit.


Talking about things that don’t pass through webtech: Photoshop, After Effects, Illustrator, Final Cut, DaVinci, 3D software, rendering engines etc.


Not a fan of any of these video-keyframe-based image formats, but if your tools don't support them by now then you really should get better tools.


Some authors of this are also the authors of JPEG XL.


I saw that. It's the tragedy of Google in a nutshell: Great things are being worked on in some departments, but organizational dysfunction virtually ensures that the majority of them will not end up in users' hands (or at least not for long).


This is work from Google research outside US. You could even call it a different company with the same name. It is Google US who made those AOM / AVIF decisions.


Why no blame on Mozilla for ignoring format as well?


Mozilla only does what Google tells them.


Why bother promoting Firefox then? If they have no will of their own, you're screwed anyway. Might as well use Chrome.


> They'll do literally anything rather than implementing JPEG XL over AVIF in Chrome, huh?

Before making that kind of claim, I would spend some time looking at the names of the folks who contributed heavily to the development of JPEG XL and the names of the folks who wrote jpegli.


By "they" I mean "Google, the organization", not "the authors of this work", who most likely have zero say in decisions concerning Chrome.


Chrome's stated position on JPEG XL advised and inspired this work.

Here: https://www.mail-archive.com/blink-dev@chromium.org/msg04351...

"can we optimize existing formats to meet any new use-cases, rather than adding support for an additional format"

It's a yes!

Of course full JPEG XL is quite a bit better still, but this helps old compatible JPEG to support HDR without 8-bit banding artefacts or gainmaps, gives a higher bit depth for other uses where more precision is valuable, and quite a bit better compression, too.


> "can we optimize existing formats to meet any new use-cases, rather than adding support for an additional format"

Only within pretty narrow limits.

Classic JPEG will never be as efficient given its age, in the same way that LAME is doing incredible things for MP3 quality, but any mediocre AAC encoder still blows it out of the water.

This is in addition to the things you've already mentioned (HDR) and other new features (support for lossless coding).

And I'd find their sentiment much easier to believe if Google/Chrome weren't hell-bent on making WebP (or more recently AVIF) a thing themselves! That's two formats essentially nobody outside of Google has ever asked for, yet they're part of Chrome and Android.


Despite the answer being yes, IMO it's pretty clear that the question is disingenuous, otherwise why did they add support for WebP and AVIF? The question applies equally to them.


> It's a yes!

Reminds me of "You Scientists Were So Preoccupied With Whether Or Not You Could, You Didn't Stop To Think If You Should."

The arithmetic coding feature was already painful enough. I'm simply not in need of yet another thing that makes jpeg files more complicated to deal with.

> After weighing the data, we’ve decided to stop Chrome’s

> JPEG XL experiment and remove the code associated with

> the experiment.

> We'll work to publish data in the next couple of weeks.

Did that ever happen?


I don't see any downsides with Jpegli. Your Linux distro admin exchanges the lib for you, you never need to think about it, only get smaller and more beautiful files. If you use commercial software (Apple, Adobe, mobile phone cameras, Microsoft, ...) hopefully they migrate by themselves.

If they don't, literally nothing happens.

I fail to see a major downside. Perhaps open up your thinking on this?

Yes, Chrome published data.


> you never need to think about it, only get smaller and more beautiful files

People said the same thing last time and it took more than 10 years until decoding worked reliably. I'm simply not interested in dealing with another JPEG++.

> Perhaps open up your thinking on this?

Nah, I'm fine. I went JXL-only for anything new I'm publishing, and if people need to switch browsers to see it – so be it.


It's not a new JPEG++. It creates old JPEGs, fully 100% compatible.

(Of course JXL is better still.)


> I went JXL-only for anything new I'm publishing, and if people need to switch browsers to see it – so be it.

This makes your website viewable only on Safari (and by extension Apple devices), right?


> These heuristics are much faster than a similar approach originally used in guetzli.

I liked guetzli but it's way too slow to use in production. Glad there is an alternative.


Is that from Google Zürich?


Yes!


When I saw the name, I knew immediately this is Jyrki's work.


I'm waiting for huaraJPEG...


what is that?


a much ruder but just as stereotypically Swiss German thing as the "-li" suffix ;)


I’ve never heard of a Jpegli bread, but Zöpfli and Brötli sure are yummy :)


I thought clarity was more important.

Otherwise it would be called ... Pumpernikkeli.


Feels like I'm quite wrong when I said (and got flagged for saying),

"Gonna cause quite the firestorm, creating something new everyone will be expected to support and maintain, after Google balked at bringing in jpegxl because they would have to support it."

I still really find the messaging here to be awful. There are tons of comments asking how this relates to JXL. @JyrkAlakuijala chimes in at https://news.ycombinator.com/item?id=39921484 that yes, it uses JXL techniques, but also that it's just using that repo because it had infrastructure which was easy to get started with (absolutely cannot argue with that).

I'm not sure what my ask is, but this felt like a really chaotic release. It's unclear how much good from JPEG XL got chopped off. I'm glad for the iteration, this just seemed really chaotic & unexpected, & NIMBY-istic.


I think technically it is its own GitHub repo, just under the libjxl user. I could be mistaken; I'm still learning git and GitHub.


I got this wrong. We actually did the implementation in the libjxl/libjxl repo. There is no physical reuse of the jpeg xl code, however.

It would have been clearer to make it its own repo.


Sorry for perhaps missing it but it states "It provides both a fully interoperable encoder and decoder complying with the original JPEG standard". Does that mean that jpegli-encoded images can be decoded by all jpeg decoders? But it will not have the same quality?


Jpegli encoded images decode just fine with any JPEG decoder, and will still be of great quality. All the tests were done with libjpeg-turbo as the decoder. Using Jpegli for decoding gives you a bit better quality and potentially higher bit depth.


Thanks, sounds great! Not sure if you are part of the research team but a follow up question nevertheless. Learning from JpegXL, what would it take to develop another widely supported image format? Would the research stage already need to be carried out as a multi-corporate effort?


> Not sure if you are part of the research team but a follow up question nevertheless.

I am not.

> what would it take to develop another widely supported image format? Would the research stage already need to be carried out as a multi-corporate effort?

I believe JXL will be very successful sooner or later, it already has a lot more support than many other attempts at new image formats.

But in general, the main way to get fast adoption on the web is to have Chromium's codec team be the main developers.


Multi-corporate effort would likely need to start by first agreeing what is image quality.

Image quality folks are more cautious and tradition-centric than codec devs, so quite an initial effort would be needed to use something as advanced and risky as butteraugli, ssimulacra or XYB. With traditional objective metrics it would be very difficult to make a competing format as they would start with a 10–15 % disadvantage.

So, I think it is not easy and would need substantial investment.


Sort of like the quality vs. speed settings on libx264, I suppose jpegli aims to push the Pareto boundary on both quality and speed without changing the decode spec


Wonder how it compares to guetzli, which is good albeit slow (by Google also!).


I believe guetzli is slightly more robust around quality 94, but jpegli is likely better or equal at lower qualities, like below 85. Jpegli is likely about 1000x faster and still good.


That's my experience, yes. Just tested it on a 750kB 1080p image with detailed areas and areas with gradients. Highly unscientific, N=1 results:

- Guetzli at q=84 (the minimum allowed by Guetzli) takes 47s and produces a 403kB image.

- Jpegli at q=84 takes 73ms (MILLIseconds) and produces a mostly-indistinguishable 418kB image. "Mostly" because:

A. it's possible to find areas with subtle color gradients where Guetzli does a better job at keeping it smooth over a large area.

B. "Little specks" show a bit more "typical JPG color-mush artifacting" around the speck with Jpegli than Guetzli, which stays remarkably close to the original

Also, compared to the usual encoder I'm used to (e.g. the one in GIMP, libjpeg maybe?), Jpegli seems to degrade pretty well going into lower qualities (q=80, q=70, q=60). Qualities lower than q=84 are not even allowed by Guetzli (unless you do a custom build).

I'm immediately switching my "smallify jpg" Nautilus script from Guetzli to Jpegli. The dog-slowness of Guetzli used to be tolerable when there was no close contender, but now it feels unjustified in comparison to the instant darn excellent result of Jpegli.
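For what it's worth, the ratios implied by the N=1 numbers above (sizes and timings copied from the comment, nothing measured here):

```python
# Figures reported above for one 1080p image (N=1, highly unscientific).
orig_kb, guetzli_kb, jpegli_kb = 750, 403, 418
guetzli_s, jpegli_s = 47.0, 0.073  # Guetzli seconds vs Jpegli seconds

print(f"guetzli size cut: {(1 - guetzli_kb / orig_kb) * 100:.0f} %")  # → 46 %
print(f"jpegli size cut:  {(1 - jpegli_kb / orig_kb) * 100:.0f} %")   # → 44 %
print(f"encode speedup:   ~{guetzli_s / jpegli_s:.0f}x")              # → ~644x
```

So on this one image the speedup is closer to ~650x than the "about 1000x" quoted upthread, with a ~4 % size penalty.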


Thank you for such a well-informed report!

With guetzli I manually added overprovisioning for slow smooth gradients. If you have an example where guetzli is better with gradients, you could post an issue with a sample image. That would help us to potentially fix it for jpegli, too.


Hey didn't realize I was answering to a contributor!

So, I started creating an issue in the repo, and as I was creating a side-by-side-by-side comparison of A=orig, B=guetzli, C=jpegli ... I realize that wait-a-minute, Jpegli is actually doing a better job at preserving the original image :D

The B/guetzli version is actually too smoothed, wiping out a couple of gradient anomalies observable in A/orig. Conversely, C/jpegli better "preserves" these imperfections by not smoothing the broader area into a gradient that is "smoother" but loses some detail.

So, not creating an issue :D. If you wish to see the image and do the A/B/C comparison yourself, it is screenshot 14 of videogame [1], direct link [2]. The area where I noticed differences in gradients is the top / top-right area with black arches and blue fog.

Thanks for Jpegli and Guetzli.

[1] https://store.steampowered.com/app/1671480/ABRISS__build_to_...

[2] https://cdn.cloudflare.steamstatic.com/steam/apps/1671480/ss...


Thank you! This is comforting to know!


I'm using quality 98, but I have only a few encode to ship, so I can live with the encode times :)


I was wondering the same thing. We have a system that guetzli's some of our heavily used assets but it takes SOOOOOO long


From the people who brought you WebP CVE-2023-41064/CVE-2023-4863...


No, these are not the WebP people. These are Google's JXL people.


I designed WebP lossless and implemented the first encoder for it. Zoltan who did most of the implementation work for jpegli wrote the first decoder.


Ok well, in that case I'll label you two as the Renaissance men of image formats


Haha. Thank you!


Has anyone compiled this to WASM? I'm currently using MozJPEG via WASM for a project and would love to test replacing it with Jpegli.


A WASM demo of this would be fantastic, would make it much easier for people to try it out.

Maybe a fork of https://squoosh.app/ ? The code for that is https://github.com/GoogleChromeLabs/squoosh


Just for my own edification, why would there be any trouble compiling portable high-level libraries to target WASM?


Maybe it uses threads.


I'd love to know what "Paradigms of Intelligence" means in this context.


I would've loved to see a side-by-side comparison... after all, we're talking visuals here, right? As the old saying goes: a hand to touch, an eye to see.

Not underestimating the value in this, but the presentation is very weak.


Is there an easy way for us to install and test it?


In Arch it's packaged in libjxl as /usr/bin/{c,d}jpegli . See https://archlinux.org/packages/extra/x86_64/libjxl/ , section "Package Contents" at the bottom.


I'm currently using it via XL Converter.

https://codepoems.eu/xl-converter/


nixpkgs unstable (i.e. 24.05) has it at ${libjxl}/bin/cjpegli


Why is it written in C++, when Google made Wuffs[1] for this exact purpose?

[1]: https://github.com/google/wuffs


This is pure speculation, but I'm presuming Wuffs is not the easiest language to use during the research phase, but more of a thing you would implement a format in once it has stabilized. And this is freshly published research.

Probably would be a good idea to get a port though, if possible, improving both safety and performance sounds like a win to me.


Yeah, you're right. It's not as easy to write Wuffs code during the research phase, since you don't just have to write the code, you also have to help the compiler prove that the code is safe, and sometimes refactor the code to make that tractable.

Wuffs doesn't support global variables, but when I'm writing my own research phase code, sometimes I like to just tweak some global state (without checking the code in) just to get some experimental data: hey, how do the numbers change if I disable the blahblah phase when the such-and-such condition (best evaluated in some other part of the code) holds?

Also, part of Wuffs' safety story is that Wuffs code cannot make any syscalls at all, which implies that it cannot allocate or free memory, or call printf. Wuffs is a language for writing libraries, not whole programs, and the library caller (not callee) is responsible for e.g. allocating pixel buffers. That also makes it harder to use during the research phase.


Wuffs is for the exact _opposite_ purpose (decoding). It can do simple encoding once you know what bits to put in the file, but a JPEG encoder contains a lot of nontrivial machinery that does not fit well into Wuffs.

(I work at Google, but have nothing to do with Jpegli or Wuffs)


Encoding is definitely in Wuffs' long term objectives (it's issue #2 and literally in its doc/roadmap.md file). It's just that decoding has been a higher priority. It's also a simpler problem. There's often only one valid decoding for any given input.

Decoding takes a compressed image file as input, which have complicated formats. Roughly speaking, encoding just takes a width x height x 4 pixel buffer, with very regular structure. It's much easier to hide something malicious in a complicated format.

Higher priority means that, when deciding whether to work on a Wuffs PNG encoder or a Wuffs JPEG decoder next, when neither existed at the time, I chose to have more decoders.

(I work at Google, and am the Wuffs author, but have nothing to do with Jpegli. Google is indeed a big company.)


Thanks for the answer!


The first paragraph in Wuffs's README explicitly states that it's good for both encoding and decoding?


No, it states that wrangling _can be_ encoding. It does not in any way state that Wuffs is actually _good_ for it at the current stage, and I do not know of any nontrivial encoder built with Wuffs, ever. (In contrast, there are 18 example decoders included with Wuffs. I assume you don't count the checksum functions as encoding.)


Google is a big company.


This is a valid question; why is it being downvoted?


Maybe because the hailing of yet another "safe" language starts to feel kinda repetitive?

Java, C#, Go, Rust, Python, modern C++ with smartpointer,...

I mean, concepts for handling files in a safe way are an awesome (and really needed) thing, but inventing a whole new programming language around a single task (even if it's just a transpiler to C)?


One of the advantages with wuffs is that it compiles to C and wuffs-the-library is distributed as C code that is easy to integrate with an existing C or C++ project without having to incorporate new toolchains.


> Maybe because the hailing for yet another "safe" language starts to feel kinda repetitive?

Ah yeah, because the endless stream of exploits and “new CVE allows for zero-click RCE, please update ASAP” doesn't feel repetitive?

> I mean a concepts for handling files in a safe way are an awesome (and really needed) thing, but inventing a whole new programming language around a single task (even if it's just a transpiler to c)?

It's a “single task” in the same way “writing compilers” is a single task. And like we're happy that LLVM IR exists, having a language dedicated to writing codecs (of which there are dozens) is a worthwhile goal, especially since they are both security critical and have stringent performance needs for which existing languages (be it managed languages or Rust) aren't good enough.


> 10+ bits. Jpegli can be encoded with 10+ bits per component. Traditional JPEG coding solutions offer only 8 bit per component dynamics causing visible banding artifacts in slow gradients. Jpegli's 10+ bits coding happens in the original 8-bit formalism and the resulting images are fully interoperable with 8-bit viewers. 10+ bit dynamics are available as an API extension and application code changes are needed to benefit from it.

So, instead of supporting JPEG XL, this is the nonsense they come up with? Lock-in over a JPEG overlay?


This is from the same team as JPEG XL, and there is no lock-in or overlay. It’s just exploiting the existing mechanics better by not doing unnecessary truncation. The new APIs are only required because the existing APIs receive and return 8-bit buffers.


Can we just get rid of lossy image compression, please? It's so unpleasant looking at pictures on social media and watching them degrade over time as they are constantly reposted. What will these pictures look like a century from now?


Please, keep lossy compression. The web is already barely usable with websites as big as they are.

What should happen: websites/applications shouldn't recompress images that already have a good pixel bitrate, and shouldn't recompress images just to add their own watermarks.


The problem here is not lossy image compression, but lossy re-compression.


It's not the social network's job to preserve image quality. That would be mixing up the concerns.


It's not the factory's job to preserve air quality. That would be mixing up concerns.

It's not the fisherman's job to preserve ecosystem quality. That would be mixing up concerns.


Awesome google. Disable zooming on mobile so I can't see the graph detail.

You guys should up your web game


Many sites do this out of a misguided notion of what web development is.

FWIW Firefox mobile lets you override zooming on a site.


> FWIW Firefox mobile lets you override zooming on a site.

What in heaven's name, and why is this not a default option? Thankee Sai!


Chrome also has an option for this and it's great


Having just searched a little for this, to turn this on you need about:config.

Apparently about:config is still not available on Firefox Mobile main release it seems.

Supposedly available on Dev/Beta/Nightly or similar - unverified statement though.

Annoying.


There is no need to go to about:config.

Firefox 3 dot menu -> Settings -> Accessibility -> Zoom on all websites


Huh. Awesome. Please accept my most humble "eat my words" thank you.


Why is this not just unconditionally enabled?


> 10+ bits. Jpegli can be encoded with 10+ bits per component.

If you are making a new image/video codec in 2024 please don't just give us 2 measly extra bits of DR. Support up to 16 bit unsigned integer and floating point options. Sheesh.


We insert/extract about 2.5 bits more info from the 8 bit jpegs, leading to about 10.5 bits of precision. There is quite some handwaving necessary here. Basically it comes down to coefficient distributions where the distributions have very high probabilities around zeros. Luckily, this is the case for all smooth noiseless gradients where banding could be otherwise observed.


Does the decoder have to be aware of it to properly display such an image?


To display it at all, no. To display it smoothly, yes.


From a purely theoretical viewpoint 10+ bits encoding will lead into slightly better results even if rendered using a traditional 8 bit decoder. One source of error has been removed from the pipeline.


Ideally, the decoder should be dithering, I suppose. (I know of zero JPEG decoders that do this in practice.)
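Illustrative only (not jpegli's actual code): a decoder with >8-bit internal precision could dither a smooth gradient down to 8 bits roughly like this, trading banding for fine grain:

```python
import random

def dither_to_8bit(samples_10bit):
    """Truncate 10-bit samples to 8 bits, adding +/-0.5 LSB of random
    noise first so slow gradients don't collapse into visible bands."""
    out = []
    for s in samples_10bit:  # values in [0, 1023]
        noisy = s / 4.0 + random.uniform(-0.5, 0.5)
        out.append(max(0, min(255, round(noisy))))
    return out

# A slow 10-bit ramp: plain truncation would emit 4-sample-wide steps,
# while dithering spreads the step edges out stochastically.
ramp = dither_to_8bit(range(1024))
```

A real decoder would use something smarter than uniform white noise (e.g. blue noise or error diffusion), but the principle is the same.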


Jpegli, of course, does this when you ask for 8 bit output.


Has there been any outreach to get a new HDR decoder for the extra bits into any software?

I might be wrong, but it seems like Apple is the primary game in town for supporting HDR. How do you intend to persuade Apple to upgrade their JPG decoder to support Jpegli?

p.s. keep up the great work!


I tried to reach to their devrel person Jen Simmons here: https://twitter.com/jyzg/status/1763141558042243470

I didn't follow up and I don't know if she read it or understood the proposal.


How does the data get encoded at 10.5 bits yet display correctly on an 8-bit decoder, while also potentially displaying even more accurately on a 10-bit decoder?


Through non-standard API extensions you can provide a 16 bit data buffer to jpegli.

The data is carefully encoded in the dct-coefficients. They are 12 bits so in some situations you can get even 12 bit precision. Quantization errors however sum up and worst case is about 7 bits. Luckily it occurs only in the most noisy environments and in smooth slopes we can get 10.5 bits or so.


8-bit JPEG actually uses 12-bit DCT coefficients, and traditional JPEG coders have lots of errors due to rounding to 8 bits quite often, while Jpegli always uses floating point internally.


It's not a new codec, it's a new encoder/decoder for JPEG.


I consider codec to mean a pair of encoder and decoder programs.

I don't consider it to necessarily mean a new data format.

One data format can be implemented by multiple codecs.

Semantics and nomenclature within our field is likely underdeveloped and the use of these terms varies.


This should have been in a H1 tag at the top of the page. Had to dig into a paragraph to find out Google wasn’t about to launch another image format supported in only a scattering of apps yet served as Image Search results.


It is. (well, h3 actually)

> Introducing Jpegli: A New JPEG Coding Library


From their Github:

> Support for 16-bit unsigned and 32-bit floating point input buffers.

"10+" means 10 bits or more.


Would not ">10" be a better way to denote that?


That means something different, but "≥10" would be better IMHO. Really there's an upper limit of 12, and 10.5 is more likely in practice: https://news.ycombinator.com/item?id=39922511


I decided to call it 10.5 bits based on rather fuzzy theoretical analysis and a small amount of practical experimentation with using jpegli for HDR, where more bits are good to have. My thinking is that in the slowest, smoothest gradients (where banding would otherwise be visible) only three quantization decisions generate error: the (0,0), (0,1) and (1,0) coefficients. The others are close to zero. I consider these as added stochastic variables that have uniform error. On average they start to behave a bit like a Gaussian distribution, but each block samples those distributions 64 times, so there are going to be some more and some less lucky pixels. If we consider that every block has one maximally unlucky corner pixel which gets all three wrong:

log(4096/3)/log(2) = 10.41

So, very handwavy analysis.

Experimentally it seems to roughly hold.
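The back-of-the-envelope number above, reproduced as code (same handwave, same assumptions):

```python
import math

# Smooth-gradient case: assume an unlucky pixel accumulates error from
# the three low-frequency quantization decisions (the (0,0), (0,1) and
# (1,0) coefficients) out of the 12-bit, 4096-level coefficient range.
effective_bits = math.log(4096 / 3) / math.log(2)
print(round(effective_bits, 2))  # → 10.42 (10.41 when truncated)
```

Which lands at the "about 10.5 bits" quoted elsewhere in the thread.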


Yeah, >=, my bad.


For pure viewing of non-HDR content 10 bits is good enough. Very few humans can tell the difference between adjacent shades among 1024 shades. Gradients look smooth.

16 bits is useful for image capture and manipulation. But then you should just use RAW/DNG.


I wonder if Google will make a JPEG library with https://github.com/google/wuffs at some point.


Google has a direct conflict of interest: the AVIF team.


Did you mean other than the JPEG decoder that is already in Wuffs?


Google sure did a shitty job of explaining the whole situation.

JpegXL was kicked out. This thing was added, but the repo and code seem to be from jxl.

I'm very confused.


It was just easiest to develop in libjxl repo. All test workers etc. are already setup there. This was done by a very small team...


[flagged]


Are you asking this just because you saw the word "compression" from google, or is there a better connection I'm missing.


It's April 3rd, confirmed


[flagged]


First, this is work by the team that co-designed JPEG XL.

Second, as far as I can tell this is a JPEG encoding library, producing JPEG files that any existing JPEG decoder will be able to read. Nobody is being asked to support or maintain anything new.


Not to mention that it's 35% more efficient than existing encoders, and can support 10+ bits per component encoding while remaining compatible with existing decoders. That's pretty amazing.


It would still be nice to see a comparison to JPEG XL.


https://giannirosato.com/blog/post/jpegli/

Note that that's from almost a year ago, don't know if anything changed.


Yeah, this is just that. They took JPEG XL's denoising and deartifacting algorithms and front-loaded them into a 10-bit JPEG encoder.

There's more to it, of course, but it's essentially just an improved encoder versus a new format.


No denoising or deartifacting was used. They are an option for future improvements at low quality JPEG decoding.


The source code is in jxl's main repo. I'm confused, to say the least.


What's so confusing about it?

This team did a lot of work to create JPEG XL. For whatever reason, Chrome is not agreeing to ship it, which will make it a lot harder for it to be widely adopted. So they're now applying the same techniques to classic JPEG, where the work they did can provide value immediately and not be subject to a pocket veto by Chrome.


The confusing part is that the code is in the jxl repository. Is it Google's code, JXL's, JXL's people under Google?


All the developers involved are part of the JXL team at Google Zürich, as far as I can tell.


Very interesting that new projects like this still use C++, not something like Rust.


When your aim is maximum adoption and compatibility with existing C++ software, C++ or C are the best choice. When you're building on an existing C++ codebase, switching language and doing a complete rewrite is very rarely sensible.



