
> unbelievably better

I don't know that this qualifies as unbelievable. This is just good marketing and spin. The image in that link is exploiting the fact that modern codecs specify upsampling filters, so the HEVC half looks smoothly varying while JPEG can, per spec, only look pixelated when blown up like that.

There's absolutely no reason that thing couldn't have shown you a jpeg image rendered with bilinear filtering, which of course is what you'll see if you put that JPEG on a texture and scale it up using the GPU.
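
To be concrete, here's what "rendered with bilinear filtering" means in practice; a minimal sketch in Python with Pillow (the library choice and the file name "racecar.jpg" are my stand-ins, not anything from the demo page):

```python
# Decode a JPEG and upscale it with bilinear filtering, roughly what a GPU
# texture sampler does when the image is blown up on screen.
# "racecar.jpg" is a hypothetical stand-in for the demo image.
from PIL import Image

img = Image.open("racecar.jpg")
smooth = img.resize((img.width * 4, img.height * 4), Image.BILINEAR)
smooth.save("racecar_bilinear.png")
```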

But it didn't, because it wanted to convince you how much "unbelievably" better HEVC is. Meh.

I mean, to be clear: HEVC is absolutely better, to the tune of almost a factor of two in byte size for the same subjective quality. Just not like this. If you've got a site where still images are a significant fraction of your bandwidth budget (and your bandwidth budget is a significant fraction of your budget budget) then this could help you. In practice... static content bandwidth is mostly a solved problem and no one cares, which is why we aren't using BPG.


> There’s absolutely no reason that thing couldn’t have shown you a jpeg image rendered with bilinear filtering, which of course is what you’ll see if you put that JPEG on a texture and scale it up using the GPU.

That’s not true; this particular JPEG will not look smooth on a GPU. The visible blocks are 8x8 pixels, not 1 pixel, and they would stay blocky because the compression has reduced each of them to only its DC component. The JPEG decode would still write large flat blocks into texture memory, and the GPU’s bilinear filter would only blur the 1-pixel borders between the 8-pixel blocks.
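
To make the DC-only point concrete, here's a small sketch (Python with numpy and scipy; the tooling is my choice, not anything from the demo): if quantization throws away every AC coefficient of an 8x8 block, the inverse DCT is a single flat value, and no texture filter can recover detail that is no longer there.

```python
# Simulate extreme JPEG quantization on one 8x8 luma block: keep only the
# DC coefficient and the decoded block comes back perfectly flat.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float)  # a detailed 8x8 block

coeffs = dctn(block, norm="ortho")  # forward DCT, as in JPEG encoding
coeffs[1:, :] = 0                   # heavy quantization: zero all AC terms,
coeffs[:, 1:] = 0                   # leaving only the DC coefficient

decoded = idctn(coeffs, norm="ortho")
print(np.ptp(decoded))  # ~0: the whole block decodes to one flat value
```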

> In practice... static content bandwidth is mostly a solved problem and no one cares, which is why we aren’t using BPG.

I don’t believe that’s true either. I don’t know what you mean about bandwidth being “solved”, but more data is more data. If everyone could reduce their static images today by 2x without thinking, I’m pretty sure they would. I would on my sites if I could. The reason that BPG (or any other format!) isn’t yet being used is lack of browser support, tooling, licensing, and consensus, not because nobody cares about bandwidth. If what you said were true, JPEG & PNG would never be replaced, but it’s already starting to happen.


The point was that, to first approximation, the internet is YouTube and Netflix. Static images are noise.

And I'm sorry, but if that JPEG is showing artifacts like that, it was simply miscompressed, likely deliberately. I repeat, there is absolutely no reason an image of the same byte count cannot look just fine on that screen. For goodness' sake, just reduce the resolution if nothing else.


The JPEG was compressed to death deliberately, yes. That's the point of the comparison; for the same (tiny) data size BPG gives you a clearly better result. BPG is workable in situations where JPEG isn't.

> For goodness sake, just reduce the resolution if nothing else.

Reducing the resolution would not work and is not the same thing. You would still get 8x8 DC blocks in a smaller JPEG at the same compression rate, and higher detail in the BPG. The BPG is provably better than the JPEG, and since you already know that and talked about it above, I'm a little confused by your argument.


I think there often is a point when reducing the bitrate further while keeping the same resolution is a worse choice than dropping the resolution. I believe one of the tricks of AV1 is to internalise this trade-off.

More generally, doing visual comparisons of codecs is full of traps. Since quality often falls off rapidly at a certain point it's possible to show your codec off by choosing the exact point to compare to maximise the difference. This is almost certainly the case if you're comparing against jpeg and it's starting to totally fall apart.

It's not that the comparison fails to show that one is worse than the other; it's that it's probably not a good visual reflection of the true scale of the difference, any more than ramping the bitrate up until everyone thinks the two are indistinguishable would be.


Sure, yes, but the comparison in question isn't the only comparison, it's just one of many. Letting people see the difference visually for themselves is more tangible and less questionable than putting up two file size numbers and claiming they're equal quality. And if you think about it, it similarly wouldn't be completely fair to BPG to turn up the quality so that the JPEG looked reasonably good, because the BPG file would be in the range of diminishing returns per byte.

It is fair to point out that JPEG falls apart below a certain compression threshold while BPG doesn't, even if it's a little bit apples to oranges. I prefer the way @enriquito said it https://news.ycombinator.com/item?id=20419900 but I disagree with the hyperbolic claims in this thread that this comparison is outrageous or complete spin or cheating. It's a data point that has some actual merit, and BTW happens to agree with the other data points we have.


That picture at those dimensions and at that file size will show those artifacts. That's what the demonstration page is trying to show.

But that's completely artificial (and in the circumstances, outrageously spun!), because no one sane would compress a JPEG to a level where the macroblocks become flat blocks. You compress it at a resolution that captures the relevant detail, then display it with a filter that upsamples linearly. Nowhere on the internet is there an image that actually looks like this, because the "problem" being "solved" here doesn't exist in practice.

I mean, it's true that IF you do that, h.265 will save you by having control over filtering. But that's not a fair comparison, at all.


I think you are missing the point of the comparison. Nobody says that JPEG images like this are common.

The point is rather that you could make a passable image of this bitrate using the better codec. It opens up a new level of compression not possible with JPEG.

If you downsampled the JPEG enough to get the same bitrate without blocking artifacts, it would look like a blurry smear, which is better than a bunch of 8x8 flat blocks but not that much better.

Maybe it would be worth including an additional JPEG downsampled-compressed-decompressed-upsampled view. You’d be able to see that jumping through those hoops doesn’t really give satisfactory results.
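
A sketch of how that extra view could be generated, assuming Python with Pillow; the scale, quality, and file names are placeholders for illustration:

```python
# Downsample, JPEG-compress, decompress, then upsample back to the original
# size: the "jumping through hoops" view proposed above.
import io
from PIL import Image

def downsampled_jpeg_view(path, scale=0.5, quality=30):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    small = img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)
    buf = io.BytesIO()
    small.save(buf, format="JPEG", quality=quality)  # compress the small copy
    print(f"encoded size: {buf.tell()} bytes")
    buf.seek(0)
    return Image.open(buf).resize((w, h), Image.BILINEAR)  # blow it back up

# downsampled_jpeg_view("demo_image.png").save("jpeg_hoops_view.jpg")
```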


> The point is rather that you could make a passable image of this bitrate using the better codec. It opens up a new level of compression not possible with JPEG.

This is simply not correct. Anyone can get a passable image with that bit size. You're trying to stipulate "image pixel size" as some kind of inviolate thing, when it's not in reality. In reality, web content authors don't do insane things with their compression settings (I mean seriously: how would you even produce this? Standard conversion tools, for obvious reasons, will never emit an image like this). This isn't a real argument. This is cheating.

> it would look like a blurry smear,

Assuming facts not in evidence. The image in h.265 is plenty smeary already, FWIW. I think we both know that if this were true, the spin in the link would have done that instead of cheating with macroblock resolution. But in fact the blurry smear looks just fine, so they had to cheat.


Here’s what this image looks like if I rescale the original to half this size, save it using Adobe Photoshop’s JPEG encoder at ~26 KB, then enlarge it back up to the original size:

https://i.imgur.com/t8GWPxX.jpg

It looks like garbage. It would still look like garbage if you resized it to some different size and used the appropriate level of JPEG compression to get a 26 KB image. JPEG just isn’t good enough to make this image workable for 1300x900 pixel output using only 26 KB.
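
For what it’s worth, “the appropriate level of JPEG compression” for a byte budget can be found mechanically. A sketch, again assuming Pillow, with the 26 KB budget from the experiment above:

```python
# Binary-search the JPEG quality setting for the largest value whose output
# still fits in the byte budget (JPEG size grows with quality).
import io
from PIL import Image

def quality_for_budget(img, budget=26_000):
    img = img.convert("RGB")
    lo, hi, best = 1, 95, None
    while lo <= hi:
        q = (lo + hi) // 2
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)
        if buf.tell() <= budget:
            best, lo = q, q + 1  # fits the budget: try higher quality
        else:
            hi = q - 1           # too big: try lower quality
    return best  # None if even quality 1 overshoots the budget
```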


> It looks like garbage

Sigh. It doesn't look any worse than the garbage in the link above. If this had been the image in the link, we all would have nodded and said "yup, looks about worth a doubling of byte count" and we'd all be in agreement.

Instead, the absurdist misencoded nonsense above gets thrown around as proof that BPG is some kind of revolutionary technology, and a bunch of us have to step in to debunk the silliness.

It's twice as efficient (roughly). Just say that and leave the spin.


> I don't know that this qualifies as unbelievable. This is just good marketing and spin. The image in that link is exploiting the fact that modern codecs specify upsampling filters, so the HEVC half looks smoothly varying while JPEG can, per spec, only look pixelated when blown up like that.

I think you're focusing on the background? I agree that they should have masked the background out, but if you look instead at the first yellow segment of the rear wing near the body, there is clearly more available detail in the HEVC side.


A few times I have looked at the possibility of switching from JPEG to another format for photo web sites, and every time I've come to the conclusion that you can't really win.

There are three benefits that one could get from reducing the file size:

1. Reduced storage cost

2. Reduced bandwidth cost

3. Better user experience

In my models, storage cost matters a lot. You can't come out ahead here, however, if you still have to keep JPEG copies of all the images.

Benefits in terms of 2 are real.
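
A back-of-envelope sketch of that model (Python; every rate and volume below is a placeholder I made up, not a number from this comment): keeping JPEG copies means storage strictly increases, so for a storage-heavy photo archive the bandwidth win can be swamped.

```python
def monthly_cost(tb_stored, tb_served, keep_jpegs=True, ratio=0.5,
                 storage_rate=23.0, bandwidth_rate=90.0):
    # Placeholder rates in $/TB-month and $/TB; 'ratio' is the new format's
    # size relative to JPEG (0.5 = half the bytes).
    storage = tb_stored * (1 + ratio if keep_jpegs else ratio)
    bandwidth = tb_served * ratio
    return storage * storage_rate + bandwidth * bandwidth_rate

baseline = 500 * 23.0 + 10 * 90.0   # JPEG only: 12,400
converted = monthly_cost(500, 10)   # dual copies: 17,700, i.e. worse
```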

Benefits in terms of 3 are hard to realize. Part of it is that adding any more parts to the system will cause problems for somebody somewhere. For instance, you can decompress a new image format with a Javascript polyfill, but is download+decompress really going to be faster for all users?

Another problem is that much of the source material is already overcompressed JPEG so simply recompressing it with another format doesn't lead to a really improved experience. When I've done my own trials, and when I've looked closely at other people's trials, I don't see a revolutionary improvement.

A scenario that I am interested in now is making desktop backgrounds from (often overcompressed) photos I find on the web. In these cases, JPEG artifacts look like hell when images are blown up, particularly when images have the sharp-cornered bokeh that you get when people take pictures with the kit lens. In that case I can accept a slow and expensive process to blow the image up and make a PNG, something like

https://www.mathworks.com/help/images/jpeg-image-deblocking-...

or

https://github.com/nagadomi/waifu2x

The other approach I imagine is some kind of maximum entropy approach that minimizes the blocking artifacts.
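
As a toy illustration of that direction (this is my own crude sketch in Python/numpy, not the maximum entropy method itself): smooth only the pixel pairs that straddle 8x8 block seams and leave block interiors alone. Serious deblockers, like the MATLAB example above or knusperli mentioned further down, additionally constrain the result to stay consistent with the quantized DCT coefficients.

```python
import numpy as np

def crude_deblock(gray):
    # Average the two pixel rows/columns that meet at each 8x8 block seam
    # of a grayscale image; everything inside a block is left untouched.
    out = gray.astype(float)
    for i in range(8, gray.shape[0], 8):   # seams between block rows
        avg = (out[i - 1] + out[i]) / 2
        out[i - 1] = avg
        out[i] = avg
    for j in range(8, gray.shape[1], 8):   # seams between block columns
        avg = (out[:, j - 1] + out[:, j]) / 2
        out[:, j - 1] = avg
        out[:, j] = avg
    return np.clip(out, 0, 255).astype(np.uint8)
```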


Fine detail is still better preserved in the HEVC version where JPEG blows it away at low Q settings. A filter can't reconstruct that information.

Probably, but that effect is subtle and not the "OMG THIS IS PIXELATED" nonsense the linked page is showing.

I'm not saying BPG isn't better, I'm saying that link is ridiculous and doesn't capture the much more modest gains that it actually offers.


Maybe using https://github.com/google/knusperli as the JPEG decoder would give a fairer comparison?

How come all of Google's size-related software ends in 'li'? Zopfli, brotli, knusperli...

They are developed in Switzerland; "-li" is a Swiss German diminutive suffix.


