Hacker News
Lid – Lo-fi image dithering (rawtext.club)
118 points by ecliptik on Jan 12, 2023 | 35 comments

As has been pointed out by others before, while dithering is stylistic and useful for display on screens with low color depth, dithered images compress poorly. You'll almost always get more detail and color out of JPEG. At 30% quality I get a 43 KiB file, and the difference is much less noticeable than if I had tried to quantize down to two colors.

Now, if the point is aesthetics and not efficiency, by all means use dithering.

In a sense, JPEG is fancy dithering. The DCT gives you a dithering pattern customized for exactly the image you're compressing, every dot placed to minimize error. (With some hand waving.)

> dithered images compress poorly

If you think about error-diffusion dithering on a 1-bit palette in particular, information theory dictates that you are already pushing entropy limits as-is. Compression is still possible, but any realistic gain is going to require slow compression.

Piping stuff like 1-bit dithered graphics through fast, lossy JPEG/MPEG-style compressors (i.e. quantized DCT) will yield absolute garbage results.
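For the curious, 1-bit error diffusion is easy to sketch. Below is a minimal Floyd-Steinberg implementation in plain NumPy (Pillow's Image.convert("1") does essentially the same thing internally); note how diffusing the quantization error conserves overall brightness even though every output pixel becomes 0 or 1:

```python
import numpy as np

def floyd_steinberg_1bit(gray):
    """Dither a float grayscale image (values in 0..1) to 1-bit,
    diffusing each pixel's quantization error to its neighbors."""
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = int(new)
            err = old - new
            # Classic Floyd-Steinberg error-diffusion weights (/16)
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

# A horizontal gradient: dithering should preserve the mean brightness.
gradient = np.tile(np.linspace(0, 1, 64), (64, 1))
dithered = floyd_steinberg_1bit(gradient)
print(dithered.mean())  # close to gradient.mean() == 0.5
```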

>> dithered images compress poorly

> Piping stuff like 1-bit dithered graphics through fast, lossy JPEG/MPEG-style compressors (i.e. quantized DCT) will yield absolute garbage results.

Of course dithered images compress poorly when using compression algorithms that aren’t designed for them, but I don’t see why, a priori, they should when the choice of compressor is free.

That current compressors for photos are better than those for dithered graphics may only be because there has been way more research in compression of photos than in that of dithered pictures.

You could, for example, try to compress a dithered image not by running a JPEG compressor designed for photographic images, but by searching for (search algorithm TBD) a low quality JPEG file that, when dithered, produces an image that is perceptually similar to the original dithered image.

For photos, I would expect that JPEG file to look better than its dithered version, but if the input is a dithered photo, I’m not sure. Even if it is, such an approach might still be useful. And JPEG likely isn’t the optimal compressed format.

But dithered images are a niche market, so I don't expect we'll ever know how well these images can be compressed. Even the demo scene probably won't work on this. They wouldn't build a generic compressor, just one that compresses the images in their current demo well.
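As a toy version of that idea, here is a brute-force stand-in for the "search algorithm TBD", sketched with Pillow and NumPy (all function names here are made up for illustration): try a range of JPEG qualities, dither each decoded result, and keep the quality whose dithered output is closest to the dithering of the original image.

```python
import io

import numpy as np
from PIL import Image

def dither_1bit(img):
    # Pillow's convert("1") applies Floyd-Steinberg dithering by default.
    return np.asarray(img.convert("1"), dtype=np.uint8)

def best_jpeg_for_dither(original, qualities=range(5, 60, 5)):
    """Brute-force stand-in for a real search: find the JPEG quality
    whose decoded-then-dithered result is closest (Hamming distance)
    to the dithering of the original image."""
    target = dither_1bit(original)
    best = None
    for q in qualities:
        buf = io.BytesIO()
        original.convert("L").save(buf, format="JPEG", quality=q)
        buf.seek(0)
        candidate = Image.open(buf)
        dist = np.count_nonzero(dither_1bit(candidate) != target)
        size = buf.getbuffer().nbytes
        # Prefer lower mismatch; break ties on smaller file size.
        if best is None or (dist, size) < (best[1], best[2]):
            best = (q, dist, size)
    return best

# A synthetic grayscale gradient as a stand-in test image.
arr = np.tile(np.linspace(0, 255, 128, dtype=np.uint8), (128, 1))
img = Image.fromarray(arr, mode="L")
q, dist, size = best_jpeg_for_dither(img)
print(f"quality={q}, mismatched pixels={dist}, bytes={size}")
```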

Could you please elaborate on the link between error diffusion and information theory? As far as I understand, error diffusion minimizes the quantization error at lower frequencies at the expense of adding more noise at higher frequencies, i.e., it seems to be optimizing only for human perception.

You're right - in pure information-theoretic terms there is nothing special happening here. It's a tradeoff, like always. But in human perceptual terms (i.e. how JPEG/MPEG are designed), you may find substantial gain in useful information per bit by applying dithering.

The useful amount of information represented by any given bit of data is much larger in this arrangement. 1 bit = 1 entire pixel. In other schemes, you have upwards of 24 bits representing the contents of 1 pixel. To human eyes, only 8 of these bits really matter. You can usually throw away 50% of the other 16 without anyone noticing.

...and when you compare the size of JPEG with AVIF[1], dithered PNG may seem even more wasteful and pointless.

[1] https://afontenot.github.io/image-formats-comparison/#abando...

FYI, the images on the website are shrunk by a small factor which messes up the dithering. They're best viewed in a new tab, or with the website zoomed to 125% on a 96dpi setting.

Yet another of many reasons in this thread for why dithering photographs for the web is stupid.

Or, perhaps one should use some CSS magic to make things appear correctly.

Yes, the scaling induces visual artifacts that pretty substantially misrepresent what each algorithm should look like. (In particular it adds vertical and horizontal bars that aren't on the actual image.) Definitely make sure you view the images at the proper zoom level.

You'd think they'd have foreseen that and just cropped their examples to be pixel perfect for their own fixed width content column.

That doesn't work either, since browsers don't respect exact pixel sizes for elements even at 100%, and at other zoom settings all bets are off.

That is why I created this project[0], recently discussed here[1]

[0] https://sheep.horse/2022/12/pixel_accurate_atkinson_ditherin...

[1] https://news.ycombinator.com/item?id=34052253

> even at 100%

Can you give more background or references for this? It hasn’t been my experience, at least on desktop.

The pixel (as CSS defines it) does not correspond to an individual pixel on screen. This is (literally) doubly true for high-DPI screens, where a single CSS pixel might contain 4 or even 9 real pixels on the display.

The browser is very good at scaling images up so that they look good but at some point it will have to interpolate from image pixels to screen pixels. This is fine for almost all images but plays havoc with images that have already gone through a dithering process.

For color images it doesn't matter so much but for my project I wanted the really harsh all-black-and-white pixels of old.
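A toy NumPy illustration of why interpolation is so destructive here: a 1-bit checkerboard is exactly what a 50%-gray dither looks like, and a 2x box downscale (roughly what a browser's filtering does) averages every 2x2 block to flat gray, erasing the pattern entirely.

```python
import numpy as np

# A 1-bit checkerboard: the dither pattern for 50% gray.
checker = np.indices((64, 64)).sum(axis=0) % 2

# Naive 2x box downscale: every 2x2 block contains two 0s and two 1s,
# so each block averages to 0.5 -- all dither structure is destroyed.
down = checker.reshape(32, 2, 32, 2).mean(axis=(1, 3))
print(np.unique(down))  # [0.5] -- flat gray, no pattern left
```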

I’m aware of what you’re describing, but at 100% zoom (and 100% DPI) my experience is that you do get a 1:1 pixel mapping (which I very much care about, and so far haven’t had trouble achieving, as a user). The above quote gave the impression that this isn’t always true, and I was asking about that.

It might work for you as a user if you set your monitor up right but you absolutely cannot put up a web page and expect the 1:1 pixel mapping to be preserved for everyone who visits your site.

Between my phone, tablet, and two laptops, I don't even have a standard DPI display device anymore, and I hardly run bleeding-edge hardware. Everything is going to be scaled.

On top of what everyone else has said, someone might simply be using a non-native resolution on their monitor.

(Does MacOS still default to non-integer scaling on MacBooks?)

Doing the lord's work, thank you.

Version two is on the way...

This is interesting if you are after retro effects. State-of-the-art dithering, however, works with blue-noise threshold textures[1], because we have plenty of memory available these days. Here is a nice comparison of all kinds of dithering algorithms that also shows the clear superiority of blue-noise dithering: [2].

1.: http://momentsingraphics.de/BlueNoise.html

2.: https://surma.dev/things/ditherpunk/
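A minimal sketch of threshold-texture (ordered) dithering in NumPy. A real blue-noise texture (e.g. generated per [1] above) would replace the threshold array; a 4x4 Bayer matrix stands in here because it is small enough to write inline:

```python
import numpy as np

def threshold_dither(gray, texture):
    """Ordered dithering: compare each pixel of a 0..1 grayscale image
    against a tiled threshold texture. Blue noise gives the nicest
    results; any threshold texture works the same way."""
    h, w = gray.shape
    ty, tx = texture.shape
    tiled = np.tile(texture, (h // ty + 1, w // tx + 1))[:h, :w]
    return (gray > tiled).astype(np.uint8)

# Normalized 4x4 Bayer matrix (thresholds in 0..1), stand-in texture.
bayer4 = (1 / 16) * np.array([[ 0,  8,  2, 10],
                              [12,  4, 14,  6],
                              [ 3, 11,  1,  9],
                              [15,  7, 13,  5]])

gradient = np.tile(np.linspace(0, 1, 64), (64, 1))
out = threshold_dither(gradient, bayer4)
print(out.mean())  # roughly preserves the gradient's mean of ~0.5
```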

A good forum post on the challenges of the "Obra Dinn" dither effect in 3D.


Didder is fantastic. I'm using it to convert images to grayscale for use with Waveshare e-ink displays. I've found no other tool that both lets you specify a custom palette (without which the resulting image looks pretty bad) and is reasonably quick.

For future reference, this works beautifully with their 9.7" display:

didder --palette '000000 111111 222222 333333 444444 555555 666666 777777 888888 999999 aaaaaa bbbbbb cccccc dddddd eeeeee ffffff' -g -i - -o - edm --serpentine FloydSteinberg

Image Alchemy [1]. Unfortunately no longer available, but one of the greatest pieces of image-conversion software I know (including most of the dithering algorithms described here, and some more). And this feature set is from at least 25-30 years ago.

I always get a bit nostalgic when I see these dithered images and remember the time when the fascination ran in the opposite direction: 'true color... wow, that would be great' ;)

[1] https://www.handmadesw.com/products/image_alchemy.htm

Recently: Pixel Accurate Atkinson Dithering for Images in HTML (32 comments) https://news.ycombinator.com/item?id=34052253

Error diffusion is by far my favorite. Works while zooming out also.

It looks amazing. Honestly I’d rather see that in most books than colored pics.

For anyone who wants to play with dithering in mac/iOS apps, these have efficient implementations in Accelerate in the planar8toplanarN conversions, eg: https://developer.apple.com/documentation/accelerate/1533024...

Error diffusion dithering should be done in linear color space to avoid the severe increase in perceived lightness of the dithered image.

See https://surma.dev/things/ditherpunk/ under "Gamma" for more info, and some example gradients that clearly demonstrate the effect.
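A quick numeric illustration of the gamma problem (pure NumPy; the transfer function is the standard sRGB one): a mid-gray sRGB value of 0.5 corresponds to only about 21% linear light, so dithering in sRGB space turns on roughly 50% of pixels where about 21% would be correct - hence the perceived lightening.

```python
import numpy as np

def srgb_to_linear(s):
    """sRGB electro-optical transfer function (values in 0..1)."""
    return np.where(s <= 0.04045, s / 12.92, ((s + 0.055) / 1.055) ** 2.4)

# A flat mid-gray sRGB image.
srgb_gray = np.full((32, 32), 0.5)
linear = srgb_to_linear(srgb_gray)

naive_fraction = srgb_gray.mean()  # white-pixel fraction if dithered in sRGB
correct_fraction = linear.mean()   # fraction if dithered in linear light
print(naive_fraction, correct_fraction)  # 0.5 vs ~0.214
```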

Related ongoing thread:

Grayscale on 1-bit LCDs (2022) - https://news.ycombinator.com/item?id=34354213 - Jan 2023 (73 comments)

Next: feed the dithered picture into Conway's game of life, then create animation of game played in reverse starting from final generation.

If you want to do dithering, there are many more variations (algorithms) of it, with very impressive results in output quality.

We need a TAOCP volume on dithering.

Beautiful! One suggestion: it would be helpful to put the original image first, so that people have context before seeing it dithered.
