
Image Dithering: Eleven Algorithms and Source Code - Ivoah
http://www.tannerhelland.com/4660/dithering-eleven-algorithms-source-code/
======
mark-r
The article doesn't put enough emphasis on the need for gamma correction. If
you dither a pure gray area of level 128, it will become approximately 50%
white and 50% black. But when you display that back on the screen, because of
gamma effects it will look closer to a level of 186!
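That perceived level can be sanity-checked in a couple of lines of Python. The 2.2 exponent here is the common power-law approximation of the sRGB curve, not the exact piecewise formula:

```python
GAMMA = 2.2  # common approximation of the sRGB transfer curve

# A 50/50 mix of black (0) and white (255) pixels averages to 0.5 in
# *linear* light; encoded back through the gamma curve, that reads as:
perceived = 255 * 0.5 ** (1 / GAMMA)
print(round(perceived))            # ~186, noticeably brighter than 128

# Gamma-correct dithering converts to linear light first: gray 128 is
# only ~22% as bright as white, so it should become ~22% white pixels.
linear_128 = (128 / 255) ** GAMMA
print(round(linear_128 * 100))     # ~22 (percent white pixels)
```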

A few years ago there was a need for 15-bpp or 16-bpp color images on phones
rather than the 24-bpp images we usually work with, and dithering was a great
way of producing them. No idea how much need there is today though.

~~~
microcolonel
24bpp is not enough for shallow gradients which don't come from a photographic
source (especially in dark images). I frequently dither 48bpp material for
24bpp displays.

I typically use Sierra Lite.
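For readers unfamiliar with it, Sierra Lite (a.k.a. Sierra-2-4A) diffuses the quantization error with a tiny kernel: 2/4 to the pixel on the right, 1/4 each to the pixels below-left and below. A minimal pure-Python sketch for a grayscale image flattened to a 1-bit output (illustrative only, not the commenter's code):

```python
def sierra_lite(pixels, width, height):
    # Error-diffusion dither to black/white with the Sierra Lite kernel:
    #          *   2/4
    #   1/4  1/4
    # pixels: flat row-major list of floats in [0, 1]; returns 0.0/1.0 values.
    buf = list(pixels)                  # working copy that accumulates error
    out = [0.0] * (width * height)
    for y in range(height):
        for x in range(width):
            i = y * width + x
            old = buf[i]
            new = 1.0 if old >= 0.5 else 0.0
            out[i] = new
            err = old - new
            if x + 1 < width:
                buf[i + 1] += err * 2 / 4
            if y + 1 < height:
                if x > 0:
                    buf[i + width - 1] += err * 1 / 4
                buf[i + width] += err * 1 / 4
            # portions falling outside the image are simply dropped
    return out
```

The small kernel is part of the appeal: only three neighbors are touched per pixel, versus four for Floyd-Steinberg.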

I've recently been experimenting with error diffusion along continuous space-
filling curves (I forget where I saw this technique first), and using color
difference functions (CIEDE2000) to gauge quantization error. These are
overkill, but it sure beats doing it the same boring way forever.

Frankly, I would like it if the CRTCs attached to modern GPUs had some ability
to apply dither (even just ordered dither) to buffers as they're scanned out,
while exposing more precision to applications.

~~~
jacobolus
Unless you are integrating many pixels (how many and with what weights depends
on the future processing, intended output, and viewing conditions for the
image) I would not expect using a fancy color difference formula for comparing
individual pixels to give a particularly useful result. Indeed, the whole
point of dithering (especially with high resolution output) is that we can
adjust the average light coming from a several-pixel-wide blob by brightening
or darkening individual pixels, and thereby make finer distinctions in light
level, since the individual pixels are not really noticed. I’m also pretty
skeptical about the space-filling curve thing. Those both sound like placebo
features to me.

~~~
microcolonel
Well, I did say _experimenting_, didn't I?

I found the thing which inspired my space filling curve exploration:
[https://www.compuphase.com/riemer.htm](https://www.compuphase.com/riemer.htm)

The Butteraugli score between the standard Lena test image and its Riemersma dither is about 0.5 better than with the standard Floyd-Steinberg dither.

Approaches like this come closer to back-propagation of errors, the lack of which is where some of the artifacts of unidirectional dither come from. In practice, any decent (full bleed) error diffusion dither works perfectly fine going between 16 and 8 bit planes, but why shouldn't I try out some things that (as you are quick to point out) may have very little value?

Heck, even gamma correction isn't strictly necessary when you're going from 16
to 8 bit sRGB.

~~~
mark-r
Yes, once you start working on a very small segment of the curve it's close
enough to linear and gamma correction becomes overkill.

~~~
jacobolus
I could see it making a difference in average color of very noisy regions,
especially brightly colored ones. I suspect you could deliberately construct
an image where the difference could be seen by human observers.

------
raphlinus
There's a really good technique for getting rid of the wormy or snake-like
textures in Floyd-Steinberg: add another term to the threshold proportional to
(some monotonic function of) the distance to the nearest preceding dot. This
tends to make the dots very nicely spaced. These ideas were, among other
places, used in Gutenprint.

A bit of description, and ancient but working GPL'ed code here:
[http://www.levien.com/artofcode/eventone/](http://www.levien.com/artofcode/eventone/)

A paper containing the basic output-dependent feedback idea:
[http://levien.com/output_dependent_feedback.pdf](http://levien.com/output_dependent_feedback.pdf)
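A heavily simplified 1-D illustration of the output-dependent-feedback idea (this is not the Eventone/Gutenprint code; the 1/distance falloff and the `strength` constant are arbitrary illustrative choices): raise the threshold while the most recent dot is still close, which discourages clumping and evens out dot spacing.

```python
def dither_1d_even(samples, strength=0.3):
    # 1-D error diffusion, but the threshold is raised when the most
    # recent dot is nearby (falling off as 1/distance), so dots repel
    # each other and come out evenly spaced.
    out = []
    err = 0.0
    last_dot = -10**9          # position of the most recent 1 ("far away")
    for i, x in enumerate(samples):
        threshold = 0.5 + strength / (i - last_dot)
        v = x + err
        dot = 1.0 if v >= threshold else 0.0
        if dot:
            last_dot = i
        err = v - dot          # diffuse the full error to the next sample
        out.append(dot)
    return out
```

Because the error term still conserves intensity, the average output level tracks the input; only the *placement* of the dots changes.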

------
dreamcompiler
Dithering trades sample resolution for sample frequency to convey the same
information. For images, the underlying sample frequency usually doesn't
change but the _apparent_ spatial sample frequency becomes lower in order to
achieve more than one effective bit per (coarser) sample.

For one-dimensional signals like audio, the underlying sample frequency is
usually increased while decreasing the sample resolution (often to 1 bit per
sample). This keeps the effective Nyquist frequency where it needs to be while
pushing noise much higher in frequency where it's very easy to remove with an
analog filter. Delta-sigma modulation is perhaps the most common method of
audio dithering (although the delta-sigma literature rarely uses the D word).

The reason images and audio usually use dithering in opposite ways is that
images are usually post-processed for lower true sample resolution (and higher
effective sample resolution) _after_ sampling, while audio is often sampled
_initially_ at much higher than the Nyquist frequency because dithering is a
planned part of the audio processing chain. But not always! Those are merely
common use cases.
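A first-order delta-sigma modulator is only a few lines. This sketch takes samples in [0, 1] and emits one bit each, so the low-frequency content of the bitstream tracks the input while the quantization noise lands at high frequencies:

```python
def delta_sigma_1bit(samples):
    # First-order delta-sigma modulation: integrate the difference
    # between the input and the fed-back 1-bit output, then quantize
    # the integrator to produce the next bit. The quantization noise
    # is pushed toward high frequencies (noise shaping).
    out = []
    integrator = 0.0
    for x in samples:                       # each x in [0.0, 1.0]
        integrator += x - (out[-1] if out else 0.0)
        out.append(1.0 if integrator >= 0.5 else 0.0)
    return out

# A constant 0.25 input becomes a bitstream whose average is 0.25:
bits = delta_sigma_1bit([0.25] * 1000)
print(sum(bits) / len(bits))               # 0.25
```

Averaging (low-pass filtering) the bitstream recovers the input level, which is exactly what the analog filter at the output of a real DAC does.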

~~~
starmole
I'm not sure I follow. To me dithering comes from quantization. I don't think
you should think of dithering in frequency space.

[https://bjango.com/images/articles/gradients/dithering-
extre...](https://bjango.com/images/articles/gradients/dithering-extreme.png)

Illustrates the same signal dithered or not. No sampling rate involved.

And please correct me if I'm wrong, but you cannot trade sample accuracy for sampling rate.

~~~
gugagore
You can trade sample accuracy for sample rate. This is how I directly
interpret ordered dithering. The following line on the wikipedia article
communicates (maybe) this:

"The size of the map selected should be equal to or larger than the ratio of
source colors to target colors. For example, when quantizing a 24bpp image to
15bpp (256 colors per channel to 32 colors per channel), the smallest map one
would choose would be 4x2, for the ratio of 8 (256:32). This allows expressing
each distinct tone of the input with different dithering patterns."

Same idea applies to 1D signals (from [https://www.wikiwand.com/en/Delta-
sigma_modulation](https://www.wikiwand.com/en/Delta-sigma_modulation) : " used
to convert high bit-count, low-frequency digital signals into lower bit-count,
higher-frequency digital signals")

same idea: [https://www.wikiwand.com/en/Class-
D_amplifier](https://www.wikiwand.com/en/Class-D_amplifier)
[https://www.wikiwand.com/en/Pulse-
density_modulation](https://www.wikiwand.com/en/Pulse-density_modulation)
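The map-size rule from the quote is easy to see in code: a 4x4 Bayer matrix gives 16 distinct thresholds, so it can express 16 intermediate tones between any two output levels. A minimal sketch:

```python
# 4x4 Bayer matrix: 16 evenly spaced threshold levels.
BAYER_4x4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def ordered_dither(pixels, width, height):
    # pixels: flat row-major floats in [0, 1]; returns 0.0/1.0 per pixel.
    out = []
    for y in range(height):
        for x in range(width):
            threshold = (BAYER_4x4[y % 4][x % 4] + 0.5) / 16.0
            out.append(1.0 if pixels[y * width + x] >= threshold else 0.0)
    return out
```

A flat 50% gray turns on exactly half the pixels in every 4x4 tile, and since each pixel's threshold depends only on (x mod 4, y mod 4), every pixel is independent, which is why ordered dithering parallelizes trivially.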

------
leni536
Also a paper about halftoning on laser printers:

[http://users.eecs.northwestern.edu/~pappas/papers/pappas_ist...](http://users.eecs.northwestern.edu/~pappas/papers/pappas_ist94.pdf)

I have never used a laser printer that implemented anything like this; the usual halftoning sucks. Once I handcrafted a file with a 600 dpi Floyd-Steinberg image (the native resolution of the printer I had), and it gave much better results, though I didn't bother calibrating the gray levels.

~~~
jgneff
I came to the same conclusion just this week. I got the best print simply by
matching the density of my black and white laser printer with the black dots I
wanted it to print.

Specifically, I used the ImageMagick operations "-dither FloydSteinberg
-monochrome" for more contrast or "-dither FloydSteinberg -remap
pattern:gray50" for more fidelity to the original image.

I was surprised that the prints were better without adjusting for gamma. If I
converted the image to a linear color space before the dither operation, the
print came out too dark. I'm guessing that the gamma in the non-linear color
space compensated for the dot gain on the printer to cancel out the effect.

~~~
leni536
> I was surprised that the prints were better without adjusting for gamma.

Ha, it was the same for me. I used imageworsener where I had to explicitly
specify -nogamma for this effect.

Another place where wrong gamma handling accidentally works is text anti-aliasing. It turns out that for small font sizes, gamma-incorrect blending produces more readable text than the gamma-correct method, but only for dark text on a light background [1]. No wonder people don't like to use light fonts on dark backgrounds (as in terminals).

[1] [https://www.freetype.org/freetype2/docs/text-rendering-
gener...](https://www.freetype.org/freetype2/docs/text-rendering-
general.html#experimental-stem-darkening-for-the-auto-hinter)

------
hex12648430
What a coincidence, I just stumbled upon this article a few hours ago while
trying to find out which error diffusion algorithm is used by Photoshop out of
curiosity.

I've been playing with dithering recently to create braille art[0] and this
series of articles[1] by the libcaca developers has been a huge help. It also
goes over model based dithering algorithms which tend to give the best
results.

[0]: Example
[https://pastebin.com/raw/cRt4GL8j](https://pastebin.com/raw/cRt4GL8j)

[1]:
[http://caca.zoy.org/study/index.html](http://caca.zoy.org/study/index.html)

------
starmole
Two modern use cases where dithering is more important than ever:

- Tone mapping HDR to display colors

- Alpha-style transparency in deferred rendering

Another case I've heard about but don't know in much detail is audio.

Dithering is important and powerful any time some precision is discarded.

~~~
dahart
Yep, Photoshop does dithering by default when converting from 16bpp (or
higher) to 8bpp. Many, many image processing apps & libraries don't do this,
and I wish they would!

This is a little different than the article though -- all the article's
dithers are good at half-toning, or 1bpp images. If your destination is 16bpp
images, or 16-bit audio, you don't need error diffusion, just a decent
sequence of random numbers.
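A sketch of that "decent sequence of random numbers" in practice: add triangular (TPDF) noise of about one output LSB before rounding, instead of truncating. TPDF is the standard choice in audio because it makes the quantization error's mean and power independent of the signal; the function name and scaling here are illustrative, not any particular library's API.

```python
import random

def to_8bit_dithered(samples16, seed=None):
    # Requantize 16-bit samples (0..65535) to 8-bit (0..255), adding
    # triangular (TPDF) noise of +/-1 output LSB before rounding so the
    # quantization error averages out instead of banding.
    rng = random.Random(seed)
    out = []
    for s in samples16:
        noise = rng.random() - rng.random()   # triangular PDF on (-1, 1)
        v = round(s / 257.0 + noise)          # 257 = 65535 / 255
        out.append(min(max(v, 0), 255))
    return out
```

Plain truncation (`s >> 8`) would instead leave error that is correlated with the signal, which is exactly what shows up as banding in images or distortion in audio.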

~~~
colejohnson66
Why does one need dithering to go from 16-bit to 8-bit? Can’t I just shift the
value to the right by 8 bits?

~~~
dahart
Just to add to what @grenoire said -- if you only shift/truncate, then there are times when you can see (or hear) the difference of a single bit. You can truncate, numerically speaking, but depending on what you're doing, the result won't be very good.

When taking an image from a higher range down to 8-bits per channel, there are
two specific cases where you tend to see bad banding problems. When doing high
quality poster prints like a giclee or photo print, the printer's color gamut
is drastically different from a monitor, so banding that you don't notice on a
monitor can suddenly become a plainly visible eye sore in a print. This is a
big problem if you're paying $100 for your large format print.

The other problem is when resizing an image down. The resampling process
smoothes out noise in an image. Normally, a DSLR photo has a signal to noise
ratio that is less than 8 bits, meaning even 16 bit RAW photos have noise that
is stronger than the lower 8 bits. You can usually truncate the lower 8 bits
without noticing. But if you resample the image and downscale it, the noise is
smoothed and banding can appear. This can happen with photos of any large flat
field of color, like a wall or the sky.

------
nayuki
Previous discussion:
[https://news.ycombinator.com/item?id=11886318](https://news.ycombinator.com/item?id=11886318)

Another good page on image dithering:
[http://bisqwit.iki.fi/story/howto/dither/jy/](http://bisqwit.iki.fi/story/howto/dither/jy/)

------
temp45782
The article mentions ordered dithering but fails to list void-and-cluster and similar variants thereof. Those parallelize really well (unlike error diffusion), don't produce obvious patterns (unlike plain ordered dithering), and can run on GPUs, which makes them quite useful for dithering high-bit-depth video to 8-bit in real time. Dithering HDR content has the benefit of not introducing banding on SDR displays.

------
sanqui
Despite the article's criticism and brief treatment of it, I personally find that ordered dithering gives the most aesthetically pleasing result.

~~~
makapuf
Ordered dithering also has a very important feature: it's stable with respect to animation. If you move a single pixel in an animation dithered with Floyd-Steinberg, the error can spread widely and have visible effects on the whole image. Ordered dithering, however, has a limited scope and is much more robust under animation.

See this image :
[http://bisqwit.iki.fi/jutut/kuvat/ordered_dither/jittest_flo...](http://bisqwit.iki.fi/jutut/kuvat/ordered_dither/jittest_floyd.gif)

now see the whole article at
[http://bisqwit.iki.fi/story/howto/dither/jy/](http://bisqwit.iki.fi/story/howto/dither/jy/)
, it's really good.

------
bluedino
Seeing a dithered image really brings me back to the mid 90's. Seeing a dithered photograph reminds me of the early web or AOL. You used to remove colors from GIFs to save space on web pages, keeping as few as you could as long as the image was still tolerable.

With a 1MB SVGA card, you could pick between 16-bit color at 800x600 or 8-bit (256 colors) at 1024x768. Did you value higher resolution, or not having to palette-shift every time you switched apps?

------
anonfunction
I was showing coworkers an example of the Floyd Steinberg algorithm today.

The following images have the same number of colors, namely black and white.

[https://i.imgur.com/stQUl5E.gif](https://i.imgur.com/stQUl5E.gif)

[https://i.imgur.com/mw8IX9N.gif](https://i.imgur.com/mw8IX9N.gif)

Source image:

[https://i.imgur.com/diR72k2.jpg](https://i.imgur.com/diR72k2.jpg)

~~~
djmips
My suspicion is that you didn't handle the gamma correctly as mentioned in
another comment.

~~~
leni536
[https://imgur.com/595mX6m](https://imgur.com/595mX6m)

Done with imageworsener[1].

    imagew -cc 2 -dither f -grayscale diR72k2.jpg diR72k2_dither.png

[1]
[http://entropymine.com/imageworsener/](http://entropymine.com/imageworsener/)

------
nullc
[http://uwspace.uwaterloo.ca/bitstream/10012/3867/1/thesis.pd...](http://uwspace.uwaterloo.ca/bitstream/10012/3867/1/thesis.pdf)
is one of the best general papers I've read on the subject.

------
afandian
These images, especially the last one, bring back a tonne of nostalgia about
early 90s Apple Macintoshes and my HP Deskjet 310.

I wonder what artifacts of the limitations of modern technology will be
remembered with nostalgia by those growing up with today's equipment.

------
wiz21c
After having seen neural networks used to transform images, could we train a neural network to dither images?

(obligatory NN comment :-))

