
Image Dithering: Eleven Algorithms and Source Code - nkron
http://www.tannerhelland.com/4660/dithering-eleven-algorithms-source-code/
======
dahart
I still use dithering daily with 24 bit color images. In fact, my test for
whether a modern image editor is serious and good is whether it supports
dithering when converting from floating point color channels down to 8 bits
per channel. Photoshop does this.

The most important reason for me is not display on the monitor, but printing
the image. I do a lot of large format printing, and printers smash parts of
your color space and make some gradient/banding problems stick out like a sore
thumb. Dithering is critical when printing!

Gradients with banding can easily show up when you resize a large image down
to a smaller size. This is a good reason that resizing images with less than
16 bits per channel should probably be done in a higher-precision format than
the image itself.

I haven't tried to use an error diffusion dither when converting 16 bits per
channel down to 8... I'm not sure how much that matters. It might make a
difference, and it sounds fun to code either way, but in my experience, a good
random number generator suffices to make color bands vanish.
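
That random-number trick can be sketched as follows. This is only a minimal illustration (not any particular editor's implementation), where 257 ≈ 65535/255 maps the 16-bit range onto 8 bits:

```python
import random

def quantize_16_to_8(v16, rng=random.Random(0)):
    """Reduce a 16-bit channel value (0-65535) to 8 bits (0-255).

    Adding one output-step's worth of uniform noise before truncating
    turns the correlated rounding error that causes banding into
    uncorrelated noise, so smooth gradients stay smooth on average.
    """
    noise = rng.random()                     # uniform in [0, 1)
    return min(int(v16 / 257 + noise), 255)  # 257 = 65535 / 255
```

Averaged over an area, the dithered output reproduces the 16-bit value, whereas plain truncation snaps whole runs of a gradient to the same 8-bit level.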

------
bajsejohannes
> It also has uses when reducing 48 or 64bpp RAW-format digital photos to
> 24bpp RGB for editing.

The author touches upon it here, but I think it's worth generalizing further:
If you have high or maybe infinite precision in your color values, dithering
will look much nicer than simply rounding to the nearest value. A concrete
example is a color gradient. If done naively with rounding, color bands will
be clearly visible. With dithering, they will be almost impossible to see.

See for example:
[http://johanneshoff.com/dithering/](http://johanneshoff.com/dithering/)

------
the8472
The article doesn't mention void-and-cluster[0][1] ordered dithering (and
modernized variants), which shares the advantage of ordered dithering that it
is highly parallelizable, e.g. via GPU shaders, but does not leave the easily
spotted patterns of classic ordered dithering.

For example, the madVR[2] video renderer uses it for realtime video dithering
to avoid banding in shallow color gradients, whether from 10bit sources or
from its debanded internal 16bit representation, when rendering to <= 8bit
displays.

[0] [http://cv.ulichney.com/papers/1993-void-cluster.pdf](http://cv.ulichney.com/papers/1993-void-cluster.pdf)

[1] [http://www.hpl.hp.com/research/isl/halftoning/publications/1995-filter-extent.pdf](http://www.hpl.hp.com/research/isl/halftoning/publications/1995-filter-extent.pdf)

[2] [http://madvr.com/](http://madvr.com/)
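
For a concrete sense of why ordered dithering parallelizes so well: every pixel is thresholded independently against a tiled matrix. Here is a minimal sketch using the classic 4x4 Bayer matrix as a stand-in; void-and-cluster swaps in a much larger blue-noise threshold matrix (generating one is the involved part), but the per-pixel step is the same:

```python
# Classic 4x4 Bayer threshold matrix, values 0..15.
BAYER4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def ordered_dither(gray, x, y):
    """Threshold an 8-bit gray value (0-255) to black or white.

    Each pixel depends only on its own value and (x, y), so the whole
    image can be processed in parallel, e.g. one GPU shader thread per
    pixel. Void-and-cluster replaces BAYER4 with a large blue-noise
    matrix, which removes the visible crosshatch pattern.
    """
    threshold = (BAYER4[y % 4][x % 4] + 0.5) * 255 / 16
    return 255 if gray > threshold else 0
```

A renderer like madVR would precompute the large matrix once and index it per pixel in a shader.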

~~~
c3534l
I find it amusing that someone would have a dedicated GPU, yet cannot fully
render color images. I mean, I'm sure there's actually plenty of use-cases,
but it's still funny.

~~~
pjc50
"Fully render" is a strange way of putting it - most consumer hardware has
only 8-bit colour depth, and _lots_ of video compression algorithms will turn
smooth gradients into blocky messes that would benefit from dithering.

If you're processing the decode at 10 or 16 bits and need to render at 8 bits,
it's _much_ better to dither than truncate.

~~~
the8472
> and lots of video compression algorithms will turn smooth gradients into
> blocky messes that would benefit from dithering.

More accurately: either you have a 10bit-per-channel video (or better), which
has smooth gradients that will need dithering when rendering to an 8bit-per-
channel display.

Or you have 8bit video (more common) which will have to be passed through a
debanding filter first. But then how do you get the debanded result onto the
screen without reintroducing the banding? By dithering it.

------
harryf
A while back I got curious about the approach used by apps like Manga Camera (
[https://play.google.com/store/apps/details?id=jp.co.supersoftware.mangacamera](https://play.google.com/store/apps/details?id=jp.co.supersoftware.mangacamera)
) to turn photos into Manga-style "drawings".

Turns out there's a paper on it, "MANGAWALL: GENERATING MANGA PAGES FOR REAL-
TIME APPLICATIONS" (
[https://www.semanticscholar.org/paper/MangaWall-Generating-manga-pages-for-real-time-Wu-Aizawa/c027ee8729d44f3a08b5e7361689513123954ac6/pdf](https://www.semanticscholar.org/paper/MangaWall-Generating-manga-pages-for-real-time-Wu-Aizawa/c027ee8729d44f3a08b5e7361689513123954ac6/pdf)
) and an implementation,
[https://github.com/zippon/MangaWall](https://github.com/zippon/MangaWall) -
that implementation uses ordered dithering (
[https://github.com/zippon/MangaWall/blob/master/src/MangaEngine.cc#L206](https://github.com/zippon/MangaWall/blob/master/src/MangaEngine.cc#L206)
) among other things to help produce a pencil-drawn-like effect.

Anyway just saying ;) To me at least, pretty fascinating ...

------
akavel
For an interesting case of modern artistic use of dithering, see the
discussion related to development of indie game "Return of the Obra Dinn" [1]
(by Lucas Pope, author of "Papers, please"), where a participant contributed a
dithering scheme he invented [2] for a specific purpose of making faces of in-
game characters better looking and easier to recognize.

[1] [https://forums.tigsource.com/index.php?topic=40832.msg1217196#msg1217196](https://forums.tigsource.com/index.php?topic=40832.msg1217196#msg1217196)

[2] [https://forums.tigsource.com/index.php?topic=40832.msg1212805#msg1212805](https://forums.tigsource.com/index.php?topic=40832.msg1212805#msg1212805)

(Just in case, and for easier browsing, I took the liberty of uploading a copy
to github:
[https://github.com/akavel/WernessDithering](https://github.com/akavel/WernessDithering),
although I'm not clear on what the license of the code is, unfortunately; that
said, I hope the numbers in the matrix are not patented.)

( _edit:_ lol, didn't notice there's another thread on Obra Dinn already on
HN, now I'm surprised! :)

------
1wd
A fun (if not terribly effective) algorithm that's missing here is dithering
along a Hilbert curve.

[http://caca.zoy.org/study/part3.html](http://caca.zoy.org/study/part3.html)

~~~
derefr
Thanks, this is just what I was wondering about after reading the original
article.

------
richard_todd
I don't know if it's some kind of nostalgia factor (since a lot of old
16-color EGA software used it, I think), but I find ordered dithering output
strangely appealing. I have also used it sometimes where size mattered more
than style on the assumption that GIF/PNG compression rates would be better on
ordered dithers. ImageMagick can do a lot of dithering styles and it's fun to
play with.

------
nullc
This thesis on image dithering and noise shaping is one of the best works I've
read on the subject:
[http://uwspace.uwaterloo.ca/bitstream/10012/3867/1/thesis.pdf](http://uwspace.uwaterloo.ca/bitstream/10012/3867/1/thesis.pdf)

~~~
saquibhafiz
Professor Vanderkooy is a great source for dithering and has some wonderful
work on it too (p.s. fellow UW student)

------
RiscyAcorn
Last year I tried painting some Floyd-Steinberg dithered pixels...
[https://flashasm.wordpress.com/2015/11/04/more-incredibly-slow-rendering-algorithms-cupcakes-pixel-art/](https://flashasm.wordpress.com/2015/11/04/more-incredibly-slow-rendering-algorithms-cupcakes-pixel-art/)
(16 colours) and
[https://flashasm.wordpress.com/2016/05/18/81920-pixels-of-64-colour-cupcake-goodness/](https://flashasm.wordpress.com/2016/05/18/81920-pixels-of-64-colour-cupcake-goodness/)
(64 colours)

------
1wd
Another fun fact I always think of when dithering comes up is how Tim
Sweeney's software renderer for the original Unreal game used dithering
instead of bilinear interpolation for texture mapping. This was very
impressive at the time.

[http://www.flipcode.com/archives/Texturing_As_In_Unreal.shtml](http://www.flipcode.com/archives/Texturing_As_In_Unreal.shtml)

------
huuu
It's kind of funny that those images look nice on high-res screens.

Some time ago I experimented with 16-32 color images on websites. Thanks to
high-res screens they look great while saving a lot of data.

------
IvanK_net
A few years ago, I created real-time dithering of video in JavaScript:
[http://blog.ivank.net/floyd-steinberg-dithering-in-javascript.html](http://blog.ivank.net/floyd-steinberg-dithering-in-javascript.html)

------
rasz_pl
Neat facts:

-3DFX Voodoo (all models, in 16bit color depth) let you enable a hardware dithering block (2x2/4x4 ordered dither, zero performance penalty). They did it to save framebuffer space (24bit textures, 16bit framebuffer). It made 3dfx graphics look significantly better than nvidia/ati at 16bit depth. Earlier cards used a 4x1 filter on the output; Banshee and later models gained a 2x2 filter providing "22 bit like quality", as 3dfx called it.

-Back in ~1994 some HP Unix workstations used dithering to produce 'near 24bit color' out of an 8bit framebuffer: [https://en.wikipedia.org/wiki/HP_Color_recovery](https://en.wikipedia.org/wiki/HP_Color_recovery)

-All crappy (TN) LCD panels (a staple of garbage-bin supermarket 1366x768 laptops, and of older 'gaming' fullhd ones) use FRC, which is a form of temporal dithering: [https://en.wikipedia.org/wiki/Frame_rate_control](https://en.wikipedia.org/wiki/Frame_rate_control)

One legitimate use of dithering was in the days of CGA/EGA/other fixed palette
hardware. You can relive it here 'Joel Yliluoma's arbitrary-palette positional
dithering algorithm':
[http://bisqwit.iki.fi/story/howto/dither/jy/](http://bisqwit.iki.fi/story/howto/dither/jy/)

------
semi-extrinsic
This article is really nice. I used it as my starting point when I once won a
code golf competition on image quality in dithering. Using Fortran.

The algorithm is based on Sierra Lite, but I added a random element to the
direction in which the error is propagated. This removes essentially all
dithering artifacts.

[http://codegolf.stackexchange.com/questions/26554/dither-a-grayscale-image](http://codegolf.stackexchange.com/questions/26554/dither-a-grayscale-image)
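
A sketch of that idea follows. This is my interpretation of the randomization (the golfed Fortran may randomize differently): Sierra Lite diffuses 2/4 of the quantization error to the next pixel in scan order and 1/4 to each of two neighbours on the row below, and here each row's scan direction is chosen at random.

```python
import random

def sierra_lite_random(img, w, h, rng=None):
    """Dither an 8-bit grayscale image (flat row-major list) to 0/255.

    Sierra Lite: 2/4 of the quantization error goes to the next pixel
    in scan order, 1/4 each to the pixel below and the pixel behind on
    the row below. Randomizing each row's scan direction breaks up the
    directional "worm" artifacts of a fixed left-to-right scan.
    """
    rng = rng or random.Random(0)
    px = [float(v) for v in img]
    for y in range(h):
        ltr = rng.random() < 0.5  # pick this row's direction
        step = 1 if ltr else -1
        xs = range(w) if ltr else range(w - 1, -1, -1)
        for x in xs:
            old = px[y * w + x]
            new = 255.0 if old >= 128 else 0.0
            px[y * w + x] = new
            err = old - new
            if 0 <= x + step < w:
                px[y * w + x + step] += err * 2 / 4
            if y + 1 < h:
                px[(y + 1) * w + x] += err / 4
                if 0 <= x - step < w:
                    px[(y + 1) * w + x - step] += err / 4
    return [int(v) for v in px]
```

On a uniform mid-gray input this produces roughly half black and half white pixels, with the error bookkeeping keeping the local average close to the source.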

------
joosters
Has anyone tried using the equivalent of animated GIFs to help with dithering?
If you have a limited palette of colours, perhaps you could produce two
dithered versions, with pixels sometimes having different colours in alternate
frames. If the image is refreshed fast enough, two colours could blend into a
third.

The most extreme example could be 'dithering' a grey square into 1) a black
square and 2) a white square, and when they are rapidly switching between the
two it might appear as a grey instead.

I guess that in practice, the refresh rates of monitors are too low to make it
seem anything other than a terrible flickering image, but on old CRTs the
effect might work a little better. It's also memory and CPU intensive, but I'd
still be curious to see if it could be used successfully, and if it improved
the quality compared to 'just' a single dithered image.
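
Per pixel, the blending idea amounts to choosing the pair of palette entries whose temporal average is closest to the target level; a toy sketch:

```python
def best_pair(target, palette):
    """Choose the palette pair whose frame-alternating average best
    approximates the target level: showing the two values on alternate
    frames blends them perceptually if the refresh is fast enough."""
    return min(((a, b) for a in palette for b in palette),
               key=lambda p: abs((p[0] + p[1]) / 2 - target))
```

With palette [0, 255], a target of 128 picks black and white, the extreme grey-square case above; a finer palette lets the two frames stay close in value, which is exactly why the flicker becomes less objectionable with more colours.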

~~~
zero_iq
This was especially common on 8-bit and 16-bit computers to fake 'high colour'
displays on machines that had only limited colour capability. They often took
advantage of the fact that people were using their machines with TVs with
relatively slow-changing phosphor screens, so the flicker was less evident than
on a modern monitor.

e.g. Photochrome on the Atari ST was especially impressive at the time:
[https://www.youtube.com/watch?v=vPsY4P8bnVw](https://www.youtube.com/watch?v=vPsY4P8bnVw)

The most extreme version I've seen of this was on the ZX Spectrum, which not
only had a very limited 15 colour palette, but was also limited to 2 colours
within each 8x8 block of the screen. Some bright spark came up with the idea
of flipping rapidly between R, G, and B frames to give (limited) per-pixel
RGB. Unfortunately it did flicker quite badly, because of the extreme changes
in colour levels (only two levels per channel) and the fact that it required 3
whole frames to make a single virtual colour frame.

Example here (not suitable if you have photosensitive epilepsy!):
[https://en.wikipedia.org/wiki/File:Parrot_rgb3.gif](https://en.wikipedia.org/wiki/File:Parrot_rgb3.gif)

~~~
joosters
Wow, that example parrot image is pretty impressive, given that it is just
three colours, and I'm viewing it on an LCD!

I never knew anyone had tried that before on a Spectrum. At first I thought
you were just talking about the other trick, getting more than 2 colours per
8x8 by changing the palette as the raster scanned down the screen. The multi-
colour parrot is way more adventurous!

I'd love to see the parrot image on an old CRT to get a feeling for what the
effect might look like with the phosphor afterglow. Leaving the Spectrum
behind and using a bigger palette range, like on the ST, the effect seems much
less epileptic-fit-inducing, because you can pick closer colours to switch
between.

------
chhabrakadabra
This was a great read. Very approachable.

------
dividuum
There is another interesting application for dithering that I've read about in
the recent Uncharted 4 Brain Dump (Ctrl-F for "Dithering") here
[https://redd.it/4itbxq](https://redd.it/4itbxq): Use dithering instead of
alpha blending to fade out close objects. Alpha blending can be quite
expensive while dithering just omits pixels. The result looks like this:
[http://allenchou.net/wp-content/uploads/2016/05/dithering-1-1024x576.png](http://allenchou.net/wp-content/uploads/2016/05/dithering-1-1024x576.png)
(best visible at the top left corner).

~~~
rasz_pl
This is ugly as heck; the Sega Saturn did the same:

[http://www.mattgreer.org/articles/sega-saturn-and-transparency/](http://www.mattgreer.org/articles/sega-saturn-and-transparency/)

------
seanwilson
Great and easy to follow article!

> For simplicity of computation, all standard dithering formulas push the
> error forward, never backward. If you loop through an image one pixel at a
> time, starting at the top-left and moving right, you never want to push
> errors backward (e.g. left and/or up).

Would the image look a lot different if you dithered it backwards from the
bottom right pixel?

Are there dithering algorithms that consider the error in all directions
instead of pushing the errors forward only?

~~~
seanwilson
Answering my own question; "3.3. Changing image parsing direction":
[http://caca.zoy.org/study/part3.html](http://caca.zoy.org/study/part3.html)

The answer seems to be that changing the image parsing direction gets rid of
some artifacts but introduces others while not vastly improving on faster and
simpler approaches.

------
rabidsnail
Also, if you don't know what your target palette is, you can do k-means
clustering over the colorspace, and the palette is the cluster centers (each
point you give to k-means is an <r,g,b> or <h,s,v> vector).
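
A minimal pure-Python sketch of that palette step (plain Lloyd-style k-means with random seeding; production quantizers typically use smarter initialization such as k-means++):

```python
import random

def kmeans_palette(pixels, k, iters=10, rng=None):
    """Derive a k-colour palette from a list of (r, g, b) tuples.

    The cluster centres become the palette entries; each source pixel
    would then be mapped (and dithered) to its nearest centre.
    """
    rng = rng or random.Random(0)
    centres = rng.sample(pixels, k)  # seed with k random pixels
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            # assign each pixel to the nearest centre (squared distance)
            j = min(range(k), key=lambda j: sum(
                (a - b) ** 2 for a, b in zip(p, centres[j])))
            clusters[j].append(p)
        for j, c in enumerate(clusters):
            if c:  # keep the old centre if its cluster went empty
                centres[j] = tuple(sum(ch) / len(c) for ch in zip(*c))
    return centres
```

As the reply below notes, swapping the mean for a median (k-medians) or an actual palette member (k-medoids) is the same loop with a different centre update.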

~~~
tixzdk
K-medians or k-medoids yield more pleasing results in my experience.

------
mynegation
Back when I was at university, we had a computer graphics course. Each week we
got a new assignment that we needed to program, write to a floppy disk (this
was before the Internet became widely available) and give to the prof the next
week.

Dithering was one of the assignments. We were required to implement black-and-
white quantization and then Atkinson and Floyd-Steinberg. We were given the
freedom to choose our own images.

During development at the dorm, my favourite picture to debug on was pretty
racy (think along the lines of the full version of "Lena"). I totally did not
intend to put it on the floppy disk...

Not only did I get a 10 (the highest number of points for this assignment), I
got +2 on top of that with a comment from the prof: "for the choice of test
images in the best tradition of the field".

~~~
startling
Gross.

~~~
mynegation
I understand your sentiment. I did not mean to glorify objectification of
women with this story. Just something mildly funny that happened almost two
decades ago, in a country without a strong tradition of feminism, to a 17-year
old me who did not know any better then.

~~~
xiphias
How can a virtual object (a photo of a woman seen on a monitor) be
objectified? The model meant that photo to be used as an object (looked at).
If she wanted people to understand her personality more, she would have just
written a book.

~~~
startling
These are the kinds of things, specifically, that gross me out about this
story:

(from
[http://geekfeminism.wikia.com/wiki/Sexualized_environment](http://geekfeminism.wikia.com/wiki/Sexualized_environment))

* In geek contexts, they are usually a way for heterosexual men to bond over their common attraction to women. This is othering for anyone who is not a heterosexual man, including, obviously, women, and contributes to their invisibility in the field. This sensation of exclusion is very visceral when in a small minority, as women can be in geek settings.

* There is a long tradition of sexual images, suggestions and approaches being used to shame, scare, harrass or brutalise women. This is common enough that most women will have had personal experience of it. Therefore many women are unable to sensibly assume good faith on the part of unknown men seeking to make a situation sexual and feel mentally uncomfortable at best and physically intimidated often.

* While in many areas this restriction is loosening, women are stigmatised as well as celebrated for being too sexual. This traps women into a double bind when responding to sexualized environments, because even by getting the joke they may reveal themselves as too sexual.

Keep in mind the commenter received _extra points_ for this.

~~~
spiderfarmer

      This is othering for anyone who is not a heterosexual man
    

I disagree. Women look at other women as well. It's very natural to feel
excluded when groups of people bond over something you don't get excited
about, whatever the subject. If, for example, people can contain their
excitement about Magic cards when I'm around, I'm fine.

    
    
      There is a long tradition of sexual images, suggestions and approaches being used to shame, scare, harrass or brutalise women. 
    

Yeah, so a picture of a professional model is something completely different.
You are dragging this point into the discussion but it doesn't add anything.

    
    
      ...they may reveal themselves as too sexual.
    

Just bring it back to basics. Men have dicks they want to use, women want to
be admired. If people could just respect each other while taking these basic
needs into account, everyone would be fine.

\--

I am happy that people here in the Netherlands are so much more tolerant when
it comes down to things like sexuality, public intimacy and nudity. It seems
to me that in the US people get offended very easily and want to limit another
person's freedom of expression just so that they can be offended a little bit
less.

~~~
rabidsnail
Forget abstractions for a second; there are people whom this makes
uncomfortable (we know because they said so), and the cost of that is much
higher than the very small benefit of using one image over another as a test
image. Simple cost/benefit analysis says to use something else.

~~~
EdHominem
Obviously in certain circumstances only - few of us here would censor Charlie
Hebdo.

We only care about discomfort that we think is reasonable, or more accurately,
not a lie designed to control the actions of others.

In this case, most people agree that sexualizing the workplace is a strong
negative and are in agreement with you. But as a general rule, you'd have much
less support.

~~~
rabidsnail
General rules aren't good for very much other than arguing with people on
hacker news.

------
ja27
I sense a disturbance in the force. It's as if 1,000 BitCam clones were just
born.

