Ditherpunk – The article I wish I had about monochrome image dithering (2021) (surma.dev)
255 points by samwillis on May 22, 2023 | 37 comments



Nice historical nugget from wikipedia on dithering:

> Etymology

> …[O]ne of the earliest [applications] of dither came in World War II. Airplane bombers used mechanical computers to perform navigation and bomb trajectory calculations. Curiously, these computers (boxes filled with hundreds of gears and cogs) performed more accurately when flying on board the aircraft, and less well on ground. Engineers realized that the vibration from the aircraft reduced the error from sticky moving parts. Instead of moving in short jerks, they moved more continuously. Small vibrating motors were built into the computers, and their vibration was called dither from the Middle English verb "didderen," meaning "to tremble." Today, when you tap a mechanical meter to increase its accuracy, you are applying dither, and modern dictionaries define dither as a highly nervous, confused, or agitated state. In minute quantities, dither successfully makes a digitization system a little more analog in the good sense of the word.


"Dithering Heights" was the name of the ColorSync team's lab at Apple in the 1990's.

Conference rooms and lab rooms often had amusing names — as I am sure is true at other companies besides Apple. "Rock" and "Hard Place" were two neighboring conference rooms.

Another floor as I recall had conference rooms named with oxymorons, "Military Intelligence", "Jumbo Shrimp" and — was it "Lawrence Expressway"? — something funny to Bay Area residents anyway.


On Lawrence Expressway, the carpool lanes are the rightmost lanes. It makes about as much sense as it sounds like.


Dithering is also used in signal acquisition: adding some (known) noise before the sensor can help the signal reach the sensor's threshold. It can also help with quantization noise. Of course, that degrades the signal/noise ratio, but the SNR can be improved later by filtering out the spectral density of the noise, averaging multiple captures, making use of CRC, etc.
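For intuition, here is a minimal Python sketch (the function names are mine, purely illustrative): a 1-bit quantizer never registers a constant sub-threshold signal, but adding known uniform noise and averaging many readings recovers the true level.

```python
import random

def quantize(x, threshold=0.5):
    """1-bit quantizer: the reading is 1 only when the input clears the threshold."""
    return 1.0 if x >= threshold else 0.0

def measure(signal, n=100_000, dither=False, seed=42):
    """Average n quantized readings of a constant signal, optionally
    adding known uniform noise (dither) before the quantizer."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        noise = rng.random() - 0.5 if dither else 0.0  # uniform in [-0.5, 0.5)
        total += quantize(signal + noise)
    return total / n

# A constant 0.3 never clears the 0.5 threshold on its own...
print(measure(0.3))                # 0.0
# ...but dithered and averaged, the reading recovers the true level.
print(measure(0.3, dither=True))   # ~0.3
```

The noise has to be larger than the distance to the threshold for this to work, which is exactly the SNR trade-off mentioned above.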


I don't buy that etymology, even in the expanded context. It seems like a Dutch pronunciation of titter/teeter, and an amalgamation of a few words that became really popular and "cool".

I might dig out my Norse etymology dictionary and a magnifying glass and try to trace it back further.

-- From Etymonline:

Teeter:

1843, "to seesaw," alteration of Middle English titter "move unsteadily," probably from a Scandinavian source akin to Old Norse titra "to shake, shiver, totter, tremble," from Proto-Germanic *ti-tra- (source also of German zittern "to tremble"). Meaning "move unsteadily, be on the edge of imbalance" is from 1844.

[...]Noun teeter-totter "see-saw" is attested from 1871 (earlier simply teeter, 1855, and titter-totter in same sense is from *1520s*). Totter (n.) "board swing" is recorded from late 14c.;


Dutch people are notorious for having issues with the th sound, which does not occur in the Dutch language and requires practice. They also don't "soften" leading consonants; rather, they "harden" trailing consonants, like zand pronounced zant. So I don't see why your explanation would be likely.

Source: am Dutch, would never pronounce titter as dither even with a bad accent. More likely the other way around actually.


"Dither" has been a word for "to move uncertainly" in Scots for several hundred years at least, so I think you might be wrong there.


I think the "Hacker News readers" and "People who like dithering" Venn diagram has converged slowly over time. I'm a happy resident of the overlapping zone.

A classic visual explainer from the team behind Myst and Riven (who had to balance image quality and disk space) is archived here: http://web.archive.org/web/20230415173939/http://cho.cyan.co...

Obligatory link to Obra Dinn's fascinating dev log post, regarding the challenge of spatial and temporal coherence when dithering realtime 3D (for aesthetic reasons, i.e. deliberately noticeable to the player): https://forums.tigsource.com/index.php?topic=40832.msg136374...

Lately I've been attempting to add the original dithering from Excel 97's easter egg to my online reproduction. In the era of indexed-color graphics, developers had to dither efficiently to reduce banding. Compare these two rendering techniques of the same subject, one with 16 shades of gray and dithering, and the other with 256 shades of gray:

https://rezmason.github.io/excel_97_egg/?shadingOnly=true&l=...

https://rezmason.github.io/excel_97_egg/?shadingOnly=true&l=...


Definitely check out the Obra Dinn dev log post, it shows gifs of the different approaches Lucas tried, and highlights how difficult it is to make dithering in 3d appear at all decent while the camera is moving.


If you're reading this from the future, I've changed the URL params in my project. Up to date ones should be in the about page:

https://rezmason.github.io/excel_97_egg/about.html


If anyone is interested in playing around with dithering, I built a tool for it.

https://doodad.dev/dither-me-this/

I actually wish I had Surma's article before I made it — would have given me the knowledge to add blue noise and Riemersma as options.


Shameless plug, but after reading this blog post the last time it was posted, I tried to implement some of the same dithering methods in Futhark[0]. You can see the results here, if you're interested: https://munksgaard.github.io/bluenoise/

By the way, that is a literate Futhark program, so the markdown is generated from the source file found here: https://github.com/Munksgaard/bluenoise/blob/master/bluenois...

[0]: https://futhark-lang.org/


If you're looking for a dither to use in applications or games, the ones offered in this Oculus blog post (meant for use in realtime to improve image quality in VR) may fit the bill: https://developer.oculus.com/blog/tech-note-shader-snippets-...

By varying the dither pattern each frame, they are basically doing the equivalent of the hardware FRC in monitors to give you an extra fake 9th bit (for 8-bit rendering) of precision. As long as the framerate is high enough, the dithering becomes nearly invisible — not that it's super visible to begin with. I'm personally using Dither17.
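This isn't the Oculus snippet itself, but the temporal half of the trick can be sketched in a few lines of Python (my own toy illustration; real FRC also varies the pattern spatially, not just per frame):

```python
def temporal_dither(value9, frame):
    """Display a 9-bit value (0-511) on an 8-bit panel by alternating
    between the two nearest 8-bit codes on successive frames.

    Averaged over time, the eye sees base + half/2, recovering the
    extra bit -- the same idea as hardware FRC in monitors.
    """
    base, half = divmod(value9, 2)
    out = base + (half if frame % 2 else 0)
    return min(out, 255)  # 511 would otherwise round past the 8-bit range

# 301/2 = 150.5: the panel shows 150 and 151 on alternating frames.
frames = [temporal_dither(301, f) for f in range(4)]
print(frames, sum(frames) / 4)  # [150, 151, 150, 151] 150.5
```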


Blue noise dithering also gives some pretty nice results: https://momentsingraphics.de/BlueNoise.html


would these artifacts make video compression less effective?


I remember coming upon this when trying to use PCoIP for remote access to workstations after the Windows 7 to Windows 10 transition. It's a setting somewhere in Windows 10 (maybe tied to specific video cards?) that uses dithering to improve image quality when you have a monitor that doesn't support HDR.

This absolutely wrecked the bandwidth as the whole screen was trying to refresh at 60Hz in a way that the PCoIP algorithm wasn't prepared for.

https://en.m.wikipedia.org/wiki/Teradici


They're usually flipping the least significant bit, so the video compressor will probably ignore the dithering. It does turn a perfectly stable image into an unstable one, though, so it could potentially be harmful if you're compressing video of static text and geometry. I haven't tested this before, so it's an interesting point.


They can make it more effective because they reduce the bits per pixel from e.g. 12 to 3.


Stippling is nice for stylistic reasons. Normally distributed noise is best if you want maximally efficient error diffusion.


Even if posted more than once on Hacker News, there is an interesting forum post [1] from Lucas Pope, the author of Obra Dinn, describing dither mapping in 3D. [2] (2017) has some more links.

[1] https://forums.tigsource.com/index.php?topic=40832.msg136374... [2] https://news.ycombinator.com/item?id=15766249


Every time I read a Lucas Pope post, I just want to clap. He is someone who truly grasps the intersection of experience and ingestion. If anyone could embody a digital artist, it would be him, since he seems to work to create things with the full intention of how they're seen. A true hacker.


I remember dithering from Samantha Fox Strip Poker on the C64. It wasn't very good, but when you have an active mind, who needs photorealistic graphics?


From the pictures of error diffusion dithering, it looks like they are consistently applied row by row in the same direction (probably left to right).

There is a variation of these, often called serpentine scanning, where every odd row is processed in one direction (left to right) and every even row is processed in the other direction (right to left). It usually produces a more visually appealing result.
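A minimal Python sketch of this variant applied to Floyd–Steinberg (my own illustration, assuming a grayscale image as a list of rows of floats in [0, 1]):

```python
def floyd_steinberg(pixels, serpentine=True):
    """1-bit Floyd-Steinberg dithering on a grayscale image.

    With serpentine=True, odd rows are scanned right-to-left and the
    error-diffusion kernel is mirrored, which breaks up the directional
    "worm" artifacts of plain left-to-right scanning.
    """
    h, w = len(pixels), len(pixels[0])
    img = [row[:] for row in pixels]  # work on a copy
    for y in range(h):
        rightward = not (serpentine and y % 2)
        xs = range(w) if rightward else range(w - 1, -1, -1)
        step = 1 if rightward else -1
        for x in xs:
            old = img[y][x]
            new = 1.0 if old >= 0.5 else 0.0
            img[y][x] = new
            err = old - new
            # Diffuse the quantization error to unvisited neighbors,
            # mirroring the kernel when scanning right-to-left.
            if 0 <= x + step < w:
                img[y][x + step] += err * 7 / 16
            if y + 1 < h:
                if 0 <= x - step < w:
                    img[y + 1][x - step] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if 0 <= x + step < w:
                    img[y + 1][x + step] += err * 1 / 16
    return img
```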


Pretty nice discussion over here too, for the same article, two years ago: https://news.ycombinator.com/item?id=25633483 (2021)

(Pointing this out/linking in best faith for references)


I thought I had commented on that thread but ... apparently not.

Anyway. I built my own hand-rolled dither filter for my canvas library - see the CodePen demo to see it attempt to dither a live video stream (warning: site will ask for permission to use your device's camera before displaying the canvas) - https://codepen.io/kaliedarik/pen/OJOaOZz

The problem I have with the filter is that it's not consistent across video frames (the dither jiggles around, even when nothing is moving between frames). It's really annoying and I don't know how to solve it. Does anyone know of any research/solutions for this sort of thing?



I have seen that page before - an excellent writeup on how to morph the dither pattern to a sphere and match it to camera rotation to get rid of the dither jank as a character/camera moves through the game scene. But I don't think it applies to my particular issue of seeing dither jank on a webcam livestream.

Maybe there's a way of adapting my algorithm to give it some memory of previous frames to help minimise the jank that occurs as the current frame's output is calculated. Or identify static parts of the current frame compared to the previous frame and only recalculate the dither for the pixels that have changed beyond a minimal color distance threshold?


Wow. I didn't expect visiting HN to read about my Dad (Bryce Bayer) again!


Back when image viewers would change the screen palette, resolution, and color depth just to show a picture, and decoding compressed images came with a progress bar attached, people regularly wanted dithered black-and-white, 16-color, or 256-color images made from better and bigger sources. Desktops only became True Color when VRAM started to be measured in megabytes, and there was no point in storing or transferring enormous files to use as illustrations or wallpapers. I remembered an artifact from the palette-color era, a shareware application called "Graphic Workshop" (GWS) with extensive dithering options, and found it among conveniently emulated software on archive.org.

https://archive.org/details/win31_festival_GWSWIN10

https://archive.org/details/msdos_festival_GWS61G

It would be great to load other files instead of the example images. In fact, archive.org stores virtual file system changes on the client, and files remain available across sessions, but there seems to be no way to interact with those virtual DOSBox mounts, or to transfer any data except by typing it.

Readme of 1993 version: https://rentry.co/gwsreadme


I still find Atkinson dithering the prettiest. Here's an old Python2 implementation: https://gist.github.com/radupotop/ea41c108de0cb987eacff6dae2...
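For comparison, here is a short Python 3 sketch of the Atkinson kernel (my own illustration, not the linked gist; same list-of-float-rows convention, values in [0, 1]):

```python
def atkinson(pixels):
    """1-bit Atkinson dithering on a grayscale image.

    Atkinson's kernel deliberately diffuses only 6/8 of the quantization
    error, which crushes shadows and highlights slightly but gives the
    airy look familiar from early Macintosh graphics.
    """
    h, w = len(pixels), len(pixels[0])
    img = [row[:] for row in pixels]  # work on a copy
    neighbors = [(1, 0), (2, 0), (-1, 1), (0, 1), (1, 1), (0, 2)]
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 1.0 if old >= 0.5 else 0.0
            img[y][x] = new
            err = (old - new) / 8  # only 6/8 of the error is propagated
            for dx, dy in neighbors:
                if 0 <= x + dx < w and 0 <= y + dy < h:
                    img[y + dy][x + dx] += err
    return img
```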


I made a C++ project[1] (using OpenCL and blue noise) to generate dithered images/video for a course project (the course taught how to program for the GPU). If anyone wants to just jump straight into generating dithered images, you can try it out yourself. (It only works if OpenCL is configured for your system. It works on my desktop with a GPU and does not work on my laptop without a separate GPU.)

EDIT: Oh right, it also requires FFmpeg to be installed as it uses it. And libpng. And CMake. And PkgConfig.

[1]: https://github.com/Stephen-Seo/EN605.617.81.FA21_StephenSeo_...


Seems like a good application for deep learning. None of the dither algorithms are a match for what seems to be the handmade dither in the game screenshot.

Or at least, we could boost the high frequencies on the image somehow before putting it in, to preserve detail, and add some randomness to the diffusion algorithms?


Agreed on one of the best introductions to dithering.

Shadertoy has also been a good site to quick experiment with dithering using shader code: https://www.shadertoy.com/results?query=tag%3Ddithering


See also https://en.wikipedia.org/wiki/Halftone, a technique used in the print world to achieve a similar range of shades and colour using a small base set.


See also: List of Color Quantization Algorithms <https://news.ycombinator.com/item?id=26646035>


It’s kind of interesting that Louis Steinberg does not have a Wikipedia page




