FTA: I have always liked the look of images processed with Atkinson Dithering, the algorithm used in the original Apple Macintosh.
Nitpick: I don’t think that was used in the original Mac and can’t think of what OS call would use it. I also don’t think MacPaint implemented it (some of the pictures it shipped with may have).
Other indications it wasn’t used in the original Mac:
- “the phone book” (https://www.folklore.org/StoryView.py?project=Macintosh&stor...) doesn’t have an index entry for ‘dithering’
- Andy Hertzfeld says he implemented the algorithm for the ThunderScan software (https://www.folklore.org/StoryView.py?project=Macintosh&stor...)
I experienced it first when using a flatbed scanner connected to the Macintosh (not the Thunderscan, however — I guess a little later).
By the time Hypercard made the scene you started to come across Atkinson Dithered images with regularity (likely much of the clipart that came with Hypercard).
I loved the effect as well. I made use of it at the time, but then had a more recent project where I wanted that "vintage look", so I've made use of it again recently. I think it's the "tube amp" of dithering algorithms.
The dithering noise often suggests a pattern — not unlike old metal engravings, where the artist cut the lines not as an arbitrary cross-hatch but to follow or suggest contours. I don't know, I can't explain the appeal.
I don’t think that was used in the original Mac and can’t think of what OS call would use it.
You're spot on. Images on the original Mac were all 1-bit arrays. There was no concept of 'color' or 'grey', or any way to translate those to and from the 1-bit-per-pixel images.
Doing it for the scanner project, which did look at levels of grey, makes more sense. Those scanned images ended up in later Mac projects like Hypercard.
Utter nitpick: there were a few system calls for future support of color devices. The idea was to support up to 32 bitplanes (https://en.wikipedia.org/wiki/Bit_plane).
Inside Macintosh says: “if you want to set up your application now to produce color output in the future, you can do so by using QuickDraw procedures to set the foreground color and the background color.”
I don’t think that was ever used, though. When color arrived, they (rightfully; bit planes are slow for just about anything but transparent overlays) went for packed pixels (https://en.wikipedia.org/wiki/Packed_pixel).
"Atkinson dithering is a variant of Floyd-Steinberg dithering designed by Bill Atkinson at Apple Computer, and used in the original Macintosh computer."
I don't recall an obvious way for ordinary users in the original Mac days to dither images. But it was difficult/impossible for ordinary users to capture an image back then, anyway. The ThunderScan is the first consumer device I remember for real-world image input.
On the other hand, the Apple II came with a demo disk that had half a dozen or so scanned images and they had that dithered look. So there were at least experimental scanners in existence circa 1981, when we bought our Apple II.
I had fond memories of Atkinson dithering from my time on early Macs as a child. That led me to create the ultimate 1-bit dithering Mac app (it even exports to MacPaint format so you can display photos from your modern Mac on your '80s B&W Mac). I wrote about the experience of implementing Atkinson dithering here:
I am the author, this was my weekend project. The first released version had an embarrassing bug that prevented it working properly on some browsers when zoomed. This is especially annoying since I put some effort into accessibility.
It is fixed now on the site (refresh to clear your cache). The repository will have the fix soon (edit: just pushed!)
This is lovely. I really like the style of the outputted images.
A totally pointless and unnecessary technical suggestion: it should be possible to use a plain img tag with its src in the HTML, do the work in a web worker with an OffscreenCanvas (doing the same work your canvas does now), convert the result to a blob, and pass that back to the main thread to replace the img's src. That would push all the work off to a separate thread, so you could have loads of them on a page without slowing things down. The only consideration is whether you can copy from an img tag to an OffscreenCanvas efficiently; that might be blocked by CORS stuff.
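For illustration, the worker side of that could look roughly like this (the file name, message shape, and the dither step are placeholders here, not the component's actual code):

```typescript
// dither-worker.ts (sketch): receive an ImageBitmap, dither it on an
// OffscreenCanvas, and post back a Blob the main thread can use as an img src.
self.onmessage = async (e: MessageEvent<{ bitmap: ImageBitmap }>) => {
  const { bitmap } = e.data;
  const canvas = new OffscreenCanvas(bitmap.width, bitmap.height);
  const ctx = canvas.getContext('2d')!;
  ctx.drawImage(bitmap, 0, 0);
  const img = ctx.getImageData(0, 0, canvas.width, canvas.height);
  // ...run the dithering pass over img.data here...
  ctx.putImageData(img, 0, 0);
  const blob = await canvas.convertToBlob({ type: 'image/png' });
  postMessage({ blob }); // structured-clone the blob back to the page
  bitmap.close();        // free the decoded source bitmap
};
```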
Thanks for your feedback. I originally wanted to override the img tag (seems obvious, right?) but it turns out you cannot get access to the image data for security reasons, even from a class that extends Image, even for images from the same domain.
It would be possible to have an external script that post-processes the img elements on the page, but then you lose the nice as-dithered-image encapsulation. It may be a better way to go in practice, though.
Using a web worker is certainly something I considered and maybe I will get around to implementing it. Copying data as you suggest is possible but I found that doing it on the main thread was "fast enough" for a reasonable number of images.
Instead of loading the image in the page and then accessing it, disable the loading entirely by using e.g. `data-src` instead of `src` in the HTML. You then fetch() the URL from that data attribute in your JavaScript.
Note that this does mean the images will be delayed from loading, and will cause a layout shift when they're finally inserted. It's best to inline the image's height & width into the HTML if you know them in advance (or use `aspect-ratio`): https://www.aleksandrhovhannisyan.com/blog/setting-width-and...
[0] Just as a note, the tutorial says "<img> tags actually block your application load," which is very wrong.
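Sketching that `data-src` approach on the main thread (the worker file name and message format here are made-up placeholders, paired with the worker sketch above):

```typescript
// Sketch: find images that opted out of normal loading via data-src,
// decode them off the main thread, and swap in the dithered result.
const worker = new Worker('dither-worker.js');

document.querySelectorAll<HTMLImageElement>('img[data-src]').forEach(async (img) => {
  // Inline width/height (or aspect-ratio) on the img avoids the layout shift.
  const resp = await fetch(img.dataset.src!);
  const bitmap = await createImageBitmap(await resp.blob());
  worker.postMessage({ bitmap }, [bitmap]); // transfer, don't copy

  // With many images you'd want to correlate requests and responses (e.g. by id);
  // this one-shot listener is only enough for a single image.
  worker.addEventListener('message', (e: MessageEvent<{ blob: Blob }>) => {
    img.src = URL.createObjectURL(e.data.blob);
  }, { once: true });
});
```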
It should be noted that Atkinson dithering is just a variation of Floyd-Steinberg dithering. The concept is quite simple: you take the quantization error at a pixel and distribute it to neighbouring pixels (in Floyd-Steinberg the error fractions sum to 1; Atkinson deliberately diffuses only 3/4 of the error, which is part of its look). You can actually create your own dithering algorithm quite easily, but I doubt that it will be better than the existing ones.
Here is a great visual explanation of both algorithms with a comparison. Implementing them is a nice and quite simple project.
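For reference, a minimal sketch of the Atkinson pass over an ImageData buffer (the function name, grayscale conversion, and threshold are my own choices, not taken from the linked explanation):

```typescript
// Sketch: Atkinson error diffusion to 1-bit black/white over an ImageData buffer.
// Each of six neighbours receives error/8, so only 3/4 of the error is diffused.
function atkinsonDither(src: ImageData): ImageData {
  const { width, height, data } = src;
  // Work on a float copy of the luminance so diffused error isn't clamped early.
  const gray = new Float32Array(width * height);
  for (let i = 0; i < width * height; i++) {
    gray[i] = 0.299 * data[i * 4] + 0.587 * data[i * 4 + 1] + 0.114 * data[i * 4 + 2];
  }
  // Neighbour offsets that each receive error / 8.
  const taps: Array<[number, number]> = [[1, 0], [2, 0], [-1, 1], [0, 1], [1, 1], [0, 2]];
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = y * width + x;
      const newVal = gray[i] < 128 ? 0 : 255;
      const err = (gray[i] - newVal) / 8;
      for (const [dx, dy] of taps) {
        const nx = x + dx, ny = y + dy;
        if (nx >= 0 && nx < width && ny < height) gray[ny * width + nx] += err;
      }
      // Write the 1-bit result back as opaque black or white RGBA.
      data[i * 4] = data[i * 4 + 1] = data[i * 4 + 2] = newVal;
      data[i * 4 + 3] = 255;
    }
  }
  return src;
}
```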
We can control the pixel size the dither algorithm uses. In this case the "crunch" is set to produce a one-to-one match between screen and image. On high DPI screens this gives a finer image, maybe too fine on really small screens. ... Of course we can go the other way as well.
If you open up your inspector and change the "crunch" value then it re-renders immediately.
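That live re-render on attribute edits is what the Custom Elements lifecycle callbacks give you; roughly (this is just the general shape, not the component's actual source):

```typescript
// Sketch: a custom element that re-dithers whenever its "crunch" attribute changes.
class DitheredImage extends HTMLElement {
  static observedAttributes = ['crunch', 'src'];

  attributeChangedCallback(name: string, oldValue: string | null, newValue: string | null) {
    if (oldValue !== newValue) {
      this.render(); // re-run the dither at the new pixel size
    }
  }

  private render() {
    const crunch = Number(this.getAttribute('crunch') ?? 1);
    // ...size the internal canvas from the element's CSS size / crunch and redraw...
  }
}
// The thread mentions the tag name as-dithered-image; the class body above is illustrative.
customElements.define('as-dithered-image', DitheredImage);
```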
Cool concept, and nicely packaged as a web component! As others have mentioned, it doesn't work correctly when changing the zoom level.
I see the author is aware of this causing UI jank, but there are performance issues. Currently, every resize triggers a getImageData call on the canvas and then multiple dither calls, totaling close to ~150ms. The getImageData alone takes an entire frame to compute on every redraw.
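One possible mitigation, assuming the existing drawImage → getImageData → dither path stays as-is, would be to coalesce resize work so the expensive read-back and dither run at most once per frame (just a sketch):

```typescript
// Sketch: coalesce bursts of resize events so the expensive
// getImageData + dither pass runs at most once per animation frame.
let scheduled = false;

function scheduleRedraw(redraw: () => void) {
  if (scheduled) return;
  scheduled = true;
  requestAnimationFrame(() => {
    scheduled = false;
    redraw(); // existing path: drawImage -> getImageData -> dither -> putImageData
  });
}

// Usage: new ResizeObserver(() => scheduleRedraw(render)).observe(element);
```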
This was my weekend project. I have figured out the problem: Firefox and Chrome (but not Safari) report a fractional devicePixelRatio when zoomed, which the dithering code cannot handle properly.
I had not anticipated this - I'll fix it in a couple of hours.
In the meantime, you can reset your zoom back to actual size to get the effect.
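For anyone curious, a sketch of sizing the backing canvas so a fractional devicePixelRatio can't produce a non-integer buffer size (this is an assumption about where the problem bites, not necessarily the actual fix):

```typescript
// Sketch: derive the canvas backing-store size from the CSS size and
// devicePixelRatio, rounding so a fractional ratio (e.g. 1.25 when zoomed
// in Firefox/Chrome) always yields whole-pixel dimensions.
function sizeCanvas(canvas: HTMLCanvasElement) {
  const dpr = window.devicePixelRatio || 1;
  const rect = canvas.getBoundingClientRect();
  canvas.width = Math.max(1, Math.round(rect.width * dpr));
  canvas.height = Math.max(1, Math.round(rect.height * dpr));
}
```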