One is effectively posterizing a grayscale image, and his primary goal was to reduce artifacts drawing unwanted attention to the borders between poster levels.
With improvements in hardware, other approaches to dithering took over. The last time I saw my father's pattern was on a DEC box logo. He moved to the color group at Kodak, and designed the "Bayer filter" used in digital cameras.
I think this is one of the better "well, my dad can beat up your dad" type of stories. I know you absolutely didn't mean it that way, but there's a part of me that read it that way on the second reading.
If you think that's good, also check out further down in this thread. Gotta love HN for finding people that wrote old libraries. In the case below, it's a library for using grayscale on a TI-83.
Kudos to your dad! Every time I explain how a digital camera works to people, invariably it's the concept and application of the Bayer filter that causes their jaw to drop.
Most people think 50 megapixels = 50 red, 50 blue, and 50 green megapixels, so it's quite eye-opening. That our eyes work similarly with cones tuned to specific frequencies is just icing on the cake after that.
Yeah, realizing the actual image is only about 1/4 of the resolution of the sensor is something not everyone grasps. Marketing of course doesn't try to explain it either. I tend to take it too far by then explaining the benefits of using 3-chip monochrome sensors (or, if not real-time, a single sensor triple-flashed) to get the full resolution for each color channel. These are usually CCDs instead of CMOS, but the theory is the same. This is how most astronomy instruments work, like the Hubble, but instead of RGB they use other filters that get "colorized" into RGB-ish.
> Marketing of course doesn't try to explain it either.
I remember early CCD digital camera brochures stating "actual resolution", and "enhanced resolution" (or something similar), where the latter is ~3x the former number. I wonder if this was referring to the demosaicing procedure.
Nope, since the early sensors were very low resolution, many cameras included interpolation engines inside, and some of them were pretty good at what they did.
They used blurry language to thinly veil the fact that the 1024x768 image you got indeed started its life as a 640x480 one out of the sensor.
sure, and i knew the pedants would beat me up over it (maybe rightly so, but i tried to use a number hyperbolically to make it obvious). however, it's not the full resolution of the chip. hence the use of monochrome chips for "full" resolution.
It's not about being pedantic, but that's literally the whole point of the Bayer process (which is what was being discussed).
If we were fine with getting 1/4-resolution output, we'd just put one color filter on each sensor pixel, skip the Bayer demosaic entirely, and call it a day.
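For anyone curious, here's roughly what the "call it a day" version looks like: a minimal sketch (assuming an RGGB layout; the names are made up, not any camera's real pipeline) that collapses each 2x2 Bayer cell into one quarter-resolution RGB pixel instead of demosaicing.

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical sketch: turn an RGGB Bayer mosaic into a quarter-resolution
     * RGB image by treating each 2x2 cell as one output pixel, no demosaicing. */
    typedef struct { uint8_t r, g, b; } rgb8;

    void quarter_res_from_rggb(const uint8_t *raw, size_t w, size_t h, rgb8 *out)
    {
        /* raw is w*h, row-major, RGGB pattern; out is (w/2)*(h/2) */
        for (size_t y = 0; y + 1 < h; y += 2) {
            for (size_t x = 0; x + 1 < w; x += 2) {
                uint8_t r  = raw[y * w + x];            /* R */
                uint8_t g1 = raw[y * w + x + 1];        /* G */
                uint8_t g2 = raw[(y + 1) * w + x];      /* G */
                uint8_t b  = raw[(y + 1) * w + x + 1];  /* B */
                rgb8 *p = &out[(y / 2) * (w / 2) + (x / 2)];
                p->r = r;
                p->g = (uint8_t)((g1 + g2) / 2);        /* average the two greens */
                p->b = b;
            }
        }
    }

Demosaicing interpolates the two missing channels at every photosite instead, which is how the full sensor resolution gets recovered, at the cost of some interpolation artifacts.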
> I find this to be the most elegant way to handle dithering
Yes, it's so simple it can be applied in a single pass on a single pixel buffer. In convolution-kernel terms it only touches half of a Moore neighbourhood, and that half can be the pixels not yet processed in the same buffer when moving through them in order.
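A minimal sketch of that single-buffer pass, using the Floyd-Steinberg weights (edge errors are simply dropped here, and the names are just for illustration):

    #include <stdint.h>
    #include <stddef.h>

    /* In-place 1-bit Floyd-Steinberg: error is only pushed to the right/below
     * neighbours, i.e. the not-yet-visited half of the Moore neighbourhood,
     * so a second buffer is never needed. */
    static void diffuse(uint8_t *img, size_t w, size_t h,
                        size_t x, size_t y, int err, int num)
    {
        if (x >= w || y >= h) return;           /* silently drop edge error */
        int v = img[y * w + x] + err * num / 16;
        img[y * w + x] = (uint8_t)(v < 0 ? 0 : v > 255 ? 255 : v);
    }

    void fs_dither_1bit(uint8_t *img, size_t w, size_t h)
    {
        for (size_t y = 0; y < h; y++) {
            for (size_t x = 0; x < w; x++) {
                int old = img[y * w + x];
                int q   = old < 128 ? 0 : 255;  /* quantise to 1 bit */
                int err = old - q;
                img[y * w + x] = (uint8_t)q;
                diffuse(img, w, h, x + 1, y,     err, 7);
                diffuse(img, w, h, x - 1, y + 1, err, 3);
                diffuse(img, w, h, x,     y + 1, err, 5);
                diffuse(img, w, h, x + 1, y + 1, err, 1);
            }
        }
    }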
> I've been looking to build an engine/game that approximates this art style: https://obradinn.com
Killedbyapixel took inspiration from the above dweet for the style of some of his proc-gen art, although I haven't dug into the how yet. I suppose deeper game/object awareness integration could produce better results than merely piping the output of the renderer into the dither algorithm; perhaps even the rendering could be optimized by targeting dithering specifically.
> perhaps even the rendering could be optimized by targeting dithering specifically.
I was wondering about this possibility as well. I'm already up to my elbows in a hand-rolled software rasterizer, which is already constrained to only using a luminance channel (8 bits).
I suppose I could still accumulate dither error and do a one-pass, 1-bit raster by combining the world-space 256-level luminance information with whatever error information is in the accumulator.
> I suppose I could still accumulate dither error and do a one-pass, 1-bit raster by combining the world-space 256-level luminance information with whatever error information is in the accumulator.
That would be interesting to see. I like the idea of renderers that give up things, e.g. resolution, in exchange for something else. Pixel depth is just another. It would be interesting what gains in other areas might be possible if the rasterisation stage turns into 1-bit. Then again, the cost of actually operating on single-bit stored values might outweigh any gains... unless this is done in hardware.
Think about it like this... In a 1-bit dithered scheme, one 64-bit machine word represents 64 entire pixels' worth of values. You can access 64 pixels at a time and use bit-wise operations to manipulate the individual values.
Compared to a typical 24-bit RGB array, you've just reduced your full-scan array access requirements from 3 per pixel to 1/64 per pixel.
For 720p (1280x720), you'd only be talking about 14,400 loop iterations to walk the entire frame buffer.
You could almost get away without even compressing the frames in some applications. At 30 fps, totally uncompressed, 720p 1-bit dither comes in just under 30 Mbps (1280 × 720 × 30 ≈ 27.6 Mbit/s).
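A rough sketch of that layout, assuming 64-bit words and row-major packing (all names made up):

    #include <stdint.h>
    #include <stddef.h>

    /* Packed 1-bit framebuffer: 64 pixels per word.
     * 1280*720/64 = 14,400 words per frame, and 1280*720*30 bits/s is
     * roughly 27.6 Mbit/s uncompressed at 30 fps. */
    #define FB_W 1280
    #define FB_H 720
    #define FB_WORDS (FB_W * FB_H / 64)

    static uint64_t fb[FB_WORDS];

    static inline void set_pixel(size_t x, size_t y, int on)
    {
        size_t bit = y * FB_W + x;
        uint64_t m = 1ULL << (bit & 63);
        if (on) fb[bit >> 6] |= m;
        else    fb[bit >> 6] &= ~m;
    }

    /* Whole-frame operations touch 64 pixels per iteration, e.g. invert: */
    static void invert_frame(void)
    {
        for (size_t i = 0; i < FB_WORDS; i++)
            fb[i] = ~fb[i];
    }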
And for 1-bit, Atkinson dithering can produce a good effect. Floyd-Steinberg and Atkinson are the only two I'd usually consider for 1-bit dithering, with a preference for Atkinson for most images.
Ordered dithering, even at 8-bit, is…not my cup of tea.
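For reference, a minimal in-place sketch of the Atkinson kernel (same single-buffer structure as the Floyd-Steinberg sketch earlier in the thread; edge errors dropped, names just for illustration):

    #include <stdint.h>
    #include <stddef.h>

    /* Atkinson dithering to 1-bit: only 6/8 of the quantisation error is
     * propagated (err/8 to each of six neighbours), which gives the
     * characteristic lighter, punchier look compared to Floyd-Steinberg. */
    static void spread(uint8_t *img, size_t w, size_t h,
                       size_t x, size_t y, int e)
    {
        if (x >= w || y >= h) return;
        int v = img[y * w + x] + e;
        img[y * w + x] = (uint8_t)(v < 0 ? 0 : v > 255 ? 255 : v);
    }

    void atkinson_dither_1bit(uint8_t *img, size_t w, size_t h)
    {
        for (size_t y = 0; y < h; y++) {
            for (size_t x = 0; x < w; x++) {
                int old = img[y * w + x];
                int q   = old < 128 ? 0 : 255;
                int e   = (old - q) / 8;   /* each neighbour gets err/8 */
                img[y * w + x] = (uint8_t)q;
                spread(img, w, h, x + 1, y,     e);
                spread(img, w, h, x + 2, y,     e);
                spread(img, w, h, x - 1, y + 1, e);
                spread(img, w, h, x,     y + 1, e);
                spread(img, w, h, x + 1, y + 1, e);
                spread(img, w, h, x,     y + 2, e);
            }
        }
    }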
One more relevant use case: I've found Floyd-Steinberg dithering to be somewhat useful in compressing scanned JPEG paper notes at 600dpi (~1.5-2.5MB per page) to 1-bit PNG (100-300KB).
At full scale, there is no loss in readability and in fact I think I prefer the aesthetic quality of the 1-bit PNG. However, at <50% zoom, the dithered version was way less readable than a 2-bit PNG of slightly higher file size, so I ended up not compressing to 1-bit.
Edit: I was wrong, my (ex-) image viewer was at fault at 50% zoom. Viewing in other apps, the dithered version is visually no different from the 2-bit version. I bet the difference is even less with a HiDPI screen.
Wow! I worked on digital camera chips and had coworkers writing demosaic algorithms. We wouldn't have even had those jobs if it wasn't for your father.
now that is something that could be interesting. maybe change from squares to diamonds for effect, but i like the suggestion! a Bayer stained-glass window sounds like something that should be found in medieval cathedrals.
Wow, I vividly remember seeing these patterns in my relatively early computing days. I think it was primarily in Windows 3.x, back when graphics modes with 16 colors were still normal.
It definitely was not Floyd-Steinberg or anything else using error diffusion (which I also remember seeing in those days, for example in some Windows applications' splash dialogs), because there were these very characteristic stable patterns.
During countless hours staring at the dithered blue gradient of InstallShield, for example.
Seeing that brings back a lot of nostalgia for the old PC Paintbrush that was once ubiquitous on DOS machines. The images it turned out were really great for the time, and I sometimes miss the aesthetic of those dithering patterns.
Ordered dithering is great. I use it in my video player, it looks good and is a fast alternative to error diffusion which requires more hardware resources.
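For anyone who hasn't seen it, that fast path is just a per-pixel compare against a tiled threshold matrix; a tiny sketch with the classic 4x4 Bayer matrix (the threshold scaling here is one common choice, not the only one):

    #include <stdint.h>
    #include <stddef.h>

    /* Ordered dithering with the 4x4 Bayer matrix. Each pixel is compared
     * against a fixed, tiled threshold, so it is a pure per-pixel operation
     * with no error state to carry around, which is why it is so cheap in
     * video/hardware paths compared to error diffusion. */
    static const uint8_t bayer4[4][4] = {
        {  0,  8,  2, 10 },
        { 12,  4, 14,  6 },
        {  3, 11,  1,  9 },
        { 15,  7, 13,  5 },
    };

    void ordered_dither_1bit(const uint8_t *in, uint8_t *out, size_t w, size_t h)
    {
        for (size_t y = 0; y < h; y++) {
            for (size_t x = 0; x < w; x++) {
                /* scale the 0..15 matrix entry into the 0..255 range */
                uint8_t threshold = (uint8_t)(bayer4[y & 3][x & 3] * 16 + 8);
                out[y * w + x] = in[y * w + x] > threshold ? 255 : 0;
            }
        }
    }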
Haha, thanks for dredging this up, I made that :) It seems quite related to the OP article. It's a library for making grayscale games on the TI-83 graphing calculator, which has a monochrome display. The main challenge was optimizing the interrupt routine, the Z80 assembly code that performed the flickering/dithering that achieves the grayscale effect, to fit within the tiny number of clock cycles available on the 6 MHz Zilog Z80 processor. Even after optimization, it took up ~50%-75% of all available cycles. Some people managed to make some pretty fun grayscale games with it (e.g. https://www.ticalc.org/archives/files/fileinfo/331/33153.htm...). This was obviously in the pre-smartphone era, so the TI-83 was quite popular for playing games in class, and hand-written assembly code was the only way to make it fast.
I knew immediately that the link was to Desolate before opening it :) It blew me away the first time I played it, as I was used to my shitty turn-based TI-BASIC games.
Holy crap! This takes me back. I used this (or something based on it) on my TI84 SE+ to play around with grayscale over a decade ago. It's the first thing that came to mind when I saw this article. I never got into ASM on the TI calcs but I wrote a TON of TI-Basic. I spent a ton of time on those forums and posted a number of apps/games I wrote (though I can't find them now).
Cool to see that one of the authors of the most cited paper in machine learning also got their start hacking on TI calculators. What a great learning environment that was!
Not that no one had thought of it before, but I believe that I may hold the first claim of a grayscale TI calculator demo. I released a four-frame, 4-bit grayscale animation of Beavis & Butthead headbanging for the newly minted Zshell on the TI-85 in 1992. I posted it to one of the listservs or usenet groups, but I have never been able to find a copy in anyone's archives. I'd love to see it again. It was not a fancy program. The frames were stored as 4 x 4 x 64x128 bitmaps = 16KB, so it consumed like 2/3 of the calculator's memory. Fun times.
If anyone is a usenet archive ninja, my program was called 'BBANIM' and there was a TI-BASIC POC and a Zshell asm version released.
I recall that the first game using PoV grayscale was "Plain Jump" (sic) shortly afterwards which apparently continues to be a popular project to clone.
Nice final result, but I would have skipped the PWM detour. PWM sucks and has well-known aliasing issues. Sigma-delta (AKA a "1-bit DAC") is error-diffusing, costs the same to implement, and should always be preferred IMO.
A lot of "tricks" from the past were accommodating slow processing power, but if you are driving this from, say, an FPGA, there's no reason not to just do the best thing.
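For illustration, a first-order sigma-delta really is just an accumulator per output channel; a minimal sketch (the 8-bit scaling is an assumption, not taken from the post):

    #include <stdint.h>

    /* First-order sigma-delta ("1-bit DAC"): the quantisation error is kept
     * in an accumulator instead of thrown away, which pushes the noise away
     * from low frequencies, unlike plain PWM. */
    typedef struct { int acc; } sd1;

    static inline int sd1_step(sd1 *s, int target /* 0..255 */)
    {
        s->acc += target;
        if (s->acc >= 256) {   /* emit a 1 and subtract full scale */
            s->acc -= 256;
            return 1;
        }
        return 0;              /* emit a 0, keep accumulating */
    }

In an FPGA that's an adder and a register per channel, which is where the "costs the same as PWM" argument comes from.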
One thing I didn't see mentioned: the reason this all works is because the liquid crystals don't turn very fast so effectively they are providing a filter that implements the gray-scale output from a binary input. What I'm curious about is if this is a linear process. Based on the PWM result it looks pretty linear.
FWIW, as someone who doesn’t really know anything about LCDs but has messed around with circuits and LEDs, the digression into PWM bridged the gap nicely for me at least. It probably wasn’t necessary from a technical point of view but it helped tell the story.
PWM still has its place - it's not uncommon for high-end sigma deltas to have a multi-bit rather than one-bit output, which feeds into a PWM final output stage. It helps with numerical stability. I've used a similar technique to improve the audio output in FPGA projects where a simple 1st order SD feeds a 5-bit PWM. On the boards in question the output drivers seem to be somewhat asymmetric, making an SD quite noisy at higher clock frequencies. With the PWM final output stage the same effect manifests as a DC offset instead, since (when not saturated) the PWM has a fixed number of rising and falling edges in a given time period.
OP comments on 'error diffusion dithering in 3d' in a few places...
If OP did that, but measuring and taking into account the slow rate of change of color of these pixels, then I think he could get far better results for watching a movie on the LCD, because it should get rid of much of the blurring for fast motion.
If the error signal is allowed to propagate both back and forward in time (which would require preprocessing), then you could probably reduce the blurring even further - effectively a pixel changing from one shade of grey to another can start the transition ahead of time.
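A speculative sketch of that direction, assuming the pixel response can be approximated by a first-order lag (ALPHA here is a made-up per-frame factor; a real implementation would measure it, and the look-ahead variant would need the whole clip preprocessed):

    #include <stddef.h>

    #define ALPHA 0.25f  /* assumed per-frame response factor, 0..1 */

    /* Model each pixel's slow response as a first-order lag of the binary
     * drive, and pick each frame's bit so the *modelled* brightness tracks
     * the target, instead of dithering the target value directly. */
    typedef struct { float level; } pixel_state;   /* modelled brightness 0..1 */

    int next_drive_bit(pixel_state *p, float target /* 0..1 */)
    {
        float if_on  = p->level + ALPHA * (1.0f - p->level);
        float if_off = p->level + ALPHA * (0.0f - p->level);

        float err_on  = target - if_on;
        float err_off = target - if_off;
        int bit = (err_on * err_on < err_off * err_off) ? 1 : 0;

        p->level = bit ? if_on : if_off;           /* advance the model */
        return bit;
    }

This greedy version only looks at the current frame; carrying the residual error forward, or planning a few frames ahead as suggested, should reduce the ghosting further.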
This reminded me of the fantastic dev blog of Lucas Pope, the creator of games like Papers, Please and Return of the Obra Dinn. He has an entry on dithering his 3D game and how he managed to "stick" the dithering to the 3D objects.
PWM greyscale is commonly used in other display technologies, like LEDs where controlling current is not feasible, as well as Texas Instruments' DMD, a micromirror device whose mirrors can only be on or off.
A voltage-controlled current source is a pretty basic op-amp circuit, actually. There are also some tricks to turn common voltage regulators into current controls. If your "information" is already encoded as current, you can more or less current-mirror with a very basic BJT circuit (it's not the best circuit, but if you don't care about accuracy it's fine).
I'd say that current control is _more expensive_ than PWM though. PWM is just software, or a 555 timer if you're old school; both are cheaper than op-amps / voltage regulators.
Or maybe you mean, "not feasible for the costs people expect", which is probably true. PWM is just so much cheaper in practice.
It's "feasible" to use dynamic current sources per pixel, and many LED pixel drivers do offer this for dot correction / colour balance / global brightness, but PWM is almost always used for pixel values; it's much easier to achieve the necessary resolution staying in the digital domain, and there's not really any downside. The other big issue with current control is that the simple ways to do dynamic current are linear, so effectively use constant power regardless of pixel state, and also burn a lot of silicon area and might start creating thermal issues in the driver. At high power levels, current regulated switch mode DC-DC drivers start to make sense, but doing that per pixel is definitely not feasible.
When I needed a beeper with a time delay for a fridge door, I looked at retail prices for a dual-555 IC (a 556) and for a dual-opamp IC (ended up going with a 6002) and was surprised to find the opamp was something like three times cheaper, even taking into account the hefty capacitor for the time delay. (Active buzzers wouldn’t run off a 3V cell so I did need a dual IC for the delay and the square-wave generator.) Is this just a retail-specific distortion?
The last time I coded something along these lines was way back in the day for pulsing the Amiga's power/disk drive LEDs in time to music and sound effects, instead of their usual on/off/half brightness states. Not quite so useful, but I remember having fun coding it :D I was surprised at just how effective it was, without any noticeable flicker.
Interacting with hardware is on another level. I remember taking some introductory hardware classes, and we began simulating our circuits, just combinations of gates and whatnot, and you'd hit a weird signal pattern...
"Ohh, that's just a gate-glitch, we'll discuss that in later courses!"
The way that physics interacts with hardware is incredible, and the dance that we all do back and forth between hardware and software really strikes me as magical. I love it.
Wow, this is really impressive. I would have stopped at various types of dithering, but these techniques are cool and the results look like they really work!
I'm not a hardware person, but now I sort of want to try to simulate this myself to see what it looks like.
Since we have the whole video in advance (we can look forward in time) and we can also measure the response function of the pixels, it would be possible to preprocess the whole video to make it look even better (less ghosting).
The computational and engineering effort for these hacks to get to those results in production would have made those price-sensitive consumer gadgets far more expensive. It just wasn't economically feasible.
Also, if I'm not mistaken, those hacks look good on static images but could produce nasty artifacts on moving images.
They actually gave an example that was: the Game Boy. I think in that case the big issue with going beyond just 4 gray levels is RAM and ROM space, and also potentially the speed of the PPU and display driver in dealing with the extra data.
You know, I had a cool hardware hacking idea I really wanted to do, but both TFTs and eInk screens are prohibitively expensive. I just wanted a low-resolution/dpi cheap monochrome display. Lack of demand apparently meant they cost more than normal color LCDs.
Is it just me? I'm looking at the screenshot below the sentence "Then bring in the noise-shaper.", the one that shows the difference between the 1st-order noise shaper and the Sequence LUT, and… I'm seeing only a very small difference!
Ah, thank you, the post says "But as you could see from the video progress bar, I am not finished yet" but doesn't embed the video or have a link to it anywhere that I could see.
AKA bit-banging PWM. I've never seen it working with this kind of display; nice project. It can be done with even higher-frequency signals too (it requires an oscilloscope and a few tries to get a smooth signal).
I can't swear that nobody ever tried it, but such a demo would have been very impressive, and I can't imagine how it could have been done. Those original black-and-white Macintoshes had an 8 MHz processor, and the screen was 512x342 pixels; just performing a single cycle's worth of computation per pixel would limit you to 45 Hz. Even if you could somehow miraculously perform dithering in a single instruction per pixel, you'd have no CPU left to run the animation or do anything else.
Would it have provided any benefit? CRTs have continuously variable brightness. The bottleneck was having large enough video RAM to even store the higher pixel depth, the processing power to draw it, and the bandwidth to do the output.
Circuitry to convert a multi-bit value to a voltage doesn't seem to have been cost-prohibitive. Even cheap devices that predate the Mac, like the Atari 2600, could do that.
I'm actually not sure why they went 1-bit with the Mac. Maybe they felt having more pixels was so important that going monochrome was a reasonable trade-off.
The original NeXT had 2-bit greyscale (black, white, 2 shades of gray), and that looked pretty nice. It also had lots of pixels and was way more expensive.
There was some code allowing a 12-bit RGB display on an 8-bit palette screen mode, using very simple temporal dithering (PWM). For example, a demo for old versions of the Allegro library. This blog post shows a way to improve on that.
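For context, that kind of temporal dithering boils down to plain PWM in time per channel. A minimal sketch, assuming (purely for illustration) an 8-bit target channel and a display that only resolves the top 4 bits; this is not how the Allegro demo actually did it:

    #include <stdint.h>

    /* Temporal dithering of one colour channel: the display only shows the
     * top 4 bits, so flip between the two nearest displayable levels from
     * frame to frame with a duty cycle set by the low 4 bits; plain PWM in
     * time, averaging out to the full 8-bit value over 16 frames. */
    static inline uint8_t temporal_channel(uint8_t target,  /* 0..255       */
                                           unsigned frame)  /* frame counter */
    {
        uint8_t base = (uint8_t)(target >> 4);  /* coarse 4-bit level, 0..15 */
        uint8_t frac = (uint8_t)(target & 15);  /* leftover fraction         */

        /* Over a 16-frame period, show base+1 on 'frac' of the frames. */
        if (base < 15 && (frame & 15) < frac)
            return (uint8_t)(base + 1);
        return base;
    }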
https://en.wikipedia.org/wiki/Ordered_dithering