Dithering is still very common in rendering pipelines. 8 bits per channel is not enough to capture subtle gradients, so you get lots of banding, particularly in mostly monochrome gradients produced by light sources. So you render everything to a floating-point buffer and apply dithering when converting down to 8 bits.
Unlike the examples in this post, this dithering is basically invisible at high resolutions, but it’s still very much in use.
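A rough sketch of that final step (the noise source here is just plain `rand()` as a stand-in; real pipelines usually use blue noise or an ordered pattern):

```c
#include <stdint.h>
#include <stdlib.h>

/* Quantise a linear float value in [0, 1] to 8 bits, adding roughly
 * one LSB of white-noise dither just before rounding so that banding
 * turns into unstructured noise instead of visible steps. */
static uint8_t quantise_dithered(float value)
{
    float noise  = ((float)rand() / (float)RAND_MAX) - 0.5f; /* -0.5 .. +0.5 */
    float scaled = value * 255.0f + noise;
    if (scaled < 0.0f)   scaled = 0.0f;
    if (scaled > 255.0f) scaled = 255.0f;
    return (uint8_t)(scaled + 0.5f); /* round to nearest */
}
```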
Another place where dithering is useful in graphics is when you can’t take enough samples at every point to get a good estimate of some value. Add jitter to each sample and then blur, and suddenly each point is influenced by the samples made around it, giving higher effective fidelity.
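A toy sketch of that idea (the `sample_scene` callback is a made-up placeholder): instead of always sampling at the pixel centre, jitter the sample position within the pixel, and let a later blur or temporal accumulation pass average the resulting noise away.

```c
#include <stdlib.h>

/* One sample per pixel, but jittered within the pixel footprint
 * instead of always hitting the centre. */
float shade_pixel(int x, int y, float (*sample_scene)(float u, float v))
{
    float jx = (float)rand() / (float)RAND_MAX; /* 0..1 within the pixel */
    float jy = (float)rand() / (float)RAND_MAX;
    return sample_scene((float)x + jx, (float)y + jy);
    /* A blur or accumulation pass afterwards means each output value
     * is effectively influenced by many jittered samples nearby,
     * not just one fixed one. */
}
```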
I recently learned the slogan “Add jitter as close to the quantisation step as possible.” I realised that the “quantisation step” is not just when reducing to a lower bit depth, but basically any time there is an if-test on a continuous value! That opens up a lot of possible places to add dithering!
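One concrete illustration of that (my own example, not from the post): a hard alpha test quantises a continuous coverage value down to a one-bit keep/discard decision, and jittering the threshold dithers it.

```c
#include <stdlib.h>
#include <stdbool.h>

/* A hard test like `alpha > 0.5` quantises continuous coverage to a
 * single bit. Jittering the threshold per fragment (stochastic or
 * "dithered" transparency) keeps the fragment with probability ~alpha,
 * so the expected coverage matches the continuous value. */
bool alpha_test_dithered(float alpha)
{
    float threshold = (float)rand() / (float)RAND_MAX; /* uniform 0..1 */
    return alpha > threshold;
}
```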
A lot of display hardware uses a combination of spatial and temporal dithering these days. You can sometimes see it if you look up close: it appears as very faint flickering "snow" (the kind you’d see on an old analog TV). Ironically, making this kind of dithering even less perceivable may turn out to be the foremost benefit of high pixel resolutions (beyond 1080p) and refresh rates (beyond 120 Hz), since raising those specs seems to be easier than directly improving color depth in hardware.