This is one of those things that's so difficult to explain to a mediocre to average graphic designer, much less a layperson. Now the trick would be to get them to read the whole thing...
> gifbuild -d colorBanding_255colorsOn1920pixels.gif | tail -n 200 | grep rgb | sort | uniq | wc -l
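A rough Python stand-in for that pipeline, for anyone without the giflib tools; this assumes Pillow is installed and counts the colours actually used in the decoded frame rather than palette entries, so the number can differ slightly:

    # Count the distinct colours present in the GIF's first frame.
    # Rough equivalent of the gifbuild | grep rgb | sort | uniq | wc -l pipeline.
    from PIL import Image

    im = Image.open("colorBanding_255colorsOn1920pixels.gif").convert("RGB")
    print(len(set(im.getdata())))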
What happened here is some sequence of color space conversions with gamma correction added/removed that ended up compressing the darks.
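A quick way to see how that crushes the darks is to round-trip an 8-bit sRGB grey ramp through 8-bit linear light and back; this is only a sketch of one plausible mishandling, not a claim about what the article's encoder actually did:

    # Convert each 8-bit sRGB grey to an 8-bit *linear* value and back again.
    # Quantizing in linear light discards most of the dark codes.
    def srgb_to_linear(v):
        c = v / 255.0
        lin = c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
        return lin * 255.0

    def linear_to_srgb(v):
        c = v / 255.0
        s = c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
        return s * 255.0

    survivors = sorted({round(linear_to_srgb(round(srgb_to_linear(v)))) for v in range(256)})
    print(len(survivors), survivors[:4])  # darkest surviving greys jump roughly 0 -> 13 -> 22 -> 28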
A true 256 color gray gradient will not show this much visible banding. Just throwing together a naive 256 color gradient in Gimp reveals some RGB triplets with mismatched values, suggesting the encoder is doing some odd color space conversion. Even then, it has much less perceptible banding when scaled to 1920 with nearest neighbor.
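If anyone wants to reproduce that comparison without Gimp, here's a minimal sketch (again assuming Pillow) that writes a clean 256-step grey ramp scaled to 1920 pixels with nearest neighbor:

    # Build a 256-pixel grey ramp (one pixel per level), then nearest-neighbour
    # scale it to 1920 px wide so each level becomes a flat ~7.5 px band.
    from PIL import Image

    ramp = Image.new("L", (256, 1))
    ramp.putdata(list(range(256)))
    ramp.resize((1920, 100), Image.NEAREST).save("gradient_1920.png")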
I suspect most of the examples stem more from improper gamma handling than from any true demonstration of the limitations of 8-bit channels.
The markup loads images from seemingly local URLs:
Are you behind a firewall?
To the others, I did disable ublock and ghostery but that didn't do anything for me.
I've used the following in the past with a lot of success for one-click fixing of Photoshop banding.
24-bit images are 'good enough' and dithering would just add another step to the asset workflow.
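For what it's worth, the extra step can be very small: add up to half a code value of noise before quantizing. A minimal sketch with NumPy and Pillow (the shallow 40-to-60 grey ramp is just an illustrative test pattern, not anything from the article):

    # Quantize a smooth, shallow gradient to 8 bits, with and without dither.
    # Adding <= 0.5 code values of noise before rounding trades banding for faint grain.
    import numpy as np
    from PIL import Image

    h, w = 256, 1920
    smooth = np.tile(np.linspace(40.0, 60.0, w), (h, 1))   # banding-prone ramp

    banded   = np.round(smooth)                             # straight quantization
    noise    = np.random.uniform(-0.5, 0.5, smooth.shape)
    dithered = np.round(smooth + noise)                     # same data, bands broken up

    Image.fromarray(banded.astype(np.uint8), "L").save("banded.png")
    Image.fromarray(dithered.astype(np.uint8), "L").save("dithered.png")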
I can remember the days with a 1MB video card and Windows 95 - switching between 16-bit color at 800x600, or 8-bit color at 1024x768. Resolution vs dithering...
But that’s currently only used by designers.
It's better than 24 bit color but doesn't solve the problem.
It might be a non-issue - either not noticeable or not distracting - and the real reason is that people just don't see banding as a problem, or that the per-frame rendering overhead is too much.
As this article shows, even with animated gifs you often get unstable dithering, with artifacts somewhere in the image.
I thought this was intentional and expected the author to mention it at the end of the video, but... nothing.
It would look better if the video had been uploaded at 1080p60 or something ridiculous, because that increases the bitrate budget and the smaller video macroblocks each cover less of the important image information. But that's kind of an awful hack.
You can observe this clearly with YouTube clips of classic games, like those at TASVideos ( https://www.youtube.com/channel/UCFKeJVmqdApqOS8onQWznfA ). The source games are 480i or 240p at best, but the quality at YouTube's 480p bitrate is significantly compromised in ways you can see. If you watch in 1080p it looks much closer to what you'd see in an emulator (or off the hardware when connected via RGB/component).
The examples in the article point to this obvious conclusion but don't quite state it explicitly. The relationship between space and quantization effects is demonstrated, for the physical-dimension sense of 'space', by the horizontal greyscale bar that stretches and contracts.
But what if you stretch and contract the dynamic range of your monitor itself? Each bit in the encoding space (naively) offers a doubling of dynamic range in the natural space, so even your 30 bit encoding can be stretched if you display it on a monitor that intends to output a contrast ratio many times greater than what we are used to.
For instance, imagine a monitor that could output light perceptually as bright as the afternoon sun, next to effectively infinite blackness. Will 30 bits be enough when 'stretched' across these new posts of dynamic range, or will banding (quantization) still be visually evident when examining a narrow slice of the space?
10 bits per channel will carry us for a while. Apparently Dolby bought BrightSide and now they are pushing for 12 bits. 16 bit ints will probably be enough for home use in practice. Internally, most games that do HDR rendering use 16 bit floats for their intermediate frame buffers. That format is popular in film production as well. I would be surprised if consumer display tech ever bothered to go float16-over-DVI. But, maybe it will get cheap enough eventually that we might as well have the best :)
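A back-of-envelope way to sanity-check those numbers, assuming a roughly 1% just-noticeable luminance step (a common Weber-fraction ballpark, not a figure from this thread): count how many code values a given contrast ratio needs, with perceptually (log) spaced steps versus linearly spaced ones.

    # Bits needed so no step exceeds ~1% brightness change, for a given contrast ratio.
    import math

    def bits_needed(contrast_ratio, weber=0.01):
        log_steps = math.log(contrast_ratio) / math.log(1.0 + weber)   # perceptual/log spacing
        linear_steps = contrast_ratio / weber                          # fixed-size linear steps
        return math.ceil(math.log2(log_steps)), math.ceil(math.log2(linear_steps))

    for cr in (1_000, 100_000, 10_000_000):
        log_bits, lin_bits = bits_needed(cr)
        print(f"{cr:>10}:1  ~{log_bits} bits log-spaced  vs  ~{lin_bits} bits linear")

By this crude estimate a log-ish transfer curve plus 10-12 bits per channel covers even extreme contrast ratios, while plain linear code values would need closer to 30 bits, which is roughly why the shape of the transfer curve matters as much as the raw bit count.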