Blur Radius Comparison (bjango.com)
123 points by zdw on Jan 13, 2024 | hide | past | favorite | 41 comments



The point about gamma correction is important. Many effects like scaling and blurring should occur in what's called linear color space, otherwise you add or remove energy, which can look very wrong. Normal RGB isn't linear.

GPUs and most game engines get this correct, maybe because in animated 3D graphics, you'd notice this immediately. But browsers are ridiculously bad at understanding this -- there was an article a while back that demonstrated some pathological cases.
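A minimal sketch of the problem, using the standard sRGB transfer functions (IEC 61966-2-1): averaging two pixels on the encoded values instead of on linear light produces a result that's noticeably too dark.

```python
# Sketch: why blending must happen in linear light, not on encoded sRGB values.

def srgb_to_linear(c: float) -> float:
    """Undo the sRGB gamma: encoded [0, 1] -> linear light [0, 1]."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(l: float) -> float:
    """Apply the sRGB gamma: linear light [0, 1] -> encoded [0, 1]."""
    return l * 12.92 if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

# Average a pure black pixel and a pure white pixel:
naive = (0.0 + 1.0) / 2  # done directly on the encoded values
correct = linear_to_srgb((srgb_to_linear(0.0) + srgb_to_linear(1.0)) / 2)
print(round(naive, 3), round(correct, 3))  # 0.5 0.735
```

The "correct" mid-point between black and white in sRGB is roughly 0.735, not 0.5, which is exactly the energy loss a naive blur or resize introduces at every edge.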


Browsers, chat apps, social media apps, Outlook and all other email clients, Teams, Zoom, Adobe Photoshop (really!), DaVinci Resolve (a colorist tool ffs)[2], and on, and on...

Developers assuming that RGB is a triplet of 8-bit numbers and that maths can be naively done to those numbers is one of my favourite examples of the Dunning Kruger effect.[1]

It's such a rarity to see a tool correctly implement color management or linear light for transforms by default that it always jumps out as far out of the norm.

> animated 3D graphics

You'd think so... but no. It's a persistent error in 3D games, and in 3D renderers like Blender as well.

A common question posted in game-dev forums is "textures look brighter/lighter/faded in game compared to Photoshop". This is because most image editors use the sRGB non-linear gamma, but the game engine uses linear light because it's essentially a physical optics simulator. Beginner game developers load the sRGB 8-bit pixel values and then use them as-is without first undoing the gamma to bring them into linear light space. This results in over-bright textures. The Quake engine and its derivatives were notorious for this kind of thing, made even worse by OpenGL accelerators like the 3dfx Voodoo card doing more incorrect things on top of that.

Similarly, "Filmic" mode in Blender should be renamed to "not broken" or simply "correct" color management mode.

[1] I'm guilty of making all of the mistakes I've listed here, in a 3D engine, a ray tracer, and several other graphics utilities. We're all beginners at some point!

[2] The "best" video colorist tool on the planet is not color managed on Windows or Linux. It outputs "whatever" to the screen and ha-ha... good luck. On an HDR screen, with HDR on, if you enable HDR in Resolve you get a washed-out incorrect mess. That just blows my mind.


I occasionally do creative work that involves color. I've previously tried to understand color science concepts, starting with gamma, but I just haven't been able to develop an internal understanding of how this all works. Is there any content about the subject that you would recommend for someone like me?


I like this page because it shows one of the common issues (ugly gradient fills), explains the cause, and provides a solution usable in any web project: https://bottosson.github.io/posts/oklab/

The main thing to remember is that mathematical operations like blur and resize ought to be done in a linear space, but everything else needs to be non-linear in some way.

- Nonlinear gamma improves storage efficiency because it allocates bits more effectively “where it matters”.

- CRT displays had a nonlinear response curve as well, and this has been baked into standards like sRGB and all video standards.

- The human perception of color is also nonlinear.

These gamma curves are all generally different now, especially with modern HDR display pipelines.

So a correct pipeline is:

1. Undo the source curve into a linear format. E.g.: professional camera footage is often recorded with a logarithmic curve.

2. Do special effects like blending, sharpening, resizing, etc. in linear space.

3. Show grading controls that use a perceptual space to the human user.

4. Encode the output with the appropriate gamut and gamma for the output format, such as sRGB, or some HDR format. Tag it correctly.
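The decode, process, re-encode steps above can be sketched for the simplest case (sRGB in, sRGB out, skipping the grading step). The `downscale_2x` helper here is a hypothetical name, assuming numpy and an 8-bit greyscale image:

```python
import numpy as np

def srgb_to_linear(c):
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(l):
    return np.where(l <= 0.0031308, l * 12.92, 1.055 * l ** (1 / 2.4) - 0.055)

def downscale_2x(img_u8):
    """Halve an sRGB image: decode (step 1), average in linear (2), re-encode (4)."""
    lin = srgb_to_linear(img_u8.astype(np.float64) / 255.0)
    h, w = lin.shape
    lin = lin.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.round(linear_to_srgb(lin) * 255.0).astype(np.uint8)

# A 1px black/white checkerboard should downscale to a visually mid grey.
checker = np.zeros((4, 4), dtype=np.uint8)
checker[::2, 1::2] = 255
checker[1::2, ::2] = 255
print(downscale_2x(checker))  # every pixel 188, not the naive 128
```

A tool that skips steps 1 and 4 would produce 128 here, which on screen looks much darker than the checkerboard it came from.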

As an example of what not to do, the chap who recently invented the “quite okay” image format[1] just ignored all this and encodes 8-bit RGB triples as-is with no indication of color space.

This is simple, clear, and wrong.

In the case of 3D graphics the RGB triples are a direct measurement of luminosity, and go to infinity. The common sRGB format goes from zero to one and represents the input to a function that maps that to an actual intensity. They’re not compatible! This is why mixing these in a game engine results in faded colours. Similarly, the linear output of a ray tracer can’t be sent to a screen as-is, it’ll look wrong.
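As one illustration (not the only way to do it), unbounded scene-referred values need a tone map before encoding; a simple Reinhard curve is sketched below. The function names are my own, the transfer function is standard sRGB:

```python
def reinhard(x: float) -> float:
    """Simple Reinhard tone map: compress scene radiance [0, inf) into [0, 1)."""
    return x / (1.0 + x)

def linear_to_srgb(l: float) -> float:
    return l * 12.92 if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

# Scene-referred values can exceed 1.0; naive clamping would discard all
# highlight detail above white, while a tone map compresses it smoothly.
for radiance in (0.18, 1.0, 4.0, 16.0):
    print(f"{radiance:>6} -> {linear_to_srgb(reinhard(radiance)):.3f}")
```

The point is that two separate steps are involved: compress the unbounded range into [0, 1), then apply the display's transfer curve; sending raw linear radiance to the screen skips both.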

Similarly, ask yourself: which red, green, and blue? Each format defines these differently! The “red” of Display P3, used by most Apple devices and HDR cinema projectors, is very visibly different from the “red” of sRGB!

These aren’t platonic ideals but points in space that can be measured, defined, and mapped into other color spaces.

[1] https://en.m.wikipedia.org/wiki/QOI_(image_format)


I hope this article series helps. My goal was to cover colour management in a way that everyone can understand. If it doesn’t, please feel free to ask questions (I’d love to know where the article fails).

https://bjango.com/articles/colourmanagementgamut/


To be fair, the solution in Resolve to get "correct" colors is to use DeckLink cards that bypass the entire OS graphics stack. It's only somewhat recently that Resolve started supporting a fullscreen view on a second monitor that isn't a DeckLink.


Blackmagic is a hardware company that also has some software they sell on the side, which is why they've steadfastly ignored fixing their software in any way that would eliminate the need to purchase their overpriced video output cards.

My laptop can output 8K HDR @ 60 fps, which no Decklink card can achieve, at any price. I can get a "clean feed" by simply ticking the "Override to reference mode" checkbox in the NVIDIA driver. Several people have confirmed that this produces bit-accurate output indistinguishable from what a Decklink card outputs.

> fullscreen view on a second monitor

Yes, but it's still not color managed or even enabled for HDR under anything but macOS! It just outputs the values as-is and lets the OS deal with it, which will treat them as Rec. 709 SDR.


> You'd think so... but no.

Well, your first counterexamples are beginner mistakes (obviously) and a graphics card that the younger half of HN readers have never heard of.

Blender getting it wrong would be inexcusable, though. Maybe I was overly optimistic.


Blender is a tool that gives you a lot of options and some of those would be "wrong", so it is partly up to the user to know what is correct. The defaults have been "correct" for a long time already though, at least regarding linear vs gamma space.


You’re conflating the tools with user errors. Photoshop, Resolve, and Blender aren’t doing it wrong because some users do it wrong. Gamma correction workflows are just prone to error, and sometimes people don’t know how to do it right. It’s not in general possible for the tool to handle it for you; the user needs to know the image’s history and sometimes has to convert between gamma encodings explicitly.

I don’t even know what you mean about Outlook and Teams, are you saying the text gamma is wrong? Or they display images wrong?

> Developers assuming that RGB is a triplet of 8-bit numbers and that maths can be naively done to those numbers is one of my favourite examples of the Dunning Kruger effect. [1]

RGB is a triplet of numbers, and you can do naive math on them, to convert to another color space for example. RGB in fact is a naive color space in the first place, not well defined, and does not have a defined gamma, so all math on RGB is de facto naive. sRGB is not RGB.

BTW, please don’t invoke Dunning-Kruger snidely to suggest someone’s wrong. It’s a bad meme because DK is widely misunderstood, and suggesting someone’s an example because they’re wrong, ironically, perpetuates the misunderstanding. People making assumptions, or doing something wrong, or even being incompetent, are not examples of Dunning-Kruger. People being over-confident is also not an example of Dunning-Kruger: the paper showed a positive correlation between confidence and competence for the tasks measured. Therefore, you cannot know when something is an example of DK or not.

The paper has rather poor methodology, full of priming and leading questions (including the title), and it went viral not because it’s right but because people love to believe the mistaken idea that someone who’s very confident might be incompetent, even though that’s not what the paper demonstrated. The Dunning-Kruger effect probably doesn’t even exist, and replication attempts have shown the opposite effect for more difficult tasks, like, say, engineering.

https://talyarkoni.org/blog/2010/07/07/what-the-dunning-krug...


>You’re conflating the tools with user errors. Photoshop, Resolve, and Blender aren’t doing it wrong because some users do it wrong.

I agree, but defaults also matter. I’d like our tools to not assume everyone’s an expert, which means the default should be to provide good results. Photoshop can definitely work with linear profiles, and the 32-bit mode is good when you need it.


Yeah, totally agree, defaults absolutely matter. And a bunch of the software @jiggawatts claimed handles gamma badly does in fact have reasonable defaults. The hard problem with images is that you need metadata, not just pixel values, and the metadata isn’t guaranteed to be there or to be accurate. It’s getting better quickly, I think, but it’s not Photoshop’s fault if the image doesn’t specify a color space or gamma, and not Photoshop’s fault if someone applies gamma first and then does a blur or paints a mask and then the blending isn’t what they expected.


Yep, yep, yep. Getting better quickly, but regressions are common, too.

(QuickLook on macOS hasn’t shown correct colours for a while now. FB12602125 and FB13475690 if anyone from Apple is reading this and wants to take a look!)


I agree but...huh? Chat apps? When are they dealing with linear color space? With video messages?


People send pictures to each other with chat apps. Think iMessage, WhatsApp, and Telegram. All of these will generate downscaled thumbnails. All of these have to deal with non-sRGB images from sources such as Apple devices that use Display P3 by default now. Etc…


Skype for Business is hopefully dead, but it used to do nearest-neighbor downscaling of profile images. Even chat apps can do stuff wrong with images if they try hard enough.


Pasting UI images into team chat apps is pretty common, and the apps I’ve used have some pretty serious and ongoing issues. Not fun when others are making decisions based on those images.


> Figma’s blurs have added noise, and the noise is biased to make the result one 8-bit value darker. The noise repeats every 256×256px. I’m not sure why their background blur never reaches black, given the others do.

Photoshop will also add noise when you convert from 16 bit to 8 bit. The reason for doing this is to prevent visible banding when blurring things, or just working with gradients (like blue skies). This is really important when printing things, because sometimes banding that wasn’t visible on your screen will show up clearly in your prints. Also really important if you scale the colors of a gradient or blurred region later… small steps between colors become larger and there’s no way to get back to smooth.

I only became sensitive to whether software does blur math in higher precision, and whether it dithers, after working with prints and wasting $100 a pop on a large-format print that came out with ugly unexpected banding. Last time I checked, ImageMagick’s blur and 16-to-8-bit conversion were terrible, which has long been one reason stuck in my head why I avoid ImageMagick for serious work.

I don’t know how effective adding noise in 8 bits is. It’s better to work in a higher bit-depth color space and only dither at the last second when converting the final image to 8 bits. Well, it’s better to never use 8 bits if you can, but that’s not always possible. Hopefully Figma’s noise is at least added during the higher-precision blur operation, before storing the results in 8-bit colors. Figma’s noise bias might be a simple mistake, and it might also be the reason they chose to clamp the blacks, if a developer was seeing -1s (which, depending on the implementation, might underflow and suddenly show up as a saturated bright color).
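A toy demonstration of why dithering at the final quantisation step matters (this is a generic sketch, not Photoshop's or Figma's actual algorithm; numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
grad = np.linspace(100.0, 104.0, 4096)   # a subtle ramp, in 8-bit code units

hard = np.round(grad)                                        # plain quantisation
dith = np.round(grad + rng.uniform(-0.5, 0.5, grad.size))    # noise, then round

# Plain rounding collapses the ramp into 5 flat bands; dithering replaces the
# bands with fine noise whose local average still tracks the original ramp.
true_means = grad.reshape(-1, 64).mean(axis=1)
err_hard = np.abs(hard.reshape(-1, 64).mean(axis=1) - true_means).mean()
err_dith = np.abs(dith.reshape(-1, 64).mean(axis=1) - true_means).mean()
print(len(np.unique(hard)), round(err_hard, 3), round(err_dith, 3))
```

The banding error of plain rounding is roughly a quarter of a code value per region, while the dithered version's local averages stay close to the true gradient; that residual banding is exactly what shows up on large-format prints.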


>I don’t know how effective adding noise in 8 bits is. It’s better to work in a higher bit-depth color space, and only dither at the last second when converting the final to 8 bits.

Yes, I completely agree. Injecting white noise is a pretty terrible way to avoid banding, given there are better dithering options available. The blur should be a shader, and the dithering could be done within the shader as long as it’s not error-diffusion dithering. I don’t really see why there would be a big perf difference. I think even pattern dithering with a small kernel would give better results than white noise.
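For illustration, pattern (ordered) dithering with the classic 4x4 Bayer matrix is shader-friendly because each pixel's threshold depends only on its own coordinates; a sketch in numpy rather than shader code:

```python
import numpy as np

# Classic 4x4 Bayer matrix, scaled to thresholds in [0, 1).
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

vals = np.full((8, 8), 100.5)      # ideal values, exactly between two 8-bit levels
thresh = np.tile(BAYER4, (2, 2))   # one fixed threshold per pixel position
out = np.floor(vals + thresh).astype(np.uint8)

# Half the pixels land on 100 and half on 101, in a fixed crosshatch pattern,
# so the local average is exactly 100.5 with no random noise involved.
print(np.unique(out), out.mean())  # [100 101] 100.5
```

Unlike error diffusion, no pixel depends on its neighbours, which is why it maps cleanly onto a per-fragment shader.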


It is quite bad, but not that long ago the blur of CSS box-shadow differed between browsers, too. I wrote a little workaround that rendered a blurred rectangle on a canvas and then picked the alpha value at a certain point to calculate a correction factor.

In that case, Chrome was the culprit. Because I had mentioned that we were working around the problem, the Chrome devs initially refused to fix the issue, but eventually they did the only right thing and fixed it.

Lots of insights in the comments for this issue: https://bugs.chromium.org/p/chromium/issues/detail?id=179006


Another really annoying fact about drop shadows is that Android chose to place the light source pretty much where your nose is. So when a view with a drop shadow moves closer to the bottom of the screen, its drop shadow increases visibility at the bottom of the view and decreases visibility at the top. This makes it impossible to match specs given by designers on Android.


That's why you ask your designer for an elevation value instead of a blur radius and offset. If you really want a shadow with arbitrary color/radius/offset like CSS box-shadow, you have to use paint.setShadowLayer() on a software-rendered canvas (a bitmap or a software layer). The exception is that such shadows for text can be rendered with hardware acceleration.


Yeah, that's normal practice and usually works OK. But sometimes the given elevation value gives a shadow much weaker than in the design tool. Especially for things like a bottom toolbar, where the top of the shadow is close to the bottom of the screen and is supposed to visually separate the toolbar from the content. Another toolbar using the same elevation elsewhere on the screen may look fine.

So when QA says the shadow is too weak, you have to tell your designer, and they have to come up with something different, or you have to increase the elevation to compensate, or fall back to a pre-rendered 9-patch, or something else... As I said, annoying :\


Can you show pictures or something of what you mean? I can’t find any mention of what you describe, and it would surprise me greatly to encounter a point light source rather than a distant light source, partly because of how fiendishly complicated it makes implementation (including just being impossible on the web), and partly just because we’re mostly used to distant light sources for illumination.


https://pasteboard.co/AUrdZZe5KxoO.png

In this screenshot all the views have the exact same spec: 8dp elevation.


OK, now I’m interested in whether there’s any x-axis cast (which would indicate a nearby point light source) or not (which would be simulating the physical impossibility of a light source both nearby and infinitely far away), and whether it’s rendering with actual perspective (without which it’s a cheap hack and physically impossible, but the difference is probably humanly indistinguishable, and honestly the whole thing is just a bit weird anyway given the orthographic top-down projection of the content, so maybe I shouldn’t even talk about the physical impossibility above).


Yes, the same goes for the X axis as well. Here's a screenshot showing that: https://pasteboard.co/O5H4djIUMoG6.png


Cool, thanks. I guess https://graphicdesign.stackexchange.com/questions/80644/wher... must be obsolete, then (from the linked video: “We built a system that enforces that the light comes in at 45 degree angles; that helps keep the shadows consistent from the top to the bottom of the screen”). Wonder why they did that. Seems like the source is approximately above (50%, 0), though I can’t be bothered working out if that 0 is exactly the top of the screen.


Do you see this in all apps or just custom/particular ones? Can you control where the key light is at all?

It would be surprising for the key light to be so close to the screen (rather than nearer infinity), given that the design document linked elsewhere seems to expect consistency at particular elevations.


It's the behavior for all apps and I don't think it can be customized. There are some APIs to affect the spot shadow color and ambient shadow color (from API 28), but no way to affect this offset / light source position AFAIK.


I think they're referring to this:

https://m2.material.io/design/environment/light-shadows.html

Notice how there are actually two shadows that must be combined. The point light source shadow's rendering depends on the object's position on screen.


They’re two shadows, but nothing calls them point sources; I claim that any reasonable implementer would from such descriptions absolutely presume them to be distant (as in, never have it even occur to them it might be otherwise). The only detail that seems confusing and could support otherwise is the sentence “On the web, shadows are depicted by manipulating the y-axis only.”

(I’m using roughly SVG’s terminology: fePointLight has x, y and z, which can be approximately infinitely far away but it’ll only be approximately so, and feDistantLight, which has azimuth and elevation, and projects from infinity.)


A point light source can be at infinity. I don't think that design document implies that shadows need to change based on screen position. Note that "elevation" is about how far 'off' the background the element is pretending to be.


A designer complained to me that her Photoshop blur requirements did not match my iOS implementation. I had to manually adjust my code until it matched precisely, which was a pain. A few years later she became a developer, and admitted she had been wrong to demand an exact match between platforms.


And now you have some scaling factor values to match things easily.


Interesting, because in the early CSS3 days I always had the impression that Firefox had much nicer rendering of shadows etc. than Chrome, which looked rather cheap and fast. Same for font faces.

They're probably aligned better now.


box-shadow on Chrome used an incorrect blur radius until about five years ago. Then they finally fixed it.


I wonder if that was on purpose - larger blur radii are generally more expensive to render.


Are there differences between implementations doing optical (linear-light) vs sRGB blur?


SVG filters default to linear color (but SVG gradients etc. default to sRGB), so the 'SVG fe' results are all in linear light. The color space used changes where the midpoint is, so nothing else here uses linear color (unless they've decided to fudge that to match user expectations).
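The midpoint shift is easy to see numerically: blur the same black-to-white edge once on the encoded values and once in linear light, and compare the value at the edge centre. A 1-D sketch assuming numpy (the helper names are mine):

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def srgb_to_linear(c):
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(l):
    return np.where(l <= 0.0031308, l * 12.92, 1.055 * l ** (1 / 2.4) - 0.055)

edge = np.r_[np.zeros(50), np.ones(50)]   # hard black->white edge, sRGB-encoded
k = gaussian_kernel(sigma=4.0, radius=12)

blur_srgb = np.convolve(edge, k, mode="same")
blur_lin = linear_to_srgb(np.convolve(srgb_to_linear(edge), k, mode="same"))

# Near the edge both blurs mix black and white roughly 50/50, but re-encoding
# the linear-light mix produces a much lighter sRGB value, so the visual
# midpoint of the transition sits in a different place.
print(round(float(blur_srgb[50]), 3), round(float(blur_lin[50]), 3))
```

So even with identical kernels, a linear-light blur and an sRGB blur produce visibly different gradients, which is why the choice of space shows up in side-by-side comparisons like the article's.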


Too bad there is no support for lens blur.



