Really nice article! Succinctly demonstrates the problem with not using premultiplied alpha.
> As an Artist: Make it Bleed!
> If you’re in charge of producing the asset, be defensive and don’t trust the programmers or the engine down the line.
If you are an artist working with programmers that can fix the engine, your absolute first choice should be to ask them to fix the blending so they convert your non-premultiplied images into premultiplied images before rendering them!
Do not start bleeding your mattes manually if you have any say in the matter at all, that doesn't solve the whole problem, and it sets you up for future pain. The only right answer is for the programmers to use premultiplied images. What if someone decides to blur your bled transparent image? It will break. (And there are multiple valid reasons this might happen without your input.)
Even if you have no control over the engine, file a bug report. But in that case, go ahead and bleed your transparent images manually & do whatever you have to, to get your work done.
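The conversion itself is cheap. A minimal sketch in numpy, assuming float straight-alpha RGBA in [0, 1] (names are mine, for illustration):

    import numpy as np

    def premultiply(rgba):
        # Straight alpha -> premultiplied: scale RGB by A, once,
        # before any filtering or blending happens downstream.
        # rgba is an (H, W, 4) float array in [0, 1].
        out = rgba.copy()
        out[..., :3] *= out[..., 3:4]
        return out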
Eric Haines wrote a more technical piece on this problem that elaborates on the other issues besides halo-ing:
http://www.realtimerendering.com/blog/gpus-prefer-premultipl...
I'm not sure I understand your concern. If the software converts all your assets into a premultiplied form, the bleeding you applied won't hurt anything even if it doesn't help. Yes, it's extra work that shouldn't be necessary - but we often find ourselves living in an imperfect world.
I completely agree that premultiplied alpha should be used everywhere. I'd even go a step further and say that you should use high-bit-depth linear values too, but that's a topic for another day.
Just pointing out that attempting to solve the real problem should be tried first, before jumping to work-arounds.
If the software converts to pre-multiplied, then there'd be no halo problem and no bleeding necessary, right?
You're right that bleeding won't hurt assets if you're comping, but it will hurt them if you skip the premult conversion and then do texture filtering or mipmapping or blurring.
My concern with bleeding is the article's suggestion to use it as a first resort, rather than a last resort (as an artist). It's a hack that totally works in a lot of cases, but it's still a hack. I've watched artists in film and games use random combinations of bleeding, (un)pre-multiply, gamma, and other stuff whenever something goes wrong with matting, and often it's not the right solution. A lot of people are scared of understanding premultiplied alpha - and the technical name isn't doing anyone any favors - and instead of figuring out the right solution they try every combination of hacks until it works. General misunderstanding and superstition about premultiplied alpha is the most common reason I've seen for people using un-premultiply nodes in production.
A 16bpp linear pre-multiplied format would be awesome. sRGB is a pain to use in practice:
- Slow to convert to linear colour space (it requires a pow; see the sketch below), except...
- GPUs use an 8-bit lookup table to convert an input sRGB value to an output linear value.
- This doesn't work for more than 8 bits, as the table gets excessively large very quickly.
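For reference, the decode in question, per the sRGB spec (a per-channel sketch; this is the pow the GPU lookup table avoids):

    def srgb_to_linear(c):
        # c is one sRGB-encoded channel value in [0, 1];
        # constants are from the sRGB transfer function.
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4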
It's a pity PNG doesn't have a flag to mark the image data as being pre-multiplied.
That little ditty is also very likely why PNG has zero uptake in visual effects and CGI.
Associated (aka premultiplied) alpha is the _sole_ means to embody both occlusion and emission. Unassociated (aka straight or key) alpha cannot represent these facets.
Consider a candle flame that exists as mostly emission and low to no occlusion. With associated alpha, you can use zero alpha triplets with non-zero emission RGB to represent this real-world scenario. With unassociated alpha? Impossible.
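A sketch of why that works, using Porter-Duff over on float premultiplied RGBA (values illustrative): the zero-alpha flame leaves the background fully visible while still adding its light.

    def over(fg, bg):
        # Porter-Duff "over" on premultiplied RGBA tuples
        out_a = fg[3] + (1.0 - fg[3]) * bg[3]
        out_rgb = tuple(f + (1.0 - fg[3]) * b for f, b in zip(fg[:3], bg[:3]))
        return out_rgb + (out_a,)

    flame = (1.0, 0.6, 0.1, 0.0)   # emits light, occludes nothing
    wall  = (0.2, 0.2, 0.2, 1.0)
    print(over(flame, wall))        # ~(1.2, 0.8, 0.3, 1.0): wall fully visible, plus the glow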
> That little ditty is also very likely why PNG has zero uptake in visual effects and CGI.
The main reason CG studios can't use PNG is that both renderers and compositors always output pre-mult images. Yes, you can choose to un-pre-mult them after rendering, but the native result of a blending operation is always a pre-mult color, regardless of the input sources. Unpremultiplied blending still results in premultiplied colors.
Since everyone knows that un-premultiplying (dividing) is to be avoided at all costs, it means you can't render or comp something and then save the file in PNG.
> Associated (aka premultiplied) alpha is the _sole_ means to embody both occlusion and emission.
I would suggest avoiding thinking of image colors as emissive. That's a material property, and using RGBA to encode material properties is only something you'd do if you were stuck in a weird fixed-function pipeline with no choice, or if you were really really low on disk space. Otherwise, emission colors go in their own separate emission channel that doesn't have an alpha value.
Feel free to reference the original Porter Duff paper regarding "luminescent pixels" or Alvy Smith's opinion on the matter as relayed by Zap Andersson in the legendary Adobe thread.
Remember that a ray tracing engine uses associated alpha as that is the sole format it can generate. Only associated alpha models emission and occlusion.
Yes you're right; I wasn't arguing with you about that. I always feel like calling it additive rather than emissive. But just because you can represent additive colors in premult images doesn't mean you should, and I'd speculate wildly that it occurs less often in production than halo problems. Someone who writes lens flare and rainbow shaders is going to scold me for saying that though...
In the context of the OP's article, and of artists who paint images with transparency in them, worrying about emissive colors isn't really an issue. Artists very rarely paint pre-mult images; generally speaking, they can't work with premult images. No doubt a few people who know what they're doing do it, but I can't personally say I've ever seen an artist-painted premult image with emissive colors, nor do I recall ever seeing a software-rendered layer with emissive colors either. Is this common now? I've been out of film & games for a few years now.
I'm already familiar with everything you referenced; and I can vouch that it's all very good stuff so thanks for sharing, especially the Adobe thread. I hope others here benefit. It's amusing that an entire industry knows who Chris Cox is because of this thread, right? :P
> Consider a candle flame that exists as mostly emission and low to no occlusion.
I don't think associated alpha helps much here. You can special case pixels that are doing pure emission, but when a pixel is doing both you need the lighting to affect the color of the occlusion but not affect the color of the emission.
But that means you need to dedicate specific objects to being purely emissive, to avoid blurring at the boundaries.
And once you've separated the objects, you don't really need to have purely-emissive textures and non-emissive textures in the same file, with the same exact format. You might as well store emissive textures as RGB and save on memory.
Using a different format can even benefit you. You're less likely to accidentally blend emissive and non-emissive pixels, and you're less likely to accidentally apply lighting calculations to emissions.
Except associated alpha models emission and occlusion via the operation. You don't need zero alpha either, as any ratio can occlude partially and emit partially as well.
You can't do both in a single pixel, unless you turn off lighting entirely and make everything fullbright.
Let's have a blue-tinted pane of glass, (0, 0, .5, .5), and a red-tinted pane, (.5, 0, 0, .5).
Then a blue glow, (0, 0, .5, 0). And a red glow, (.5, 0, 0, 0).
If you have your blue glass glow red, and your red glass glow blue, both combinations come out as (.5, 0, .5, .5).
Under 99% of lighting conditions, it will look wrong. If you put it in darkness, it will look overwhelmingly wrong.
Even if occlusion and emission are the same color it doesn't work. A dark blue object that glows brightly, and a bright blue object that glows dimly, both will have the same RGBA.
Objects that block a certain amount of light, and then emit a certain amount of light: You can only make that simplification if the entire world is evenly lit by white light.
Blur the entire matchstick into one pixel. Under white light it has to emit brown plus yellow. In a dark room it has to emit just yellow.
It's a clever technique but I can't figure out any way it's not fundamentally incompatible with having lighting. If one pixel has to both be lit and emit extra light, you need two RGB values.
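For concreteness, here's the pane/glow collision above checked with premultiplied over (a quick sketch, compositing each glow over its pane):

    def over(fg, bg):
        # premultiplied Porter-Duff "over"
        a = fg[3] + (1 - fg[3]) * bg[3]
        return tuple(f + (1 - fg[3]) * b for f, b in zip(fg[:3], bg[:3])) + (a,)

    blue_glass, red_glass = (0, 0, .5, .5), (.5, 0, 0, .5)
    blue_glow,  red_glow  = (0, 0, .5, 0),  (.5, 0, 0, 0)

    print(over(red_glow, blue_glass))   # (0.5, 0, 0.5, 0.5)
    print(over(blue_glow, red_glass))   # (0.5, 0, 0.5, 0.5) -- indistinguishable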
RGBA in associated is unique in that it represents distilled geometry. If you were to zoom in on a pixel that had a degree of geometry occluding it, the coverage is represented by the alpha ratio, while the RGB is purely emission.
It is complementary to lighting.
The math works because of the form of the alpha over formula: FG.RGB + ((1.0 - FG.Alpha) * BG.RGB), which results in a pure add in the extreme case of alpha being zero, and in ratios of addition when non-zero.
Within the limitations of the RGB model, it works extremely well.
The folks cited are extremely adept imaging people, covering years of experience and several Academy Achievement Awards.
They're plenty smart but they're talking about a different use case.
You don't know what color will be emitted unless you know what light is hitting the surface.
A dark glowing surface and a bright non-glowing surface have the same emissions under white light, but different emissions under other kinds of light. The method you're talking about requires the emissions be precalculated, which means you can't apply lighting at runtime.
I agree with you: to do it properly you need two different RGBA objects, and the emissive one will have A=0. But the same format suffices to represent both.
I've been throwing around the idea of making a new intermediate file format which can contain only pre-multiplied transparency. But do you know of any file format that already supports this?
TIFF can do premultiplied alpha. The problem with TIFF is that it's such a grab bag of options, it makes it difficult to know what you're dealing with.
I had to do a similar thing with the Gitlab logo[0] (and it's still messed up on their marketing sites). There are limitations to the SVG renderers in browsers (and elsewhere); it is kinda sad that we don't have AGG-quality rendering as a standard feature of browsers these days. :-(
... which reminds me of similar notches from typography at small sizes. Interesting how physical bleeding ink and rendering issues result in similar workarounds.
Yeah! I recently built a website that heavily uses Illustrator assets exported as svg. I was happy to have read that explanation before pulling my hair out over weird artifacts.
> Even with an alpha of 0, a pixel still has some RGB color value associated with it.
Wish the article was more clear as to why this happens. Let me elucidate: this happens because, per the PNG standard[0], 0-alpha pixels have their color technically undefined. This means that image editors can use these values (e.g. XX XX XX 00) for whatever -- generally some way of optimizing, or, more often than not, just garbage. There are ways to get around this by using an actual alpha channel in Photoshop[1], or by using certain flags in imagemagick[2].
While what you wrote is correct, it's not actually the problem being described in the posted link. The problem in the posted link is just as relevant if the alpha value was "01" for "you pretty much can't see it but it's meaningfully there", and has to do with image filtering artifacts as opposed to purely data representation.
But it's specifically a problem with PNG because the spec explicitly allows encoders to substitute their own RGB values for any transparent pixel. Most other formats are expected to store the values you give them without modification.
No, the problem discussed in the article can occur with any image format (that supports alpha) and occurs later in the pipeline so it really doesn't matter how the texture was stored on disc. Or even whether it was stored - this could happen with procedurally generated textures.
The article discussed one possible workaround, which was to specify RGB values for the fully transparent pixels in the artwork. If the PNG tool you're using substitutes its own RGB values in those transparent pixels, as allowed by the spec, the workaround doesn't work. That's why PNG in particular is a problem.
If you follow the recommendation at the end to use premultiplied alpha for all computations, this becomes a moot point.
I remember this giving me headaches years ago when I tried to save color data in a PNG with 0-alpha values that contain color. I needed the color and the alpha and thought it would make sense to save both in one image, basically, but nope. This was with GIMP and I remember looking it up and the GIMP devs insisting it was correct behavior by the PNG standard even when it made oh-so-much sense to at least give an option to bypass this shit for niche cases.
Fought with this recently while creating PDFs with image overlays; I ended up having to draw the underlying image into the top image's pixels at full alpha to avoid aliasing on the edges.
It's extremely useful to take advantage of the fact that you can store RGB values in 0-alpha pixels. I've written some pretty simple but powerful shaders for a game I'm working on by utilizing transparent pixels' "extra storage", which allowed for either neat visuals or greatly reduced the number of images required to achieve a certain effect. For instance, I wrote a shader for a character's hair that had source images colorized in pure R, G, and B and then mapped those to a set of three colors defining a "hair color" (e.g. R=dark brown, G=light brown, B=brown; rough sketch below). If I didn't have the transparent pixels storing nonzero RGB values, the blending between pixels within the image would be jagged and the approach would have been unacceptable for production quality, leading to each hair style being exported in each hair color.
As a total side note, I really enjoyed the markup on the website. Seeing the matrices colored to represent their component color value is really helpful for understanding. Nice job, author!
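Roughly what that mapping looks like, as a hypothetical numpy rendition (function and palette names are mine, not the actual shader):

    import numpy as np

    def colorize_hair(src, dark, light, mid):
        # src is (H, W, 4) floats; its R/G/B channels are masks that
        # select between three palette colors (each a length-3 RGB array).
        r, g, b = src[..., 0:1], src[..., 1:2], src[..., 2:3]
        rgb = r * np.asarray(dark) + g * np.asarray(light) + b * np.asarray(mid)
        return np.concatenate([rgb, src[..., 3:4]], axis=-1)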
Paint Shop Pro had some picture frames that were fully transparent in the middle. As an Easter egg, a couple of those frames had pictures of the staff in those transparent areas and you could use the unerase tool to see them.
I'm a bit confused about your explanation. Are you using the transparent pixels to store extra data, or are you using the alpha channel to store extra data? And if you're using the transparent pixels, then what data is it and how do your shaders know how to find the transparent pixels?
I don't like this article because it blames the wrong people and buries the real solution, premultiplied alpha, at the bottom. Already there are many comments here that are confused because they didn't even see the premultiplied alpha part of the article.
The issue with the Limbo logo was not that the source image was incorrect. The image was fine. The blending was incorrect because the PS3 XMB has a bug. Not using premultiplied alpha when you are doing texture filtering is a bug.
I don't think that's a fair summary of the layout of the article. Once the article reaches the "How to Prevent This Issue" section there's far more space given to using premultiplied alpha than manually bled images, and it's not blaming anyone. It just tells an artist how they could fix it and tells programmers how they could fix it. Nobody is blamed and the "correct" solution isn't buried at all.
Exactly. As I was reading the article I was thinking to myself 'this guy needs to learn about pre-multiplied alpha', and then he gets to the end and says, btw use pre-multiplied alpha to avoid this problem entirely.
Premultiplied alpha results in less color depth, though. If my alpha is 10%, then my possible RGB values become 0-25. Even if I multiply by 10, I still lose the maximum possible values 251-255, and only values 0, 10, 20, 30... 250, are possible.
The correct solution is to pay close attention to all of the factors... and to be ESPECIALLY aware of pixel scaling. Provide your RGBA textures at the 1:1 pixel scale they will be rendered (or higher!) if at all possible.
> Premultiplied alpha results in less color depth, though.
That doesn't matter unless you color-scale the image (like multiply by 2 to make it brighter) before displaying it. Otherwise, the depth is at the correct resolution for display.
And premultiplied alpha should be used for final display, not just for the halo-ing reasons demonstrated here, but for lots of reasons.
Artists should generally be working in un-premultiplied alpha though, and the premultiplication is something that should happen right before an artist image is used. Artists shouldn't work in premultiplied images (and they generally don't) because of the color depth issue, and because it's crazy to paint premultiplied transparency manually.
> that doesn't matter unless you color-scale the image (like multiply by 2 to make it brighter) before displaying it.
It does if you stack several image layers. Let's say I have a particular color tone. Then I use that as background color. And I also stack 10 layers of the same color with alpha 0.05 on top of that. If you use premultiplied colors then this will actually result in a different color.
Due to rounding, those colors often tend to be more greyish too. So if you draw some vector graphics and have multiple basic shapes with semi-transparent edges (aliasing!) stacked on top of each other, you can get some ugly fringes.
Yeah true, I suppose there is rounding error. Is this something that has actually happened to you, or are you saying it's a problem in theory? I'd be hard pressed to come up with a real-world use case for 10 of the same color being comped. For this to be a problem, the visible elements being comped would have to be exactly the same color, without any gradient or noise at all...
Even if it did happen, the error is bounded - with 10 layers of the same color, the maximum error in any channel is 5, and the average error is 2.5. It's pretty hard to say it would be wildly and noticeably different to most people even with 8 bit color channels, but I certainly have met some film directors and CG supervisors who were very color sensitive. It would be literally invisible in anything higher than 8 bits.
I'm curious -- why do you say the rounded colors would tend toward gray? Rounding error can happen in both directions, so I would expect rounding errors to cause a uniformly distributed error -- some colors would get slightly more saturated, some less, some of them would shift hue, and some would be unaffected.
You lost me on the edges part & aliasing. Rounding errors will not be visible as fringes, so if you're seeing fringes, you have some other problem... perhaps failure to pre-multiply! ;) Tell me more about stacking shapes and getting fringes. Is this stacking 10 of the same shape in the same place, or at the edge crossings of different shapes? What kind of fringes are you seeing?
Anyway, I've never witnessed a case in 20 years of film & game production where rounding errors caused a visible, detectable problem, but I'd love to know if there real cases where it's an issue!
> Is this something that has actually happened to you
In a toy project where I tried to automatically generate SVG shapes and render them to an HTML canvas (which uses 8 bits per channel with premultiplied alpha). The assembled shapes occasionally overlapped, and it did lead to visible color inconsistencies.
It's also a problem when working with PNGs. When you put your PNG pixels into a premultiplied space and pull them out again, you actually lose information to rounding, which negates the benefits of a lossless format.
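A quick sketch of that round-trip loss in 8 bits (alpha stored as 0-255; ~10% opacity):

    def to_premult(c, a):
        return round(c * a / 255)        # quantize after premultiplying

    def from_premult(p, a):
        return round(p * 255 / a) if a else 0

    a = 25                               # ~10% opacity
    for c in (195, 200, 205):
        print(c, "->", from_premult(to_premult(c, a), a))
    # 195 -> 194, 200 -> 204, 205 -> 204: distinct inputs shift and collapse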
> You lost me on the edges part & aliasing. Rounding errors will not be visible as fringes, so if you're seeing fringes, you have some other problem... perhaps failure to pre-multiply! ;)
Well, if you've got a solid shape then there's no alpha. But the aliasing at the edges introduces partially transparent pixels. If you then pull out the pixel data and apply it to a different canvas you get mismatched colors.
I only spent a few hours on it, so I don't recall all the details, but my conclusion was that it is inadequate for image manipulation since it is lossy and lacks precision.
The article suggests why 1:1 won't be adequate, there will still be interpolation when the position isn't on a pixel boundary.
In practical applications the reduced color depth isn't a problem; your ability to discriminate the colors goes down as the transparency goes up. It would only be a problem if you were trying to convert the pixels to something less transparent than they started.
If you were just doing image compositing, would this be an issue? I can easily imagine how it would adversely affect a 3d game where the images are used as texture maps and the values of the diffuse texture map may change from other lighting contributions and shaders. It seems like in that case doing some kind of HDR/higher-precision texturing would be good, right?
Disclaimer: not a graphics programmer just a hobbyist seeking clarification. :)
In practice it's the other way around. Pre-multiplying alpha adversely affects image editing more than 3d rendering.
For image editing, the problem is that you might send the same pixel through many different operations, and if you flatten something and further manipulate, then the rounding error will continue to accumulate. So losing granularity early in the process due to alpha can result in accumulated rounding error.
In 3d graphics, the texture typically goes through a very predictable and short number of transformations, and they will rarely need to "stretch" the color range of the transparent pixel. When you alpha blend with pre-multiplied alpha, you're literally just adding the whole color value. And then you're slapping a whole lot of other colors onto it in the lighting/shadow/other passes, so the subtle nuances of that mostly transparent window get lost in the bustle.
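Assuming an OpenGL renderer (PyOpenGL names here, and a current GL context), "just adding the whole color value" is the standard premultiplied blend state:

    from OpenGL.GL import (glEnable, glBlendFunc, GL_BLEND,
                           GL_ONE, GL_ONE_MINUS_SRC_ALPHA)

    glEnable(GL_BLEND)
    # Source color is already scaled by its alpha, so it is added as-is;
    # the destination keeps only the uncovered fraction.
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)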
Interesting! Thanks for clearing that up. I suppose if you have a strange shader that does some non-linear stuff it also might be more obvious to the programmer to use something higher precision or tweak how the math is done.
If you need to scale that up for some reason or are working with 10-bit displays then just use RGBA16F textures. No reason to stick to 32-bit here. GPUs are perfectly happy to work with half-float textures.
Although if you want to keep 32-bit textures you also shouldn't be doing linear premultiplication like the article suggests, you should be using sRGB instead.
Entirely false in that it is indeed a completely valid combination. See luminescent pixels in the Porter Duff paper. It represents a pixel that has no occlusion and is emitting.
You also have a similar problem when you render opaque, rectangular images without the clamp edge mode, and the renderer is in tiling mode, so the borders wrap around when your picture is halfway between pixels and become a mix between the top/bottom or left/right colour, corrupting the edges. Easy to fix, but annoying until you get what it is that corrupts your edges.
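The fix, sketched for OpenGL via PyOpenGL (assuming that's the API in play; texture_id is a hypothetical handle):

    from OpenGL.GL import (glBindTexture, glTexParameteri, GL_TEXTURE_2D,
                           GL_TEXTURE_WRAP_S, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)

    def clamp_edges(texture_id):
        # Stop the sampler from wrapping around and mixing opposite borders
        glBindTexture(GL_TEXTURE_2D, texture_id)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)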
Also: "The original color can still be retrieved easily: dividing by alpha will reverse the transformation."
C'mon, you can't say that and then make an example with alpha=0. Do you want me to divide by zero? The ability to store values in completely transparent pixels is lost.
It would be more accurate to say that the original color can be approximated. The approximation quality goes down as the transparency goes up, until at zero it gets lost entirely.
You're right. I didn't want to mention precision because it's a whole other can of worms, but one worth knowing how to deal with. I was looking at it from a mathematician's point of view, using rational numbers.
While reading this article, it struck me that the amount of "useless" data increases as the alpha value approaches 0. For example: in a pixel with rgba values of (1.0, 0.4, 0.5, 0.0), the rgb values are redundant. Is there a color format that would prevent this redundancy? Perhaps by some clever equation that incorporates the alpha values into the rgb values? I don't think premultiplied alpha would work, because you still need to store the alpha value for compositing later...
Would you count compression schemes as valid answers to your question, or are you asking about the raw data not having any redundancy?
There're no channel formats or clever equations I'm aware of that avoids the redundancy part. But your question totally reminds me of Greg Ward's RGBE format, which is a high-dynamic range format stored in 8 bits per channel, with an extra 8 bit exponent channel. http://www.graphics.cornell.edu/~bjw/rgbe.html
RGBE isn't doing exactly what you're asking about, but it's similar in a way. Instead of storing 16 bits separately for each channel, what you really get is 16 bits (sort of) for the brightest channel, and the other 2 channels in 8 bits each - discarding the extra bits. You can't see them because the bright channel will prevent you from seeing anything super dark in another channel, so you can discard the extra color resolution in the darker channels.
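A sketch of the encode, after Ward's reference rgbe.c (the shared exponent comes from the brightest channel):

    import math

    def float_to_rgbe(r, g, b):
        # Mantissas for all three channels are scaled by the brightest
        # channel's power-of-two exponent; dim channels lose precision.
        v = max(r, g, b)
        if v < 1e-32:
            return (0, 0, 0, 0)
        m, e = math.frexp(v)             # v == m * 2**e, with 0.5 <= m < 1
        scale = m * 256.0 / v
        return (int(r * scale), int(g * scale), int(b * scale), e + 128)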
If you count compression, then pre-multiplying would help the situation. Anytime the alpha value gets low or goes to 0, the color channels do too, so run length encoding or DCT or whatever else will collapse large transparent areas into all zeroes.
PNG uses compression to reduce the redundancy. The average number of bits per pixel is much less than 32. A=0 is a special case, and many PNG encoders will take advantage of that by setting RGB to some constant so it compresses better.
In premultiplied alpha, you can think of all the legal (non-emissive) colors as a triangle. At an alpha of 1, all colors are legal, tapering off as alpha decreases until you reach an alpha of 0, where no colors are legal. To turn the triangle into a square, you could scale it up by two, cut off the top, and rotate the top over into the empty space (like this: https://i.imgur.com/6hjNe7X.png). That would give you twice the colors you had before.
Premultiplied alpha is also more "correct" in that it separates how much each pixel covers things behind it (the alpha value) from the amount of light it is reflecting or emitting (the color values). These two values should really be interpolated separately, and that's what premultiplied alpha gives you.
This is actually a very common problem with 3-D stuff and transparency in textures. This isn't an issue with the colors of the pixels themselves, it's an issue with texture filtering. nVidia has a pretty good explanation as it applies to games and 3d graphics: https://developer.nvidia.com/content/alpha-blending-pre-or-n...
Say you have two adjacent pixels using floating point RGBA values of (0,0,0,0) and (1,1,1,1), and you apply it to a 3-d shape. Because of the rasterization algorithm, you will be sampling weighted averages of the two pixels, either because you're scaling up and need to interpolate, or because you're scaling down and need to average.
The average of (0,0,0,0) (fully transparent) and (1,1,1,1) (opaque white) is (0.5,0.5,0.5,0.5), a half transparent gray. But you'd intuitively expect (1,1,1,0.5), half transparent white. This is the essence of the problem. The fix is to make sure that your transparent pixel was (1,1,1,0) and not (0,0,0,0).
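In code, that filtering step is just (a sketch of the two cases):

    def avg(p, q):
        # what a box/bilinear filter does halfway between two texels
        return tuple((a + b) / 2 for a, b in zip(p, q))

    print(avg((0, 0, 0, 0), (1, 1, 1, 1)))  # (0.5, 0.5, 0.5, 0.5): gray halo
    print(avg((1, 1, 1, 0), (1, 1, 1, 1)))  # (1.0, 1.0, 1.0, 0.5): the expected white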
You're mostly right. The industry-wide accepted answer is to multiply the opacity into the colors before interpolating, so the formula would be (R1xT1 + R2xT2)/2 for the average, and then to do later transparency blending as if the opacity term was already multiplied in.
Which means that your output pixel can't be both white and low transparency. I guess it's a typical graphics 'close enough and better performance' outcome (where mine is marginally more difficult to calculate and needs some more logic to avoid divide by 0).
> Which means that your output pixel cant be both white and low transparency.
Did you mean low opacity?
If so, that's not quite right. (0.1,0.1,0.1,0.1) premultiplied is the same color as (1,1,1,0.1) "normal." They're both white and low opacity, just in different representations. You don't actually lose much granularity because the graphics card has to multiply the color channels by the opacity value sooner or later.
Separately, your formula doesn't work for interpolation. It works for averaging, but in order to do texture sampling, you need interpolation, so your formula can't actually be used unless you can adjust it to deal with interpolation gracefully.
Thanks for the answer - I did not know of this premultiplication. This makes it effectively the same, or very close? Assuming the output transparency/alpha is (T1+T2)/2, dividing by this gives the difference.
I don't quite get your point on interpolation, but I'll look it up when I have the chance.
A better interpolation would do the premultiply for you automatically to get the proper result. Since that requires a couple of extra multiplies and a divide, it gets skipped most of the time.
Each pixel is defined by 32 bits -- 8 R, 8 G, 8 B, 8 A. Even if alpha is 0, there has to be information stored in Red, Green, Blue. There can't be "no information", because then a pixel is not defined by 32 bits of information. There is always a "color" (RGB values) for transparent pixels. Some editors will set the RGB values to 0 or 255 when fully-transparent pixels are saved/output. Others don't.
As the author mentions, certain resampling algorithms might be naive or plain ignorant about how to resample images with potentially transparent pixels. Should transparent pixels not count to the final pixel? Should all pixels be averaged? Should the output pixel be the median or mode value? If the image is being resampled to 1/3 its size the resampling can be very cheap if only the middle of the 3x3 pixel cluster is selected as the output value.
"And be careful to export the RGB values of transparent pixels when you save to PNG for example, many programs will by default discard transparent pixel RGB data and replace it with a solid color (white or black) during the export to help with the compression."
Here is some information for Photoshop and a plugin you can use:
Regardless of how you perceive making it, the representation is still (nearly always) a color underneath an alpha channel.
If you inspect the image in a full-featured editor like Photoshop or GIMP, you can inspect the channels individually or remove the transparency entirely to see this fact.
I think with most GUI programs, the eraser and cut tool will leave color as what it was before it became transparent.
EDIT: saurik has a good point that I forgot about -- many editors may actually throw the colors away when you export unless you ask them not to.
An eraser tool and a paintbrush tool do essentially the same thing—overwrite an area of the image with new pixel values, typically blended in some way with the old values. It’s just that the eraser is for painting in the alpha channel, while the paintbrush also affects colour channels.
An alpha mask (on a layer with no alpha channel) is essentially a different way of viewing & editing the same data, and there you probably have no logical trouble with using a paintbrush tool.
When you erase, it has a soft edge, right? And in the edge, you see the colour that was there before is still there, just more transparent. Well similarly, in the area that is fully erased, it is fully transparent.
This is also relevant for CSS: some browsers (I think Safari) treat "transparent" as "transparent black" for gradients, so "linear-gradient(transparent, white)" will result in unexpected grayish parts in the gradient. As a workaround, one needs to use "linear-gradient(rgba(255, 255, 255, 0), white)" instead.
Note that SVG 1.1 doesn't have an option for color interpolation to work in premultiplied/associated alpha. SVG 2 is not finalized though, I added an issue some time ago.
Used to experience this all the time when making maps with custom textures for older games... lots of people sure didn't, though. Especially with source ports that would apply filtering to games that didn't have any in the first place: you'd see blue or purple outlines because their original formats were obviously paletted.