
Beware of Transparent Pixels - tsemple
http://www.adriancourreges.com/blog/2017/05/09/beware-of-transparent-pixels/
======
dahart
Really nice article! Succinctly demonstrates the problem with not using
premultiplied alpha.

> As an Artist: Make it Bleed!

> If you’re in charge of producing the asset, be defensive and don’t trust the
> programmers or the engine down the line.

If you are an artist working with programmers that can fix the engine, your
absolute first choice should be to ask them to fix the blending so they
convert your non-premultiplied images into premultiplied images before
rendering them!

Do not start bleeding your mattes manually if you have any say in the matter
at all, that doesn't solve the whole problem, and it sets you up for future
pain. The only right answer is for the programmers to use premultiplied
images. What if someone decides to blur your bled transparent image? It will
break. (And there are multiple valid reasons this might happen without your
input.)

Even if you have no control over the engine, file a bug report. But in that
case, go ahead and bleed your transparent images manually & do whatever you
have to, to get your work done.

Eric Haines wrote a more technical piece on this problem that elaborates on
the other issues besides halo-ing:

[http://www.realtimerendering.com/blog/gpus-prefer-premultipl...](http://www.realtimerendering.com/blog/gpus-prefer-premultiplication/)

~~~
mark-r
I'm not sure I understand your concern. If the software converts all your
assets into a premultiplied form, the bleeding you applied won't hurt anything
even if it doesn't help. Yes, it's extra work that shouldn't be necessary -
but we often find ourselves living in an imperfect world.

I completely agree that premultiplied alpha should be used everywhere. I'd
even go a step farther and say that you should use high bit depth linear
values too, but that's a topic for another day.

~~~
ioquatix
A 16bpp linear pre-multiplied format would be awesome. sRGB is a pain to use
in practice:

\- Slow to convert to linear colour space (requires a pow)... except:

\- GPUs use an 8-bit lookup table to convert an input sRGB value to an
output linear value.

\- This doesn't work for more than 8 bits, as the table gets excessively
large very quickly.

It's a pity PNG doesn't have a flag to mark the image data as being
pre-multiplied.

~~~
foolrush
That little ditty is also very likely why PNG has zero uptake in visual
effects and CGI.

Associated (aka premultiplied) alpha is the _sole_ means to embody both
occlusion and emission. Unassociated (aka straight or key) alpha cannot
represent these facets.

Consider a candle flame that exists as mostly emission and low to no
occlusion. With associated alpha, you can use zero alpha triplets with non-
zero emission RGB to represent this real-world scenario. With unassociated
alpha? Impossible.

~~~
dahart
> That little ditty is also very likely why PNG has zero uptake in visual
> effects and CGI.

The main reason why CG studios can't use PNG is because both renderers and
compositors always output pre-mult images. Yes you can choose to un-pre-mult
them after rendering, but the native result of a blending operation is always
a pre-mult color, regardless of the input sources. Unpremultiplied blending
still results in premultiplied colors.

Since everyone knows that un-premultiplying (dividing) is to be avoided at all
costs, it means you can't render or comp something and then save the file in
PNG.
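
To make that concrete, here's a minimal sketch (mine, not any particular renderer's code) of the Porter-Duff "over" operator. Even when the inputs are straight (unassociated) alpha, the blended output's RGB comes out already scaled by its alpha, i.e. premultiplied:

```python
def over_premult(fg, bg):
    # Porter-Duff "over" for premultiplied RGBA tuples (floats in 0..1):
    # out = fg + bg * (1 - fg.alpha), applied to all four channels.
    return tuple(f + b * (1 - fg[3]) for f, b in zip(fg, bg))

def over_straight(fg, bg):
    # "Over" with straight-alpha inputs: premultiply first, then blend.
    premult = lambda c: (c[0] * c[3], c[1] * c[3], c[2] * c[3], c[3])
    return over_premult(premult(fg), premult(bg))

# 50% white over 50% red. The result (0.75, 0.5, 0.5, 0.75) has its RGB
# scaled by its alpha -- it is a premultiplied color even though both
# inputs were straight alpha.
out = over_straight((1, 1, 1, 0.5), (1, 0, 0, 0.5))
```

Getting a straight-alpha color back out of that result requires dividing by 0.75, which is exactly the un-premultiply step everyone avoids.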

> Associated (aka premultiplied) alpha is the _sole_ means to embody both
> occlusion and emission.

I would suggest avoiding thinking of image colors as emissive. That's a
material property, and using RGBA to encode material properties is only
something you'd do if you were stuck in a weird fixed-function pipeline with
no choice, or if you were really really low on disk space. Otherwise, emission
colors go in their own separate emission channel that doesn't have an alpha
value.

~~~
foolrush
It is emissive.

Feel free to reference the original Porter Duff paper regarding "luminescent
pixels" or Alvy Smith's opinion on the matter as relayed by Zap Andersson in
the legendary Adobe thread.

[https://forums.adobe.com/thread/369637](https://forums.adobe.com/thread/369637)

Here is a sample of the candle referenced by Zap:

[https://i.imgur.com/bnXyAl1_d.jpg?maxwidth=640&shape=thumb&f...](https://i.imgur.com/bnXyAl1_d.jpg?maxwidth=640&shape=thumb&fidelity=high)

Remember that a ray tracing engine uses associated alpha as that is the sole
format it can generate. Only associated alpha models emission _and_ occlusion.

Further reading:

[http://lists.openimageio.org/pipermail/oiio-dev-openimageio....](http://lists.openimageio.org/pipermail/oiio-dev-openimageio.org/2011-December/004709.html)

[https://groups.google.com/forum/m/#!topic/ocio-dev/ZehKhUFqh...](https://groups.google.com/forum/m/#!topic/ocio-dev/ZehKhUFqhjc)

~~~
dahart
> It is emissive.

Yes you're right; I wasn't arguing with you about that. I always feel like
calling it additive rather than emissive. But just because you can represent
additive colors in premult images doesn't mean you should, and I'd speculate
wildly that it occurs less often in production than halo problems. Someone who
writes lens flare and rainbow shaders is going to scold me for saying that
though...

In the context of the OP's article, and of artists who paint images with
transparency in them, worrying about emissive colors isn't really an issue.
Artists very rarely paint pre-mult images; they can't work with premult
images, generally speaking. No doubt a few people who know what they're doing
do it, but I can't personally say I've ever seen an artist-painted premult
image with emissive colors, nor do I recall ever seeing a software-rendered
layer with emissive colors either. Is this common now? I've been out of film &
games for a few years now.

I'm already familiar with everything you referenced; and I can vouch that it's
all very good stuff so thanks for sharing, especially the Adobe thread. I hope
others here benefit. It's amusing that an entire industry knows who Chris Cox
is because of this thread, right? :P

------
tantalor
Reminds me of "Is there a reason Hillary Clinton's logo has hidden notches?"
[https://graphicdesign.stackexchange.com/questions/73601/is-t...](https://graphicdesign.stackexchange.com/questions/73601/is-there-a-reason-hillary-clintons-logo-has-hidden-notches)

~~~
zeckalpha
... which reminds me of similar notches from typography at small sizes.
Interesting how physical ink bleed and digital rendering issues lead to
similar workarounds.

~~~
minikites
For the curious:
[https://en.wikipedia.org/wiki/Ink_trap](https://en.wikipedia.org/wiki/Ink_trap)

~~~
zeckalpha
Thanks! I couldn't remember the term.

------
dvt
> Even with an alpha of 0, a pixel still has some RGB color value associated
> with it.

Wish the article was more clear as to _why_ this happens. Let me elucidate:
this happens because, per the PNG standard[0], 0-alpha pixels have their color
technically _undefined_. This means that image editors can use these values
(e.g. XX XX XX 00) for whatever -- generally some way of optimizing, or, more
often than not, just garbage. There are ways to get around this by using an
actual alpha channel in Photoshop[1], or by using certain flags in
imagemagick[2].

[0] [https://www.w3.org/TR/PNG/](https://www.w3.org/TR/PNG/)

[1]
[https://feedback.photoshop.com/photoshop_family/topics/png-t...](https://feedback.photoshop.com/photoshop_family/topics/png-transparent-alpha-0-pixels-undefined-rgb-values)

[2] [http://www.imagemagick.org/discourse-server/viewtopic.php?t=...](http://www.imagemagick.org/discourse-server/viewtopic.php?t=13746)

~~~
munchbunny
While what you wrote is correct, it's not actually the problem being described
in the posted link. The problem in the posted link is just as relevant if the
alpha value was "01" for "you pretty much can't see it but it's meaningfully
there", and has to do with image filtering artifacts as opposed to purely data
representation.

~~~
tgb
Also, the problem can occur with any image format; what matters is how it's
stored in the texture, not on disc.

~~~
mark-r
But it's specifically a problem with PNG because the spec explicitly allows
encoders to substitute their own RGB values for any transparent pixel. Most
other formats are expected to store the values you give them without
modification.

~~~
tgb
No, the problem discussed in the article can occur with any image format (that
supports alpha) and occurs later in the pipeline so it really doesn't matter
how the texture was stored on disc. Or even whether it was stored - this could
happen with procedurally generated textures.

~~~
mark-r
The article discussed one possible workaround, which was to specify RGB values
for the fully transparent pixels in the artwork. If the PNG tool you're using
substitutes its own RGB values in those transparent pixels, as allowed by the
spec, the workaround doesn't work. That's why PNG in particular is a problem.

If you follow the recommendation at the end to use premultiplied alpha for all
computations, this becomes a moot point.

------
fnayr
This is extremely useful to take advantage of (that you can store RGB values
in 0-alpha pixels). I've written some pretty simple but powerful shaders for a
game I'm working on by utilizing transparent pixels' "extra storage" which
allowed for either neat visuals or greatly reduced the number of images
required to achieve a certain effect. For instance, I wrote a shader for a
character's hair that had source images colorized in pure R, G, and B and then
mapped those to a set of three colors defining a "hair color" (e.g. R=dark
brown, G=light brown, B=brown). If I didn't have the transparent pixels
storing nonzero RGB values, the blending between pixels within the image would
be jagged and the approach would have been unacceptable for production
quality, leading to each hair style being exported in each hair color.

As a total side note, I really enjoyed the markup on the website. Seeing the
matrices colored to represent their component color values is really helpful
for understanding.
Nice job author!
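
For readers curious what that remap might look like, here is a hypothetical sketch of the idea in Python; the function name and palette values are mine, not from the commenter's game:

```python
def remap_hair_pixel(px, palette):
    """px is (r, g, b, a) in 0..1; palette maps each source channel to an
    RGB hair tone. Because transparent texels still hold nonzero RGB,
    filtered (blended) edge pixels remap smoothly instead of pulling
    toward black."""
    r, g, b, a = px
    dark, light, mid = palette  # tones selected by the R, G and B channels
    out = tuple(r * d + g * l + b * m for d, l, m in zip(dark, light, mid))
    return (*out, a)

palette = ((0.25, 0.15, 0.08),   # R -> dark brown
           (0.55, 0.40, 0.25),   # G -> light brown
           (0.40, 0.28, 0.16))   # B -> brown

# A half-covered edge texel that kept its pure-R color despite low alpha
# still remaps to the correct dark-brown tone.
edge = remap_hair_pixel((1.0, 0.0, 0.0, 0.5), palette)
```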

~~~
mark-r
Paint Shop Pro had some picture frames that were fully transparent in the
middle. As an Easter egg, a couple of those frames had pictures of the staff
in those transparent areas and you could use the unerase tool to see them.

~~~
wink
Aww, just checked my Paint Shop Pro 4.12 (Build Date: Dec 21 1996) and it
doesn't yet have these picture frames :/

~~~
mark-r
That's because 4.12 didn't have partial transparency.

------
modeless
I don't like this article because it blames the wrong people and buries the
_real_ solution, premultiplied alpha, at the bottom. Already there are many
comments here that are confused because they didn't even see the premultiplied
alpha part of the article.

The issue with the Limbo logo was not that the source image was incorrect. The
image was fine. The _blending_ was incorrect because the PS3 XMB has a bug.
Not using premultiplied alpha when you are doing texture filtering is a bug.

~~~
kllrnohj
I don't think that's a fair summary of the layout of the article. Once the
article reaches the "How to Prevent This Issue" section there's far more space
given to using premultiplied alpha than manually bled images, and it's not
blaming anyone. It just tells an artist how they could fix it and tells
programmers how they could fix it. Nobody is blamed and the "correct" solution
isn't buried at all.

------
VikingCoder
Premultiplied alpha results in less color depth, though. If my alpha is 10%,
then my possible RGB values become 0-25. Even if I multiply back by 10, I
still lose the maximum possible values 251-255, and only the values 0, 10,
20, 30, ... 250 are possible.

The correct solution is to pay close attention to all of the factors... and to
be ESPECIALLY aware of pixel scaling. Provide your RGBA textures at the 1:1
pixel scale they will be rendered (or higher!) if at all possible.

~~~
dahart
> Premultiplied alpha results in less color depth, though.

That doesn't matter unless you color-scale the image (like multiply by 2 to
make it brighter) before displaying it. Otherwise, the depth is at the correct
resolution for display.

And premultiplied alpha should be used for final display, not just for the
halo-ing reasons demonstrated here, but for lots of reasons.

Artists should generally be working in un-premultiplied alpha though, and the
premultiplication is something that should happen right before an artist image
is used. Artists shouldn't work in premultiplied images (and they generally
don't) because of the color depth issue, and because it's crazy to paint
premultiplied transparency manually.

~~~
the8472
> that doesn't matter unless you color-scale the image (like multiply by 2 to
> make it brighter) before displaying it.

It does if you stack several image layers. Let's say I have a particular color
tone. Then I use that as background color. And I also stack 10 layers of the
same color with alpha 0.05 on top of that. If you use premultiplied colors
then this will actually result in a different color.

Due to rounding, those colors often tend to be more greyish too. So if you
draw some vector graphics and have multiple basic shapes with semi-transparent
edges (aliasing!) stacked on top of each other, you can get some ugly fringes.
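
The drift is easy to reproduce. A small experiment (my sketch, not the commenter's code): stack the same low-alpha layer ten times over white on an 8-bit premultiplied canvas, rounding after every blend, and compare with the same stack done in floats:

```python
def over8(fg, bg):
    # "over" on 8-bit premultiplied RGBA, quantizing after every blend
    # (roughly what an 8-bit canvas backend does).
    return tuple(f + round(b * (255 - fg[3]) / 255) for f, b in zip(fg, bg))

def overf(fg, bg):
    # The same operator in floats, no quantization.
    return tuple(f + b * (1 - fg[3]) for f, b in zip(fg, bg))

straight, alpha = (180, 90, 30), 0.05
a8 = round(alpha * 255)                                     # 13
layer8 = tuple(round(c * a8 / 255) for c in straight) + (a8,)
layerf = tuple(c * alpha for c in straight) + (alpha,)

q = (255, 255, 255, 255)          # opaque white background, 8-bit
f = (255.0, 255.0, 255.0, 1.0)    # same background, float reference
for _ in range(10):
    q = over8(layer8, q)
    f = overf(layerf, f)

exact = tuple(round(c) for c in f[:3])
# q[:3] is (223, 191, 167); the float reference rounds to (225, 189, 165).
# Each channel is off by a couple of counts purely from per-step rounding.
```

With 16-bit channels the same drift would be far below anything visible, which matches dahart's point downthread.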

~~~
dahart
Yeah true, I suppose there is rounding error. Is this something that has
actually happened to you, or are you saying it's a problem in theory? I'd be
hard pressed to come up with a real-world use case for 10 of the same color
being comped. For this to be a problem, the visible elements being comped
would have to be _exactly_ the same color, without any gradient or noise at
all...

Even if it did happen, the error is bounded - with 10 layers of the same
color, the maximum error in any channel is 5, and the average error is 2.5.
It's pretty hard to say it would be wildly and noticeably different to most
people even with 8 bit color channels, but I certainly have met some film
directors and CG supervisors who were very color sensitive. It would be
literally invisible in anything higher than 8 bits.

I'm curious -- why do you say the rounded colors would tend toward gray?
Rounding error can happen in both directions, so I would expect rounding
errors to cause a uniformly distributed error -- some colors would get
slightly more saturated, some less, some of them would shift hue, and some
would be unaffected.

You lost me on the edges part & aliasing. Rounding errors will not be visible
as fringes, so if you're seeing fringes, you have some other problem...
perhaps failure to pre-multiply! ;) Tell me more about stacking shapes and
getting fringes. Is this stacking 10 of the same shape in the same place, or
at the edge crossings of different shapes? What kind of fringes are you
seeing?

Anyway, I've never witnessed a case in 20 years of film & game production
where rounding errors caused a visible, detectable problem, but I'd love to
know if there are real cases where it's an issue!

~~~
the8472
> Is this something that has actually happened to you

In a toy project where I tried to automatically generate SVG shapes and
render them to an HTML canvas (which uses 8 bits per channel with
premultiplied alpha). The assembled shapes occasionally overlapped and it did
lead to visible color inconsistencies.

It's also a problem when working with PNGs. When you put your PNG pixels into
a premultiplied space and pull them out again, you actually lose information
to rounding, which negates the benefits of a lossless format.

> You lost me on the edges part & aliasing. Rounding errors will not be
> visible as fringes, so if you're seeing fringes, you have some other
> problem... perhaps failure to pre-multiply! ;)

Well, if you got a solid shape then there's no alpha. But the aliasing at the
edges introduces partially transparent pixels. If you then pull out the pixel
data and apply it to a different canvas you get mismatched colors.

I only spent a few hours on it, so I don't recall all the details, but my
conclusion was that it is inadequate for image manipulation since it is lossy
and lacks precision.

------
jamesbowman
Using premultiplied alpha avoids this. Jim Blinn's books from the 90s give a
very thoughtful treatment of the topic.

~~~
AnimalMuppet
The Porter and Duff compositing paper is good, too.

One clarification, though: With premultiplied colors, something like (1,1,1,0)
is either illegal or a light source. It's not a valid normal color.

~~~
foolrush
Entirely false in that it is indeed a completely valid combination. See
luminescent pixels in the Porter Duff paper. It represents a pixel that has no
occlusion and is emitting.

------
Kenji
You also have a similar problem when you render opaque, rectangular images
without the clamp edge mode, and the renderer is in tiling mode, so the
borders wrap around when your picture is halfway between pixels and become a
mix between the top/bottom or left/right colour, corrupting the edges. Easy to
fix, but annoying until you get what it is that corrupts your edges.

Also: "The original color can still be retrieved easily: dividing by alpha
will reverse the transformation."

C'mon, you can't say that and then make an example with alpha=0. Do you want
me to divide by zero? The ability to store values in completely transparent
pixels is lost.

~~~
mark-r
It would be more accurate to say that the original color can be approximated.
The approximation quality goes down as the transparency goes up, until at zero
it gets lost entirely.
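
A quick numeric check of that (my sketch): quantize one straight-alpha channel into 8-bit premultiplied storage, then divide the alpha back out:

```python
def roundtrip(c, a8):
    # Premultiply 8-bit channel c by alpha a8/255, store the product in
    # 8 bits, then un-premultiply (divide the alpha back out).
    stored = round(c * a8 / 255)
    return round(stored * 255 / a8)

# At 80% alpha the channel survives; at ~5% alpha many input values
# collapse onto the same stored byte, so the division recovers the wrong
# color; at alpha 0 there is nothing left to divide.
high = roundtrip(200, 204)   # alpha 204/255 (80%)  -> 200
low  = roundtrip(200, 13)    # alpha  13/255 (~5%)  -> 196
```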

~~~
Kenji
You're right. I didn't want to mention precision because it's a whole other
can of worms, but one worth knowing how to deal with. I was looking at it from
a mathematician's point of view, using rational numbers.

------
jayshua
While reading this article, it struck me that the amount of "useless" data
increases as the alpha value approaches 0. For example: in a pixel with rgba
values of (1.0, 0.4, 0.5, 0.0), the rgb values are redundant. Is there a color
format that would prevent this redundancy? Perhaps by some clever equation
that incorporates the alpha values into the rgb values? I don't think
premultiplied alpha would work, because you still need to store the alpha
value for compositing later...

~~~
dahart
Would you count compression schemes as valid answers to your question, or are
you asking about the raw data not having any redundancy?

There are no channel formats or clever equations I'm aware of that avoid the
redundancy part. But your question totally reminds me of Greg Ward's RGBE
format, which is a high-dynamic range format stored in 8 bits per channel,
with an extra 8 bit exponent channel.
[http://www.graphics.cornell.edu/~bjw/rgbe.html](http://www.graphics.cornell.edu/~bjw/rgbe.html)

RGBE isn't doing exactly what you're asking about, but it's similar in a way.
Instead of storing 16 bits per channel separately for each channel, what you
really get instead is 16 bits (sort-of) for the brightest channel, and the
other 2 channels in 8 bits each - discarding the extra bits. You can't see
them because the bright channel will prevent you from seeing anything super
dark in another channel, so you can discard the extra color resolution in the
darker channels.
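
A minimal sketch of Ward's scheme (my simplification of the idea behind the reference code at that link; edge cases omitted):

```python
import math

def float_to_rgbe(r, g, b):
    """Encode an HDR color into the shared-exponent RGBE format."""
    v = max(r, g, b)
    if v < 1e-32:
        return (0, 0, 0, 0)
    m, e = math.frexp(v)          # v = m * 2**e, with 0.5 <= m < 1
    scale = m * 256.0 / v         # brightest channel maps near 256
    return (int(r * scale), int(g * scale), int(b * scale), e + 128)

def rgbe_to_float(r8, g8, b8, e8):
    """Decode RGBE back to floats (lossy in the dimmer channels)."""
    if e8 == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e8 - (128 + 8))
    return (r8 * f, g8 * f, b8 * f)

# The dim blue channel (0.01) comes back as 1/128 = 0.0078125: the shared
# exponent discards precision in channels much darker than the maximum,
# which is exactly the "you can't see it anyway" trade-off described above.
decoded = rgbe_to_float(*float_to_rgbe(1.0, 0.25, 0.01))
```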

If you count compression, then pre-multiplying would help the situation.
Anytime the alpha value gets low or goes to 0, the color channels do too, so
run length encoding or DCT or whatever else will collapse large transparent
areas into all zeroes.

------
panic
Premultiplied alpha is also more "correct" in that it separates how much each
pixel covers things behind it (the alpha value) from the amount of light it is
reflecting or emitting (the color values). These two values should really be
interpolated separately, and that's what premultiplied alpha gives you.

------
br1
I'm surprised that John Carmack seems not to use premultiplied alpha and
recommends bleeding:
[https://www.facebook.com/permalink.php?story_fbid=1818885715...](https://www.facebook.com/permalink.php?story_fbid=1818885715012604&id=100006735798590)

------
Kiro
> pay attention to what color you put inside the transparent pixels

I don't understand this. When I make transparency I don't use any color? I use
the Eraser tool or Ctrl-X, not a color with 0 opacity.

~~~
munchbunny
This is actually a very common problem with 3-D stuff and transparency in
textures. This isn't an issue with the colors of the pixels themselves, it's
an issue with texture filtering. nVidia has a pretty good explanation as it
applies to games and 3d graphics: [https://developer.nvidia.com/content/alpha-blending-pre-or-n...](https://developer.nvidia.com/content/alpha-blending-pre-or-not-pre)

Say you have two adjacent pixels using floating point RGBA values of (0,0,0,0)
and (1,1,1,1), and you apply it to a 3-d shape. Because of the rasterization
algorithm, you will be sampling weighted averages of the two pixels, either
because you're scaling up and need to interpolate, or because you're scaling
down and need to average.

The average of (0,0,0,0) (fully transparent) and (1,1,1,1) (opaque white) is
(0.5,0.5,0.5,0.5), a half transparent gray. But you'd intuitively expect
(1,1,1,0.5), half transparent white. This is the essence of the problem. The
fix is to make sure that your transparent pixel was (1,1,1,0) and not
(0,0,0,0).
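
The arithmetic above in a few lines of Python (a sketch of the idea, not any engine's sampler):

```python
def lerp(a, b, t):
    # Linear interpolation of two RGBA tuples -- what a bilinear texture
    # fetch does between adjacent texels.
    return tuple(x + (y - x) * t for x, y in zip(a, b))

transparent = (0.0, 0.0, 0.0, 0.0)   # fully transparent, black RGB
white       = (1.0, 1.0, 1.0, 1.0)   # opaque white

sample = lerp(transparent, white, 0.5)   # (0.5, 0.5, 0.5, 0.5)

# Read as straight alpha, that sample is half-transparent *gray*: the halo.
# Read as premultiplied alpha, the very same numbers are correct: dividing
# the alpha back out gives white at half coverage.
r, g, b, a = sample
recovered = (r / a, g / a, b / a, a)     # (1.0, 1.0, 1.0, 0.5)
```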

~~~
twelvechairs
Surely the answer if you want this is to weight the final RGB by the
transparency. E.g. the final red channel would be (R1xT1 + R2xT2)/(T1+T2)

~~~
munchbunny
You're mostly right. The industry-wide accepted answer is to multiply the
opacity into the colors before interpolating, so the formula would be (R1xT1 +
R2xT2)/2 for the average, and then to do later transparency blending as if the
opacity term was already multiplied in.

~~~
twelvechairs
Which means that your output pixel can't be both white and low transparency.
I guess it's a typical graphics 'close enough and better performance' outcome
(where mine is marginally more difficult to calculate and needs some more
logic to avoid dividing by 0).

~~~
munchbunny
> Which means that your output pixel cant be both white and low transparency.

Did you mean low opacity?

If so, that's not quite right. (0.1,0.1,0.1,0.1) premultiplied is the same
color as (1,1,1,0.1) "normal." They're both white and low opacity, just in
different representations. You don't actually lose much granularity because
the graphics card has to multiply the color channels by the opacity value
sooner or later.

Separately, your formula doesn't work for interpolation. It works for
averaging, but in order to do texture sampling, you need interpolation, so
your formula can't actually be used unless you can adjust it to deal with
interpolation gracefully.

~~~
twelvechairs
Thanks for the answer - I did not know of this premultiplication. This makes
it effectively the same or very close? Assuming the output
transparency/alpha is (T1+T2)/2, dividing by this gives the difference.

I don't quite get your point on interpolation, but I'll look it up when I
have the chance.

------
blauditore
This is also relevant for CSS: some browsers (I think Safari) treat
"transparent" as "transparent black" for gradients, so
"linear-gradient(transparent, white)" will result in unexpected grayish parts
in the gradient. As a workaround, one needs to use
"linear-gradient(rgba(255, 255, 255, 0), white)" instead.

------
leni536
Note that SVG 1.1 doesn't have an option for color interpolation to work in
premultiplied/associated alpha. SVG 2 is not finalized though; I added an
issue some time ago.

[https://github.com/w3c/svgwg/issues/303](https://github.com/w3c/svgwg/issues/303)

It affects gradients, animations and imported+scaled raster images. Maybe
other stuff too, I don't know.

------
eukara
Used to experience this all the time when making maps with custom textures
for older games... Lots of people sure didn't, though. Especially source
ports that would apply filtering to games that didn't have any in the first
place: you'd see blue or purple outlines, because their original formats were
obviously paletted.

------
CurtMonash
I thought this would be about web or email tracking.

------
xchip
TL;DR: use premultiplied alpha for transparency

------
qwerta
There is also performance overhead.

------
ninjakeyboard
s/sawn/swan/

