I mocked up a fiddle to demonstrate this point. https://jsfiddle.net/dahart/m4z61tz1/
*edit: a little explanation: the fiddle shows, in the top half, blending using the most commonly taught method for non-premultiplied colors/textures, gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA).
In the bottom half, blending is done with pre-multiplied colors, and gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA).
The main point is how, using WebGL's default settings, you can see through what should be opaque areas in the top half, while the bottom half looks and behaves correctly.
The fiddle has enough toggles and comments in the code and fragment shader to get the non-premultiplied blending & compositing to work correctly too.
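For anyone who doesn't want to dig through the fiddle, here's a plain-JS sketch of what those two blend equations compute per pixel (this is a simulation of the GPU blend stage, not actual WebGL code; values normalized to [0, 1]):

```javascript
// gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA) — straight alpha.
// Note the same factors apply to the alpha channel unless you use
// blendFuncSeparate, so output alpha gets scaled by src alpha too.
function blendStraight(src, dst) {
  const a = src[3];
  return src.map((c, i) => c * a + dst[i] * (1 - a));
}

// gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA) — premultiplied alpha
function blendPremultiplied(src, dst) {
  const a = src[3];
  return src.map((c, i) => c + dst[i] * (1 - a));
}

const empty = [0, 0, 0, 0]; // transparent black canvas

// 50%-alpha red, straight: the blend squares the alpha (0.5 * 0.5),
// so the page compositor later sees alpha 0.25 — one way "opaque"
// stacked results end up see-through.
console.log(blendStraight([1, 0, 0, 0.5], empty)[3]);      // 0.25

// Same color as a premultiplied source keeps the correct alpha:
console.log(blendPremultiplied([0.5, 0, 0, 0.5], empty)[3]); // 0.5
```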
You also lose precision in general, which can lead to a grey-brown soup at semi-transparent fringes. He mentions 16-bit colors further down, but they're not always available, e.g. in WebGL.
Both pack floating point RGB into 32 bits per pixel. Combine that with a separate alpha channel and you're only at 40.
Don't ask me how spectacular the support is.
With transparent black pixels, premultiplying by alpha does nothing.
It's changing the shaders to interpret a value like (128, 0, 0, 128) differently that fixes the rendering.
Any non-zero (non-black) hidden color avoids this coincidence, and then you have to change both the image and the shader to get the correct result. Just thought I'd post in case anyone else had the same confusion.
So in the case of this specific color, invisible black, you only need to change the shader to fix the problem. This, plus the author constantly talking about 'premultiplying' as the fix, made it harder for me to understand that the fix is equal parts the texture change and the shader change.
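To make the "interpret it differently" point concrete, here's a plain-JS sketch (not shader code) of the two readings of that texel, normalized to (0.5, 0, 0, 0.5):

```javascript
// Suppose this texel came from bilinear filtering between opaque red
// (1, 0, 0, 1) and transparent black (0, 0, 0, 0) — i.e. it is already
// effectively premultiplied.
const texel = [0.5, 0, 0, 0.5];

// Straight-alpha reading, under gl.blendFunc(gl.SRC_ALPHA, ...):
// contribution = rgb * a — the red gets multiplied by alpha twice,
// which is the dark fringe.
const straightContribution = texel.slice(0, 3).map(c => c * texel[3]);
console.log(straightContribution); // [0.25, 0, 0] — too dark

// Premultiplied reading, under gl.blendFunc(gl.ONE, ...):
// contribution = rgb as-is.
const premultContribution = texel.slice(0, 3);
console.log(premultContribution); // [0.5, 0, 0] — correct

// The article's shader-only fix is equivalent: divide the filtered
// texel by alpha ("un-premultiply") so SRC_ALPHA blending lands in the
// same place.
const fixedContribution = texel
  .slice(0, 3)
  .map(c => (c / texel[3]) * texel[3]);
console.log(fixedContribution); // [0.5, 0, 0]
```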
The method the article details is the correct implementation for a variety of reasons that it goes into. Yes, there are different ways you can handle 100% transparent pixels on the edges, but only premultiplied alpha handles all edge cases that we're concerned about.
It's not that the article says anything wrong, it's that it uses an example texture where there are no translucent pixels and no hidden colors. In a texture like that, premultiplying by alpha does nothing. In a texture like that, you only need to add a division by alpha to your shader.
The combination of using premultiplied textures and altered shaders solves the general case.
Using only altered shaders solves only an edge case.
But the article only shows that edge case.
That made it harder to follow.
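For the texture half of the fix, one common place to do it is on the CPU before upload — a minimal sketch (WebGL also has a built-in alternative, gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, true), applied at texImage2D time):

```javascript
// Premultiply a straight-alpha RGBA buffer before uploading it with
// gl.texImage2D. Hidden colors behind alpha = 0 become black, and
// translucent pixels get their rgb scaled down by alpha.
function premultiply(pixels /* Uint8ClampedArray, RGBA order */) {
  const out = new Uint8ClampedArray(pixels.length);
  for (let i = 0; i < pixels.length; i += 4) {
    const a = pixels[i + 3] / 255;
    out[i]     = Math.round(pixels[i]     * a);
    out[i + 1] = Math.round(pixels[i + 1] * a);
    out[i + 2] = Math.round(pixels[i + 2] * a);
    out[i + 3] = pixels[i + 3]; // alpha is left alone
  }
  return out;
}

// Pure red at ~50% alpha becomes half-bright red at the same alpha:
console.log(premultiply(new Uint8ClampedArray([255, 0, 0, 128])));
// → Uint8ClampedArray [128, 0, 0, 128]
```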
It is true, and interesting, that whether you use unassociated colors with their blending functions or premultiplied colors with theirs, what comes out either way is a premultiplied color!
This is true of any software compositor or renderer too, right? It's a fact of rendering, not just of GPUs, that output colors are premultiplied. The over operator multiplies colors by the source alpha values while you blend - that is where "premultiplication" happens when you render or composite something. Some software renderers have an "un-premultiply" flag to do a divide right at the end, so you can spit out files with unassociated colors in them, but otherwise anything rendered comes out premultiplied.
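The point that "premultiplication" happens inside the over operator itself can be sketched in a few lines of plain JS (a sketch of the classic Porter-Duff math, not any particular renderer's code):

```javascript
// The over operator on unassociated (straight-alpha) inputs. The source
// rgb gets multiplied by the source alpha *during* blending, so the
// output rgb is premultiplied even though neither input was.
function over(src, dst) { // both straight-alpha [r, g, b, a]
  const [sr, sg, sb, sa] = src;
  const [dr, dg, db, da] = dst;
  return [
    sr * sa + dr * da * (1 - sa), // premultiplied rgb
    sg * sa + dg * da * (1 - sa),
    sb * sa + db * da * (1 - sa),
    sa + da * (1 - sa),           // coverage / alpha
  ];
}

// The "un-premultiply" flag some renderers offer is just a divide at
// the very end, so the file on disk holds unassociated colors:
function unpremultiply([r, g, b, a]) {
  return a === 0 ? [0, 0, 0, 0] : [r / a, g / a, b / a, a];
}

// 50% white over 50% black:
const out = over([1, 1, 1, 0.5], [0, 0, 0, 0.5]);
console.log(out);                // [0.5, 0.5, 0.5, 0.75] — rgb carries the 0.75
console.log(unpremultiply(out)); // ≈ [0.667, 0.667, 0.667, 0.75]
```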
I believe this is exactly why WebGL defaults your canvas to premultiplied compositing over the rest of the page. Your output is premultiplied even when your input isn't. It is also a good reason to think in premultiplied colors and use the blending functions that assume premultiplied colors, and to watch for, and convert/premultiply on the fly, any unassociated color sources you need to accept, like hand-painted textures.
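That default lives in the context creation attributes; for reference, a minimal fragment (assumes an existing canvas element — this is configuration, not a complete program):

```javascript
// WebGLContextAttributes control how the page compositor reads the
// drawing buffer; premultipliedAlpha defaults to true, shown here
// explicitly.
const gl = canvas.getContext('webgl', {
  alpha: true,
  premultipliedAlpha: true, // page reads your output as premultiplied
});
```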