And here's an FAQ from Unicode itself:
Old-timers (particularly outside the US) may remember the ISO 8859 debacle, where there were various encodings for primarily European languages using the same codepoints, causing tons of confusion:
It isn't a real encoding, it's a nasty hack and that's what makes the transition difficult.
> The Unicode Standard does not define glyph images. The standard defines how characters are interpreted, not how glyphs are rendered. The software or hardware-rendering engine of a computer is responsible for the appearance of the characters on the screen. The Unicode Standard does not specify the size, shape, nor style of on-screen characters.
Also, most of the technical content of the article is gibberish. Exhibit A: "It made use of the visual typing and encoding method as one would write it on paper, rather than using logical linguistics and computer encoding conventions of Unicode."
I wonder why this news outlet tried to report this specific piece of news at all, instead of leaving it to the specialist press.
On websites, however, this text becomes a mess, because those sites use regular fonts with the correct glyph/code-point mapping. We were just abusing the fonts to get the characters we needed.
When Sinhala characters were added to Unicode, we couldn't immediately convert existing text: you first had to check whether a document was using one of these botched fonts, and then do some serious replacing. That is difficult because, even in Unicode, we have diacritics, and certain glyphs need more than one code point to represent them.
One glorious regex replace, in theory, could perform a similar migration for Burmese as well; you just have to write it.
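A minimal sketch of what such a regex migration could look like. The mapping table here is illustrative only (real converters, such as google/myanmar-tools, carry hundreds of rules and also reorder marks, not just substitute them); the two entries shown are placeholders for the general shape of "Zawgyi glyph code point → Unicode stacked sequence":

```python
import re

# Illustrative-only mapping; NOT a real conversion table.
# Real Zawgyi->Unicode tables are large and order-sensitive.
ZAWGYI_TO_UNICODE = {
    "\u1060": "\u1039\u1000",  # hypothetical: one Zawgyi glyph -> stacked pair
    "\u1061": "\u1039\u1001",
}

# Longest patterns first, so multi-code-point sequences win over prefixes.
_pattern = re.compile(
    "|".join(sorted(map(re.escape, ZAWGYI_TO_UNICODE), key=len, reverse=True))
)

def zawgyi_to_unicode(text: str) -> str:
    """One regex pass; a real converter would also reorder combining marks."""
    return _pattern.sub(lambda m: ZAWGYI_TO_UNICODE[m.group(0)], text)
```

The hard part in practice isn't the substitution itself but deciding, per document, whether the input is Zawgyi or already Unicode.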
That's just what I got from the article and I have no other knowledge of the Burmese script so I may be wrong.
Short version: in Burmese, the form a character takes depends on context. Zawgyi ‘solves’ that by having separate code points for the different forms, requiring the user to pick the right variant. The Unicode way is to make the (font + font renderer) pair smarter, just as a renderer displays the two code points “e” + combining acute as a single “é”.
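The “é” case is easy to demonstrate with Unicode normalization: the composed single code point and the decomposed two-code-point sequence are interchangeable at the text level, and the renderer draws them identically:

```python
import unicodedata

composed = "\u00e9"    # é as one code point (NFC form)
decomposed = "e\u0301" # "e" + COMBINING ACUTE ACCENT (NFD form)

# Normalization converts between the two representations.
assert unicodedata.normalize("NFC", decomposed) == composed
assert unicodedata.normalize("NFD", composed) == decomposed
```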
Zawgyi also, necessarily, uses Unicode code points assigned for other characters to encode the variants.
ς and σ. I won't shout out their names. Is this the case in modern Greek too?
That doesn’t explain why Unicode seems to have 27 (!) different “sigma” code points, though (https://en.wikipedia.org/wiki/Sigma#Character_encoding)
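For the plain σ/ς pair at least, the choice is made by the case-mapping algorithm, not the user: Unicode's lowercasing rule (the Final_Sigma condition, which Python's `str.lower()` implements) picks ς at the end of a word and σ elsewhere:

```python
word = "\u03a3\u039f\u03a6\u039f\u03a3"  # ΣΟΦΟΣ, all capital sigmas identical
lowered = word.lower()

# Word-final sigma becomes U+03C2 (ς), the internal one U+03C3 (σ).
assert lowered == "\u03c3\u03bf\u03c6\u03bf\u03c2"  # σοφος
```

The other “sigma” code points on that Wikipedia list are mostly mathematical and phonetic variants, which exist as distinct code points precisely because they are semantically distinct symbols, not glyph variants.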
Imagiŋe if all the "n"s became "ŋ" if you accideŋtally used the Helvetica British foŋt instead of Helvetica Americaŋ oŋ your website.
Even if they did, the Ideographic Variation Database doesn't exactly make it easy to use variation selectors for that purpose, because you only get an example demonstrating what the glyphs should look like. To find out which glyph (and hence which variation selector) to use for a given language, you'd need an additional database.
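Mechanically, a variation selector is just an extra code point appended after the base character; which glyph a given selector picks for a given base is whatever the IVD has registered for that sequence. A small sketch (U+845B 葛 is a commonly cited ideograph with registered variants; treat the choice of character as an assumption):

```python
import unicodedata

base = "\u845b"                 # 葛, CJK ideograph with IVD-registered variants
sequence = base + "\U000e0100"  # append U+E0100, VARIATION SELECTOR-17

# The sequence is still two code points; a renderer with IVD support
# shows one glyph, and renderers without it just ignore the selector.
assert len(sequence) == 2
assert unicodedata.name("\U000e0100") == "VARIATION SELECTOR-17"
```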