To be fair, digitally, IJ is simply spelled using the separate letters I and J in Dutch, despite it being a digraph. That codepoint you used is deprecated in Unicode and only there for historic compatibility reasons. Your family members using 'ij' are simply applying the correct orthography, with those using 'y' just digging themselves into a hole.
I don't know what the correct codepoint is, but the digraph certainly isn't deprecated in the language, and that's the bit that counts. ASCII had no room for it, and that's what caused this mess. 'ij' takes the same spot in the Dutch alphabet as the 'Greek' letter 'y', and that's what stops this from being resolved to everybody's satisfaction. As for "those using 'y'": many of them had no choice in the matter, because the official who made the change did so without their consent in a couple of cases, and once it is on your birth certificate, good luck trying to change it retroactively across all of your documentation.
You are confusing 'letter' with 'codepoint' and 'glyph'. The IJ is one letter, which consists of two glyphs and, in the Unicode implementation, two codepoints. This isn't some historic oversight in Unicode; it was implemented like this based on Dutch orthography.
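To make the codepoint side concrete, here's a quick sketch using Python's standard `unicodedata` module: U+0132 carries a compatibility decomposition to the two ordinary codepoints I and J, which is exactly what NFKC normalization produces.

```python
import unicodedata

lig = "\u0132"   # LATIN CAPITAL LIGATURE IJ: the single-codepoint form
pair = "IJ"      # the ordinary spelling: two codepoints, U+0049 + U+004A

# The ligature is a compatibility character: its decomposition is tagged <compat>
print(unicodedata.name(lig))           # LATIN CAPITAL LIGATURE IJ
print(unicodedata.decomposition(lig))  # <compat> 0049 004A

# NFKC normalization folds it into the two-codepoint spelling
print(unicodedata.normalize("NFKC", lig) == pair)  # True
```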
Yes, this can lead to bugs in software, but so can anything related to names.
What is deprecated is using a single codepoint for the IJ in Dutch text. That was never really an option in any of the character encodings in popular use anyway.
The fact that it is one letter is relevant in cases like (vertical) lettering (which most designers nowadays fuck up), in typography (the number of fonts which make ij look awkward and unaligned is huge), and in collation and sorting using a Dutch locale. I will defend its proper use and treatment where possible, but representing it as a single codepoint is not a sensible goal, and never was.
I do not believe this to be correct. Wikipedia says it's a digraph of two letters. It does say that the codepoint is deprecated, but in the actual Unicode data it is only defined with a "compat" decomposition, not marked as deprecated.
It is (sometimes) one letter culturally speaking and in lettering. The Dutch alphabet as taught to children used to end in 'X IJ Z' instead of 'X Y Z', although this is no longer the case ever since people started eating 'yoghurt' in the twentieth century. In the capitalisation of words, too, it is treated as a single 'letter' (e.g. 'IJsselmeer' for the lake; note the uppercase 'J').
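That capitalisation rule is also where software routinely trips up: a generic per-character titlecase routine only uppercases the first codepoint and cannot know that Dutch treats 'ij' as one unit. A small Python illustration (the single-codepoint ligature, by contrast, does case-map as a unit):

```python
# Generic titlecasing only uppercases the first codepoint, which is wrong Dutch:
print("ijsselmeer".title())  # Ijsselmeer -- should be IJsselmeer

# The single-codepoint ligature U+0133 uppercases as one unit to U+0132:
print("\u0133".upper() == "\u0132")  # True
```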
If you dig deeper on the Unicode website you'll find that the reason those codepoints are included is compatibility with 'certain very rare legacy (non-Unicode) character encodings'. They are not 'deprecated' as compatibility characters for those old legacy encodings, but 'deprecated' as suitable for rendering Dutch text unencumbered by those early code pages.
Words have meanings. "Deprecated" has a very specific meaning in Unicode, and it does not apply to this code point. A lot of characters we use on a daily basis are marked <compat>.
This is not a codepoint in daily use. It never was, outside of those few legacy encodings. If it is not formally deprecated, that is only because it was never in common use in the first place. The concept of the ij/IJ as a single codepoint is deprecated, regardless of the technical classification in Unicode.
If you are claiming that 0x0132 is a codepoint in common use or required for correctly spelled Dutch, you are mistaken.
> If you are claiming that 0x0132 is a codepoint in common use or required for correctly spelled Dutch, you are mistaken.
I made no comment in support or opposition of that claim. I cannot speak to it, as I'm not familiar with either Dutch or that letter. However, there are many compat characters in Unicode and incredibly few deprecated ones, so I was addressing the deprecation claim (and what I believe is a misuse of the term 'letter'). You're reading things into my replies that just aren't there.
Compat characters are very useful, and even if they are not what gets stored, they often show up during in-memory text processing for better glyph selection.
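For example (a small sketch; NFKC is the usual normalization step such pipelines apply): everyday compatibility characters like the 'fi' ligature or superscript digits get folded back to their plain forms for searching and matching, and U+0132 behaves exactly the same way.

```python
import unicodedata

# Some everyday compatibility characters and their NFKC foldings:
for ch in ["\ufb01", "\u00b2", "\u0132"]:  # 'fi' ligature, superscript two, IJ ligature
    folded = unicodedata.normalize("NFKC", ch)
    print(f"U+{ord(ch):04X} {ch!r} -> {folded!r}")
```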