nvidia on linux [for desktop] is utterly broken.
I ran nvidia cards for almost 15 years (shame on me): laggy X11 compositing, fragile and broken Wayland, broken suspend/resume.
Too many moving parts (selected drivers, modprobe quirks, suspend/resume scripts).
Moved to AMD: slick X11, reliable Wayland, NO MORE DRIVERS AT ALL, works like a charm.
And yes, I do game on Linux.
Suspend/resume has been broken on nvidia since its release in Aug 2024. I have an internal bug ID for it, and a dozen links to suspend scripts.
By "no more drivers" I mean I don't need to install dozens of packages. While that isn't a big deal by itself, reverting a broken driver is a huge deal.
Laggy desktop -- this was my experience until Nov 2025, when I dumped nvidia. The desktop on both intel and amd feels like magic after nvidia.
Suspend/resume is broken for my friend with an AMD card right now. That's what I mean: it's broken everywhere in slightly different ways, yes even on Windows. Thankfully I never use it anyway.
Dunno anything about dozens of packages, I installed 1 (one) package from my distro and haven't touched it since, no issues with updates either. That same friend with an AMD card keeps getting random hard PC freezes during gaming though.
Also absolutely zero issues with lags/latency for me (on GNOME; I did experience a bunch of weird bugs with KDE, but again, no lags).
One thing that is very real is DirectX 12 performance. This one really does suffer due to poor nvidia drivers. Hope they iron it out at some point
Which is unfortunately not a good thing when it comes to NVIDIA. "Modern" distros package those for you, which is why I install linux-cachyos-nvidia-open [0] now, and previously nvidia-driver-${version} [1] when I was using Pop!_OS, both of which worked without a single issue for me from the word "go". My point is: it's not all doom and gloom, there's life to be had, and it's not that much worse than AMD cards.
Depends heavily on the hardware AND firmware of the system. I remember having some no-name laptop with a P166-MMX and a BIOS from SystemSoft. That thing managed to successfully suspend and resume under anything, be it just to and from RAM, or to disk in a separate small partition. By anything I mean exotics like NetBSD, FreeBSD, OpenBSD, any Linux I threw at it.
This happens because when columns = p (prime), numbers in each column share the same remainder mod p, creating visible diagonal patterns as multiples of p are eliminated from primality.
Not so much that cols is prime as that cols+1 or cols-1 has lots of factors - see for example 25 or 91 or 119. But it does seem like numbers adjacent to primes have a lot of factors.
The more factors an (even) number n has, the more likely it is that n±1 is prime, because those numbers cannot share any of n's factors. At the same time it is impossible for n±2 or n±4 to be prime (they're even like n), and unlikely that n±3 is prime, because 3 is likely to be a factor of n. And if 5 is additionally a factor of n, the primeless gap is even wider. So the primes stand out.
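A quick empirical check of that claim (my own sketch, not from the original comment): among even n, those divisible by 6 should have a prime neighbour at n-1 or n+1 far more often than even n not divisible by 3, since their neighbours can't be divisible by 2 or 3.

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def neighbour_prime_fraction(candidates):
    """Fraction of n in candidates with a prime at n-1 or n+1."""
    hits = sum(1 for n in candidates if is_prime(n - 1) or is_prime(n + 1))
    return hits / len(candidates)

mult_of_6 = [n for n in range(6, 10000, 2) if n % 6 == 0]
other_even = [n for n in range(6, 10000, 2) if n % 6 != 0]

frac_6 = neighbour_prime_fraction(mult_of_6)
frac_other = neighbour_prime_fraction(other_even)
print(f"multiples of 6: {frac_6:.2f}, other even numbers: {frac_other:.2f}")
```

The multiples of 6 come out well ahead, which is the "primeless gap" effect in miniature.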
When cols is seven, there are a lot of diagonals going from top right to bottom left; when cols is five, from top left to bottom right. Are runs of consecutive sexy primes also this frequent for larger numbers, or does that pattern break down at some point?
Almost all of these patterns that you see don't really come from primes. If you display numbers not divisible by first 100 natural numbers you get pretty much the same picture.
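To illustrate the point (my own sketch of the comparison): print 1..70 in rows of 7, marking primes in one grid and numbers merely coprime to 2, 3, 5 and 7 in the other. The diagonal structure is nearly identical, so it comes from the small factors, not from primality itself.

```python
def is_prime(n: int) -> bool:
    # Simple trial division; adequate for this small range.
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def coprime_to_small(n: int) -> bool:
    # "Survives sieving by the first few primes" — no primality needed.
    return all(n % p for p in (2, 3, 5, 7))

cols, rows = 7, 10
for label, pred in (("primes", is_prime),
                    ("coprime to 2,3,5,7", coprime_to_small)):
    print(label)
    for r in range(rows):
        print("".join("#" if pred(r * cols + c + 1) else "."
                      for c in range(cols)))
    print()
```

Below 11² = 121 the two grids can only disagree at 1, 2, 3, 5 and 7, which is why the pictures look the same.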
Because writers don't think about readers. PDF is one of the worst formats for science/technical info, and yet here we are.
I've dumped a lot of papers from arXiv because they were formatted as two-column, non-zoomable PDFs.
Well, the labels of the input fields are written in English, yet the user enters his name in his native language.
What's the reason for having a name at all? So you can call the person by that name. But if I write you my name in my language, what can you (not knowing how to read it) do? Only "hey, still-don't-know-you, here is your info".
In my foreign passport my name is __transliterated__ into the Latin alphabet. Shouldn't this be the case for other places too?
Unfortunately, the extremely weird and idiosyncratic use of Latin script for English means that straightforward transliterations are usually pronounced incorrectly by people who try to pronounce them according to English rules.
And, on the other end of the spectrum, you have attempts to spell phonetically following English orthography, which then causes issues because it's tied to a particular dialect of English. E.g. the traditional Korean romanization of the name "Park" has "r" there solely so that "a" would actually correspond to an [a] sound in British English - but then, of course, American English is rhotic, and so it ends up being incorrectly pronounced as [park].
If you look at the list, it’s primarily (but not completely) about oddities in their UTF-8 encoding. Most of them appear to be on the boundary of adding additional bytes when the case is changed. That’s not really Unicode’s concern.
There are also some that appear to change from single characters to grapheme clusters, which would be a Unicode quirk.
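A few concrete examples of both effects, checked with Python's `str.upper()` (these are standard Unicode full case mappings; the specific characters are my own picks, not necessarily the ones on the list):

```python
# Each case change below alters the codepoint count, the UTF-8 byte
# count, or both:
#   ı (U+0131) -> I     : 2 UTF-8 bytes shrink to 1
#   ß (U+00DF) -> SS    : 1 codepoint expands to 2
#   ﬁ (U+FB01) -> FI    : 1 codepoint -> 2, yet 3 bytes -> 2
#   ⱥ (U+2C65) -> Ⱥ (U+023A): still 1 codepoint, 3 bytes -> 2
for ch in ("ı", "ß", "ﬁ", "ⱥ"):
    up = ch.upper()
    print(f"{ch!r} -> {up!r}: "
          f"{len(ch)} -> {len(up)} codepoints, "
          f"{len(ch.encode('utf-8'))} -> {len(up.encode('utf-8'))} UTF-8 bytes")
```

The ß and ﬁ rows are Unicode-level expansions (more codepoints); the ı and ⱥ rows are pure encoding artifacts (same codepoint count, different byte count), which is exactly the distinction being drawn here.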
In another comment I said that a more accurate title would have been "Unicode codepoints that expand or contract when case is changed in UTF-8", which I think covers it well.
UTF-8 is simply an encoding; "UTF-8 characters" is just not correct use of language. Just like, say, "binary number"; a number has the same value regardless of the base you use to write it, and the base is a scheme for representing it, not a system for defining what a number is. This is a common imprecision in language which I have seen cause serious difficulties in learning concepts properly.
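The same point in code (a small sketch of my own): the codepoints are the "number", and UTF-8 or UTF-16 are merely different "bases" to write it in — different byte sequences, identical text.

```python
s = "café"
codepoints = [ord(c) for c in s]   # the identity of the characters
utf8 = s.encode("utf-8")           # one byte-level representation
utf16 = s.encode("utf-16-be")      # another representation, same text

print(codepoints)   # [99, 97, 102, 233]
print(utf8)         # 5 bytes
print(utf16)        # 8 bytes
# Decoding either byte sequence recovers the exact same string:
assert utf8.decode("utf-8") == utf16.decode("utf-16-be") == s
```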
"unicode codepoint sequences whose codepoint lengths and/or utf8-code-unit-lengths behave oddly when you change their case" would not fit in a HN title, however
I (OP) said above that "Unicode codepoints that expand or contract when case is changed in UTF-8" would have worked fine, I've changed the Gist title to that in any case. I'm curious if it would've affected the attention it received on HN.