There is an excellent discussion of the topic. I find it utterly definitive in the way it relentlessly shows that you can't completely "fix" this issue any other way than by having the kernel disallow the bad characters outright.
IMO the best approach would be to separate the file name from the file object. When I edit a file with vim, does vim really need to know the name of the file? No. The same goes for a lot of other utilities. If, instead of being so focused on file names and paths everywhere, we operated mainly on inodes, I think much would be won. In some instances the file name is of interest to the program itself (for example when you attach a file to an e-mail, upload it with a web browser, or tar a directory), but even in those cases I think the file name should be kept more separate, and most programs that want it should just treat it as a collection of bytes with close to no meaning.
In other words, I would translate paths and file names into inodes in only a select few places, and keep the two separate everywhere else.
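The "resolve the name once, then forget it" idea can be sketched with plain POSIX-style calls. This is just an illustrative sketch in Python: the path is used in exactly one place (the `os.open`), and every later operation goes through the descriptor, i.e. effectively through the inode, never the name.

```python
import os
import tempfile

# Set up a throwaway file to demonstrate with.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "example.txt")
with open(path, "w") as f:
    f.write("hello")

# The ONE place where the name is translated into a file object.
fd = os.open(path, os.O_RDONLY)

# From here on, only the descriptor (inode) matters; the name could be
# renamed or even unlinked and this code would be unaffected.
size = os.fstat(fd).st_size       # metadata via the inode, not the path
data = os.read(fd, size)          # contents via the descriptor
os.close(fd)

print(data)  # b'hello'
```

Unix already mostly works this way underneath (descriptors survive renames and unlinks); the complaint above is that userland tooling keeps dragging the path string around anyway.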
This is what I am going to do in my Unix-derived operating system. I will get around to implementing said system probably never but you know, one can dream.
For sufficiently narrow definitions of "bad", sure.
It is probably a bad idea to allow mixed-script filenames, since that enables homograph attacks, and there are other non-control characters, like the zero-width space and its brethren, that should be disallowed across the board.
In English you probably also want to disallow ligature characters like ﬁ and ﬄ.
There are other "good idea" limitations that may affect the internationalization of various languages (not by making it difficult, just by constraining it, as in the English ligature example above).
For example, it is probably a good idea to disallow Hebrew diacritic symbols like niqqud in filenames.
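The restrictions proposed above can be approximated with Unicode character properties. This is a rough sketch, not a vetted policy (the function name and the heuristics are mine): zero-width characters are matched against a small hand-picked set, combining marks such as niqqud fall under Unicode category `Mn`, and presentation-form ligatures like ﬁ can be caught by their compatibility decompositions.

```python
import unicodedata

# A few common zero-width / invisible characters (not exhaustive).
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\ufeff"}

def suspicious(name: str) -> bool:
    """Heuristic check for the character classes discussed above."""
    for ch in name:
        if ch in ZERO_WIDTH:
            return True
        # Nonspacing combining marks: Hebrew niqqud, among many others.
        if unicodedata.category(ch) == "Mn":
            return True
        # Compatibility decompositions cover ligatures like U+FB01 'fi'.
        if unicodedata.decomposition(ch).startswith("<compat>"):
            return True
    return False

print(suspicious("file"))        # False
print(suspicious("\ufb01le"))    # True: starts with the 'fi' ligature
print(suspicious("a\u200bb"))    # True: zero-width space in the middle
```

Note that the `<compat>` test is broader than just ligatures (it also flags things like superscript digits), which may or may not be what you want.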
As someone who would be affected by this directly, I can tell you right away this rule would be a no-go. I plainly need the ability to mix Latin and Cyrillic characters in my filenames. A filesystem or OS that wouldn't let me do so wouldn't even be considered.
A very simple rule of thumb is, if it is a title of a book (or a song, or a film etc), it should also be a valid filename.
Same thing with the letter 'т': its cursive (italic) form, in many (but not all) fonts, looks the same as a cursive Latin 'm'.
But I find it an unlikely attack vector to begin with. The main concern with homographs is in URLs and other external resources.
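For anyone who hasn't run into the homograph issue being discussed: two names can render identically while being entirely different byte sequences. A two-line illustration (Python here purely for demonstration):

```python
# Visually identical in most fonts, but different codepoints underneath:
latin = "bank"         # all Latin letters
mixed = "b\u0430nk"    # U+0430 CYRILLIC SMALL LETTER A instead of Latin 'a'

print(latin == mixed)           # False
print(latin.encode("utf-8"))    # b'bank'
print(mixed.encode("utf-8"))    # b'b\xd0\xb0nk'
```

A filesystem that compares names byte-for-byte will happily hold both files side by side, which is exactly what makes the spoofing possible.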
Still though, even if we only block some control characters, doing so could lead to problems with future character encodings.
Personally I hope UTF-8 / UTF-16 / UTF-32 are the final set of character encodings, but we can't know that they will be.