For sufficiently narrow definitions of "bad", sure.
It is probably a bad idea to allow mixed character-set filenames, as that allows homograph attacks. There are also non-control characters like the zero-width space and its brethren that should be disallowed across the board.
In English you probably also want to disallow ligature characters like ﬁ and ﬄ.
There are other "good idea" limitations that may affect internationalization for various languages (not by making it difficult, just by constraining it, as in the English ligature example above).
For example, it is probably a good idea to disallow Hebrew diacritic symbols like niqqud in filenames.
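The restrictions described above can be sketched in a few lines of Python using the standard `unicodedata` module. This is only an illustration, not a complete policy: it rejects a handful of zero-width characters outright and folds presentation-form ligatures back to plain letters via NFKC normalization. The function name and the exact character list are my own choices.

```python
import unicodedata

# A few of the zero-width characters mentioned above (list is
# illustrative, not exhaustive).
ZERO_WIDTH = {
    "\u200b",  # ZERO WIDTH SPACE
    "\u200c",  # ZERO WIDTH NON-JOINER
    "\u200d",  # ZERO WIDTH JOINER
    "\ufeff",  # ZERO WIDTH NO-BREAK SPACE (BOM)
}

def sanitize_filename(name: str) -> str:
    """Reject zero-width characters; fold ligatures to plain letters."""
    if any(ch in ZERO_WIDTH for ch in name):
        raise ValueError("zero-width character in filename")
    # NFKC maps e.g. the ligature U+FB01 'ﬁ' to the two letters 'fi'.
    return unicodedata.normalize("NFKC", name)

print(sanitize_filename("ﬁle.txt"))  # -> file.txt
```

Note that NFKC also normalizes things like full-width digits and superscripts, so a real filesystem policy would need to decide case by case which foldings it actually wants.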
As someone who would be affected by this directly, I can tell you right away this rule would be a no-go. I plainly need the ability to mix Latin and Cyrillic characters in my filenames. A filesystem or OS that wouldn't let me do so wouldn't even be considered.
A very simple rule of thumb is, if it is a title of a book (or a song, or a film etc), it should also be a valid filename.
Same thing with the letter 'т': its cursive/italic form is a different shape, which in many (but not all) fonts looks the same as a cursive Latin 'm'.
But I find it an unlikely attack vector to begin with. The main concern with homographs is in URLs and other external resources.
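The URL homograph concern can be made concrete with a small sketch, again using Python's standard `unicodedata` module (the helper name is my own). It infers a rough script for each letter from the first word of its Unicode character name, which is enough to flag a mixed Latin/Cyrillic lookalike:

```python
import unicodedata

def scripts_in(text: str) -> set:
    # Infer a rough script from each character's Unicode name,
    # e.g. "CYRILLIC SMALL LETTER A" -> "CYRILLIC".
    return {
        unicodedata.name(ch).split()[0]
        for ch in text
        if ch.isalpha()
    }

# All-Latin "apple.com" vs. a lookalike whose first letter is the
# Cyrillic 'а' (U+0430):
print(sorted(scripts_in("apple.com")))        # -> ['LATIN']
print(sorted(scripts_in("\u0430pple.com")))   # -> ['CYRILLIC', 'LATIN']
```

Real registrars and browsers use more refined rules (whole-script confusable checks rather than a blanket mixed-script ban), which is exactly why the same blanket ban is too blunt for filenames.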
Still, even if we only block certain control characters, doing so could cause problems with future character encodings.
Personally I hope UTF-8 / UTF-16 / UTF-32 is the final set of character encodings, but we can't know that it will be.