The parent of my post is correct; in practice we're still going to need the occasional ability to shim in browser-specific fixes. Even if the browsers do their best, they're going to inadvertently lie in the future: Firefox 92 will claim to support WebVR1.0, but, whoops, it actually crashes the entire browser if you try to do anything serious with it. Or, whoops, Firefox 92 does a pretty decent job of WebVR1.0, but I need some attribute they overlooked. Or any number of similar little fixups. We know from real-world field experience that we're talking about crashing bugs here at times; this is a real thing that has happened. Whatever proposal gets implemented should deal with this case too.
If we standardized on a format like I suggested at the end of my post, it would go a long way toward preventing future browsers from mucking up the field. If you just get "$BROWSER $VERSION $OS" in a rigid specification, and if the major browsers are sure to conform to that and the major frameworks enforce it, that will be enough to keep it from becoming a problem in the future. It won't stop Joe Bob's Bait Shack & Cloud Services from giving their clients a custom browser and/or server that abuses it, but there's no stopping them from doing things like that no matter what you do, so shrug.
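To make the "rigid specification" point concrete, here's a minimal sketch of what enforcement could look like on the consuming side. The exact grammar is my assumption (three space-separated fields, version as dotted digits); the point is that a strict parser rejects anything nonconforming instead of heuristically guessing, the way User-Agent sniffers do today.

```typescript
// Hypothetical strict parser for a "$BROWSER $VERSION $OS" header.
// Assumed grammar: BROWSER and OS are single tokens, VERSION is dotted digits.

interface ParsedUA {
  browser: string;
  version: string;
  os: string;
}

const STRICT_UA = /^(\S+) (\d+(?:\.\d+)*) (\S+)$/;

function parseUserAgent(header: string): ParsedUA | null {
  const m = STRICT_UA.exec(header);
  if (m === null) {
    return null; // nonconforming input is rejected outright, never guessed at
  }
  return { browser: m[1], version: m[2], os: m[3] };
}
```

So `parseUserAgent("Firefox 92.0 Linux")` yields the three fields, while today's sprawling `Mozilla/5.0 (...)` strings simply fail to parse; a framework that enforces this keeps vendors honest by making malformed values visibly broken rather than silently tolerated.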
Culturally, you should prefer feature detection. Most developers would never need anything else. But when Amazon builds its new whizbang WebVR1.0 front-end in 2024, they may need the ability to blacklist a particular browser. Lacking that ability may actually prevent them from being able to ship, if shipping means some non-trivial fraction of the browsers claiming "web-vr/1.0.0" will in fact crash, and there's nothing they can do about it.
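The "feature detection first, blacklist as a last resort" ordering can be sketched like so. Everything here is hypothetical (the `KNOWN_BAD` table, the use of a WebXR-ish `navigator.xr.requestSession` as the capability being probed); the structure is the point: probe the actual API surface first, and only then consult a narrow blocklist of versions known to crash despite advertising support.

```typescript
// Hypothetical sketch: feature-detect first, then apply a narrow version
// blocklist for browsers that claim support but are known to crash.

type UA = { browser: string; version: string };

// Assumed data: major versions known (from bug reports) to crash anyway.
const KNOWN_BAD: Record<string, string[]> = {
  Firefox: ["92"],
};

function supportsWebVR(globalObj: any, ua: UA): boolean {
  // Step 1: feature detection. Does the API surface actually exist?
  if (typeof globalObj?.navigator?.xr?.requestSession !== "function") {
    return false;
  }
  // Step 2: only now consult the blocklist, for versions that lie.
  const major = ua.version.split(".")[0];
  return !(KNOWN_BAD[ua.browser] ?? []).includes(major);
}
```

Note that the blocklist is the exception path, not the primary mechanism: when the hypothetical Firefox 93 fixes the crash, the feature check alone decides, and no code changes are needed.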
Besides... they will find a way to blacklist the browser. Honestly, "prevent anyone from ever knowing what version of the browser is accessing your site" is not something you can accomplish. If you don't give them some type of user agent in the header, the Amazon engineers are not just going to throw their hands up and fail to ship. They will do something even more inadvisable than user agent sniffing, because you "cleverly" backed them into a corner. If necessary, they will examine the order of headers, details of the TLS negotiation, all sorts of things. See "server fingerprinting" in the security literature. You can't really stop it. Might as well just give it to them as a header. But this time, a clean, specified, strict one based on decades of experience, instead of the bashed-together mess that is User-Agent.
Or, to put it briefly: the fact that the bashed-together User-Agent header has been a disaster is not sufficient proof that the entire idea of sending a user agent is fundamentally flawed. From the current facts you can't tell whether the problem is that user agent sniffing is always, 100%, totally black-and-white, no-shades-of-grey mega-bad, or whether it's the bashed-together nature of the field that is the problem.