Allow me to doubt this. The Intel Arc A770 16GB has a TSMC-manufactured die that's much larger than the RTX 3060 12GB's, yet it sells for way less, also at a (small) profit.
I don't think either of these are intended for the datacenter? So this would be a price comparison between two gaming / general purpose GPUs?
The A770 is going for $350ish, is manufactured on a higher tech node (TSMC N6), and has a die size of 406 mm².
The 3060 is going for $250ish, was manufactured on two lower tech nodes (originally Samsung 8nm, then moved to TSMC 7nm), and has a die size of 392.5 mm².
I'm not sure the description "much larger in size" makes sense here. If anything it seems like pricing follows node-area costs.
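Back-of-envelope, using the figures quoted above (approximate street prices and die sizes; this ignores memory, board, cooler, and margin, so it's a sanity check rather than a cost model):

```python
# Rough price-per-die-area comparison using the numbers quoted above.
# Prices are approximate street prices, not BOM costs.
cards = {
    "Arc A770 16GB (TSMC N6)": {"price_usd": 350, "die_mm2": 406.0},
    "RTX 3060 12GB":           {"price_usd": 250, "die_mm2": 392.5},
}

for name, c in cards.items():
    per_mm2 = c["price_usd"] / c["die_mm2"]
    print(f"{name}: ${per_mm2:.3f} per mm^2 of die")
# The A770 works out to roughly $0.86/mm^2 vs roughly $0.64/mm^2 for the 3060,
# consistent with the newer, pricier node commanding more per unit area.
```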
Nvidia is now selling us less silicon die for way more money, and their sales numbers reflect that.
If we're very lucky, the game will get tested against a wide range of hardware. Sane defaults for GPU feature flags based on detection of hardware capability and the results of that testing will result in a default user experience that doesn't need manual adjustment. If we're lucky.
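A minimal sketch of what "sane defaults from hardware detection" might look like; the type and function names (`GpuInfo`, `pick_defaults`) and the thresholds are invented for illustration, not any engine's actual API:

```python
from dataclasses import dataclass

@dataclass
class GpuInfo:
    # Hypothetical result of querying the driver for capabilities.
    vram_gb: int
    supports_mesh_shaders: bool
    supports_raytracing: bool

def pick_defaults(gpu: GpuInfo) -> dict:
    """Map detected capability to a default preset, so most users
    never have to open the graphics menu."""
    preset = {"texture_quality": "low", "mesh_shaders": False, "raytracing": False}
    if gpu.vram_gb >= 12:
        preset["texture_quality"] = "high"
    elif gpu.vram_gb >= 8:
        preset["texture_quality"] = "medium"
    # Only enable optional pipelines the driver actually reports,
    # and gate VRAM-hungry features on a (made-up) minimum.
    preset["mesh_shaders"] = gpu.supports_mesh_shaders
    preset["raytracing"] = gpu.supports_raytracing and gpu.vram_gb >= 10
    return preset
```

The hard part isn't this lookup, it's the testing matrix behind the thresholds, which is exactly what gets skipped when ports ship rushed.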
So tl;dr, they're not targeting the top 1% of hardware. Typically they just screw things up and don't catch it. But higher-end hardware handles poor optimization better and provides a better experience, so it can feel like they're 'targeting' that hardware when really it's a poorly optimized port.
Is there more information about this?