There were patches, however, that kept an 'epoch' (the upper 32 bits of a 64-bit transaction id) at the page level, plus some rewrite logic for when transactions too far apart to be represented as offsets from a base are about to be present on the same page. That would effectively give you 64-bit xids.
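To make that concrete, here's a minimal sketch of the idea (hypothetical code, not the actual patches; all names are made up for illustration): each page stores one 64-bit base xid, tuples store 32-bit offsets from it, and a rewrite re-bases the page when a new xid doesn't fit.

```python
# Hypothetical sketch of page-level epoch/base xids. Not PostgreSQL code;
# Page, can_store, rewrite etc. are invented names for illustration.

XID_OFFSET_MAX = 2**32 - 1  # offsets must fit the existing 32-bit tuple slot

class Page:
    def __init__(self, base_xid):
        self.base_xid = base_xid   # 64-bit base, stored once per page
        self.tuple_offsets = []    # per-tuple 32-bit offsets from the base

    def can_store(self, xid64):
        """A new xid fits if its distance from the base fits in 32 bits."""
        return 0 <= xid64 - self.base_xid <= XID_OFFSET_MAX

    def store(self, xid64):
        if not self.can_store(xid64):
            # The "rewrite logic": pick a new base so the incoming xid fits,
            # then re-encode every existing offset relative to it.
            self.rewrite(new_base=xid64 - XID_OFFSET_MAX)
        self.tuple_offsets.append(xid64 - self.base_xid)

    def rewrite(self, new_base):
        delta = self.base_xid - new_base
        # In a real implementation, tuples whose xids fall below the new
        # base would have to be frozen or pruned first; this sketch just
        # asserts that never happens.
        assert all(off + delta >= 0 for off in self.tuple_offsets)
        self.tuple_offsets = [off + delta for off in self.tuple_offsets]
        self.base_xid = new_base

    def full_xid(self, i):
        """Recover the full 64-bit xid of tuple i."""
        return self.base_xid + self.tuple_offsets[i]
```

The point of the scheme is that tuple headers stay 32 bits wide, so the on-disk format barely changes, while every xid remains recoverable as a full 64-bit value.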
The in-development zheap storage engine does something roughly akin to that, removing the need to freeze a table once it becomes older than ~2^31 minus a safety window of transactions.
Internally the system effectively already tracks xids in a 64-bit manner, albeit somewhat over-complicatedly, by keeping track of an epoch separately (there's a patch likely to land in the next version that just goes for 64 bits there). That's why you can see e.g. txid_current() return 64-bit transaction ids.
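The relationship between those 64-bit values and the 32-bit xids tuples actually store is just bit arithmetic: the upper half is the epoch, the lower half the 32-bit xid. A quick sketch (helper names are mine, not PostgreSQL's):

```python
# How a 64-bit txid (as returned by txid_current()) decomposes into the
# epoch and the 32-bit xid that tuple headers store. split_txid/combine
# are illustrative helper names, not PostgreSQL functions.

def split_txid(txid64):
    epoch = txid64 >> 32          # upper 32 bits: the epoch
    xid32 = txid64 & 0xFFFFFFFF   # lower 32 bits: the on-disk xid
    return epoch, xid32

def combine(epoch, xid32):
    return (epoch << 32) | xid32
```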
In most production cases, transactions in flight are going to affect a small number of rows overall, so you can just keep the data in memory, and spill it to disk in a separate table if it gets large.
It's only a tiny fraction of tuples that are recently committed, where visibility rules come into play. That can be a special-cased slow path.
Also, in OLTP workloads you often have a set of hotly modified data, where you then very commonly access recently modified tuples and thus need to do visibility checks.
There are obviously systems with different visibility architectures (either by reducing the types of concurrency allowed, or by using page-level information about recency of modification plus something undo-based), but given this post is about postgres, I fail to see what you're arguing about here.
That doesn't hold true when working on multi-terabyte tables or databases, where physical disk space in a chassis is actually a boundary you have to consider.
Often disk space is less the issue; it's more about whether things can fit in memory or not...