There's no chance we go for 64bit transaction ids on the tuples themselves - the space increase would be far too big. The overhead of tuple headers is already a problem, and xmin/xmax are a significant portion of that.
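For a sense of scale, here is a rough sketch of the per-tuple header fields (loosely based on PostgreSQL's HeapTupleHeaderData, with unions and details omitted; the sizes are the point):

    #include <stdint.h>

    /* Simplified sketch of PostgreSQL's heap tuple header; loosely based on
     * HeapTupleHeaderData in htup_details.h, not the exact definition. */
    typedef struct TupleHeaderSketch
    {
        uint32_t t_xmin;      /* inserting transaction id          (4 bytes) */
        uint32_t t_xmax;      /* deleting/locking transaction id   (4 bytes) */
        uint32_t t_cid;       /* command id within the transaction (4 bytes) */
        uint8_t  t_ctid[6];   /* pointer to this/newer row version (6 bytes) */
        uint16_t t_infomask2; /* attribute count plus flag bits    (2 bytes) */
        uint16_t t_infomask;  /* flag bits, incl. hint bits        (2 bytes) */
        uint8_t  t_hoff;      /* header length incl. null bitmap   (1 byte)  */
    } TupleHeaderSketch;      /* 23 bytes of fields, padded to 24 on disk */

That's 8 bytes of xmin/xmax out of ~24 bytes of header; widening both to 64bit would add another 8 bytes to every single row.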
There were patches, however, that kept an 'epoch' (the upper 32 bits of a 64bit transaction id) at the page level, plus some rewrite logic for when transactions too far apart to be represented as offsets from a single base epoch are about to end up on the same page. That would effectively allow 64bit xids.
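As a purely hypothetical illustration of the idea (names and layout are made up here, not taken from the actual patches): the page carries a 64bit base, tuples keep a 32bit value interpreted relative to it, and a page whose base can't represent a new xid has to be rewritten first:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical page-level base xid; not the layout from the real patches. */
    typedef struct PageSketch
    {
        uint64_t base_xid;   /* per-page base; supplies the upper bits */
        /* ... line pointers, tuple data ... */
    } PageSketch;

    /* Reconstruct a tuple's full 64bit xid from the page base and the
     * 32bit value stored in the tuple header, interpreted as an offset. */
    static uint64_t
    tuple_full_xid(const PageSketch *page, uint32_t tuple_xid_offset)
    {
        return page->base_xid + tuple_xid_offset;
    }

    /* Before stamping a new xid on the page: if it can't be expressed as a
     * 32bit offset from the current base, the page's tuples have to be
     * rewritten (re-based/frozen) first. */
    static bool
    xid_fits_on_page(const PageSketch *page, uint64_t new_xid)
    {
        return new_xid >= page->base_xid &&
               new_xid - page->base_xid <= UINT32_MAX;
    }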
The in-development zheap storage engine does something roughly akin to that, removing the need to perform freezing once a table becomes older than ~2^31 minus a safety window's worth of transactions.
Internally the system effectively already keeps track of xids in a 64bit manner, albeit in a somewhat over-complicated way, by tracking an epoch separately (there's a patch likely to land in the next version to just go for 64bit there). That's why you can see e.g. txid_current() return 64bit transaction ids.
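Conceptually the 64bit value is just the epoch in the upper half and the ordinary 32bit xid in the lower half; a minimal sketch of the combination, assuming the epoch simply counts xid wraparounds:

    #include <stdint.h>

    /* Sketch: combine the 32bit epoch and the 32bit on-disk xid into the
     * kind of 64bit value that e.g. txid_current() reports. */
    static uint64_t
    full_xid(uint32_t epoch, uint32_t xid)
    {
        return ((uint64_t) epoch << 32) | xid;
    }

So SELECT txid_current(); keeps increasing monotonically past 2^32, even though the xids actually stamped on tuples are 32bit and wrap around.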
Why write the transaction ids in the tuples at all?
In most production cases, transactions in flight are going to affect a small number of rows overall, so you can just keep the data in memory and store it on disk in a separate table if it gets large.
You need to access that data from different connections, so it needs to be correctly locked etc. Looking purely at the tuple, you need to know where to find its visibility information. Accessing data stored in some data structure off to the side will also have drastically worse cache locality than just storing it alongside the data. E.g. for a sequential scan these checks need to be done for every tuple, so they really need to be cheap.
In a system like postgres' current heap you cannot know whether a tuple was committed long ago without actually looking in that side table (or modifying the page the first time you do, to set a hint bit). You pretty fundamentally need something like the transaction id to do so.
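A very stripped-down, hypothetical sketch of that fast path (the real logic lives in HeapTupleSatisfiesMVCC and handles many more cases): check the hint bit first, fall back to the commit-log lookup only when it isn't set, and remember the answer on the tuple:

    #include <stdbool.h>
    #include <stdint.h>

    #define HINT_XMIN_COMMITTED 0x0001   /* made-up flag value, for illustration */

    typedef struct TupleSketch
    {
        uint32_t t_xmin;         /* inserting transaction id */
        uint16_t t_infomask;     /* hint bits live here */
    } TupleSketch;

    /* Stand-in for the commit-log ("clog") lookup the real system does. */
    extern bool clog_xid_committed(uint32_t xid);

    /* Hypothetical, heavily simplified "was xmin committed?" check. */
    static bool
    xmin_committed(TupleSketch *tup)
    {
        /* Fast path: a previously set hint bit avoids the clog lookup entirely. */
        if (tup->t_infomask & HINT_XMIN_COMMITTED)
            return true;

        if (clog_xid_committed(tup->t_xmin))
        {
            /* Remember the answer on the tuple so later readers stay cheap. */
            tup->t_infomask |= HINT_XMIN_COMMITTED;
            return true;
        }
        return false;
    }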
Also, in OLTP workloads you often have a set of pretty hotly modified data, where you then very commonly access recently modified tuples and thus need to do visibility checks.
There are obviously systems with different visibility architectures (either by reducing the types of concurrency allowed, or by using page-level information about recency of modification plus something undo-based), but given this post is about postgres, I fail to see what you're arguing about here.
Depends on your data. If you have a link table that just contains two 4-byte integers, then yes, that's pretty significant overhead. Even if you have a few hundred bytes per row, it's still not entirely negligible.
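Back-of-the-envelope for that link-table case (assuming the usual ~23-byte tuple header padded to 24 bytes, plus a 4-byte line pointer per row): 8 bytes of payload end up occupying roughly 36 bytes on disk, so the actual data is well under a quarter of the space used.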
Any overhead is generally considered irrelevant when you’re on a small database. For example, going from 50GB in a table to 75GB is 50% overhead, but easily handled in a time where you just pay for more GB on RDS.
That doesn't hold true when working on multi-terabyte tables or databases, where physical disk space in a chassis is actually a constraint you have to consider.
> Any overhead is generally considered irrelevant when you’re on a small database. For example, going from 50GB in a table to 75GB is 50% overhead, but easily handled in a time where you just pay for more GB on RDS.
Often disk space is less the issue; it's more about whether things can fit in memory or not...