Not necessarily. A page on a remote NUMA node is mappable, but access is slower. The system needs to choose intelligently which pages to put there and when to migrate them back, both for performance and for balancing. CXL memory nodes are mappable as well. Zswap is not, but a zswap page only needs to be decompressed in RAM, in contrast to the disk pagefile, where you have to block and read from the terribly slow disk. Protecting each process's working set, and deciding which pages need to be where at every given point in time, without degrading performance, while maximizing memory utilisation and reducing memory costs at Google infra scale, is a super tough challenge.
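
To make the "mappable but slower" point concrete, here's a minimal sketch using the Linux move_pages(2) syscall (assuming a Linux box with libnuma installed, at least two NUMA nodes, and linking with -lnuma; the target node number is illustrative). The mapping never changes, the kernel just migrates the page, and later accesses simply take the latency hit — which is exactly the trade-off a tiering system has to manage:

    /* Sketch: migrate one anonymous page to a (hypothetical) remote node.
     * The page stays mapped throughout; only its backing location moves. */
    #include <numaif.h>     /* move_pages, MPOL_MF_MOVE */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        long page_size = sysconf(_SC_PAGESIZE);

        /* One page of ordinary anonymous memory, touched so it is faulted in. */
        void *buf = aligned_alloc(page_size, page_size);
        memset(buf, 0xAB, page_size);

        void *pages[1]  = { buf };
        int   nodes[1]  = { 1 };    /* illustrative: ask for placement on node 1 */
        int   status[1] = { -1 };

        /* Ask the kernel to migrate the page; the virtual mapping is untouched. */
        long rc = move_pages(0 /* self */, 1, pages, nodes, status, MPOL_MF_MOVE);
        if (rc < 0) {
            perror("move_pages");
            return 1;
        }
        printf("page now on node %d\n", status[0]);

        /* Still a perfectly valid access, just (potentially) slower now. */
        printf("first byte: 0x%02x\n", ((unsigned char *)buf)[0]);
        free(buf);
        return 0;
    }

The hard part isn't this mechanism, it's deciding, fleet-wide and continuously, which pages deserve to stay hot and which can eat that extra latency.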
Sorry, didn't mean to minimize the task here, but what you just described is still a pagefile - tiering data according to usage patterns and placing it on different media with varying latency and throughput characteristics is a concept nearly as old as storage. It's cool that Google has made it Borg-compatible, but it's also not exactly a breakthrough concept.