I think it's a reasonable implementation. The id is only used to break ties between rows with the same created timestamp. This query runs some time after the inserts have happened, so it is unlikely that more rows with the same timestamp will have appeared by query time.
If that assumption holds, then sorting first by the timestamp and then by id (alphabetically) ensures the pagination is deterministic.
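To make that concrete, here's a minimal sketch of deterministic keyset pagination using (created, id) as a composite cursor. The table and column names are hypothetical, not taken from the post; SQLite is used just for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id TEXT PRIMARY KEY, created INTEGER)")
# Three rows share the same timestamp; id breaks the tie.
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("a", 100), ("c", 100), ("b", 100), ("d", 200)],
)

def page(conn, last_created=None, last_id=None, size=2):
    # First page: just take the lowest rows in (created, id) order.
    if last_created is None:
        rows = conn.execute(
            "SELECT id, created FROM events ORDER BY created, id LIMIT ?",
            (size,),
        )
    else:
        # Subsequent pages: rows strictly after the cursor, using a
        # row-value comparison so ties on created fall back to id.
        rows = conn.execute(
            "SELECT id, created FROM events "
            "WHERE (created, id) > (?, ?) "
            "ORDER BY created, id LIMIT ?",
            (last_created, last_id, size),
        )
    return rows.fetchall()

first = page(conn)                              # [("a", 100), ("b", 100)]
second = page(conn, first[-1][1], first[-1][0])  # [("c", 100), ("d", 200)]
```

The point is that no row is skipped or repeated across pages even when many rows share a timestamp, because the (created, id) pair is a total order.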
I guess we could consider edge cases where lots of rows with the same timestamp are inserted after the ingestion query has run due to latency, but that might be acceptable for this use case.
One alternative could be to add a separate column that tracks insert order. But you'd need to weigh the costs before doing that, as it could impact insert performance.
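A rough sketch of that alternative, again with hypothetical names in SQLite: a monotonically increasing column assigned at insert time, so pagination no longer depends on timestamp ties at all.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# "seq" is assigned by the database on insert and records insert order.
conn.execute(
    "CREATE TABLE events (seq INTEGER PRIMARY KEY AUTOINCREMENT, "
    "id TEXT, created INTEGER)"
)
conn.executemany(
    "INSERT INTO events (id, created) VALUES (?, ?)",
    [("a", 100), ("c", 100), ("b", 100)],
)

# Paginate strictly by insert order; ties in "created" no longer matter.
rows = conn.execute(
    "SELECT seq, id FROM events WHERE seq > ? ORDER BY seq LIMIT 2", (0,)
).fetchall()  # [(1, "a"), (2, "c")]
```

The trade-off mentioned above is real: in some databases a globally ordered sequence serializes inserts to a degree, which is exactly the cost that would need to be measured first.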
The true "modern" cool-kids solution would of course be to build a service that listens to the WAL and pushes changes into a Kafka cluster wired up to a pipeline that ingests into Elasticsearch. Much more fun and résumé-friendly than optimizing a query. I bet the author of this blog would get a much larger audience.
Bonus points if the ingestion service is written in Rust and runs on a serverless platform.