From a technical perspective, HTTP/2 makes little sense when you're commingling requests from different user agents. Header compression is largely ineffective, since requests from different clients share few header values. Persistent connections provide little benefit, as HTTP/1.1 keep-alive works just as well in a data center environment. Between servers there is no connection limit (as there is with browsers), so there aren't really the head-of-line blocking issues that HTTP/2 would eliminate.
In general, you're unlikely to benefit materially from using HTTP/2 (versus vanilla HTTPS) between your edge servers and your application servers.
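To make the keep-alive point concrete, here's a minimal, self-contained sketch (stdlib only, local throwaway server) showing that a plain HTTP/1.1 connection between two servers is already persistent: two sequential requests ride the same TCP socket with no HTTP/2 involved.

```python
import http.client
import http.server
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    # HTTP/1.1 makes persistent connections the default
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        # Content-Length lets the client know the response is complete
        # without closing the connection
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])

conn.request("GET", "/")
conn.getresponse().read()     # drain the body so the connection is reusable
sock1 = conn.sock

conn.request("GET", "/")      # second request on the same connection
conn.getresponse().read()
sock2 = conn.sock

print(sock1 is sock2)         # True: same TCP connection reused
server.shutdown()
conn.close()
```

Server-to-server pools typically hold such connections open indefinitely, which is why the "persistent connections" benefit of HTTP/2 mostly evaporates in that setting.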
Servers don't have unlimited connections, so HTTP/2 provides more throughput and concurrency over the same TCP connection pool, while also preventing a slow request from stalling the other requests queued behind it on the same connection. HPACK compression operates on individual header fields; eliding repeated keys alone can save significant overhead for smaller payloads, and the binary framing helps as well.
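A rough back-of-envelope sketch of that HPACK claim (this is *not* real HPACK encoding, just an approximation of its dynamic-table behavior: once a header field has been sent, repeating it costs roughly one index byte instead of the full key and value; the header names and sizes below are made up):

```python
# Hypothetical server-to-server headers; x-request-id is assumed to
# change per request, so only its name (not value) can be indexed.
headers = {
    "user-agent": "edge-proxy/1.0",
    "accept": "application/json",
    "authorization": "Bearer " + "x" * 64,
    "x-request-id": "varies-per-request",
}

VARYING = {"x-request-id"}

def http1_size(h):
    # HTTP/1.1 resends "key: value\r\n" for every header, every request
    return sum(len(k) + 2 + len(v) + 2 for k, v in h.items())

def hpack_repeat_size(h, varying=VARYING):
    # Approximation: ~1 byte per fully indexed repeat; varying fields
    # resend the value with an indexed name (~2 bytes + value length)
    return sum(1 if k not in varying else 2 + len(v) for k, v in h.items())

n = 1000  # requests over one connection
naive = n * http1_size(headers)
compressed = http1_size(headers) + (n - 1) * hpack_repeat_size(headers)
print(naive, compressed)
```

Even with this crude model, the repeated key/value bytes dominate the HTTP/1.1 total, which is where the savings on small payloads come from.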
It's no different from building an HTTP/2 (or HTTP/QUIC) to HTTP/1 gateway. Both sides are HTTP APIs.
The statefulness lives at a lower layer, where streams/requests are multiplexed. That state doesn't need to be conveyed or carried over during proxying.
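A tiny sketch of why that works (all names here are hypothetical, not from any real gateway): the connection-level state is just the stream ID, and a gateway can drop it entirely when serializing each stream as a standalone HTTP/1.1 request.

```python
from dataclasses import dataclass

@dataclass
class Stream:
    """One multiplexed HTTP/2 request, as a gateway might see it."""
    stream_id: int      # connection-local state; meaningless upstream
    method: str
    path: str
    headers: dict

def to_http1(stream: Stream) -> bytes:
    """Serialize one stream as an independent HTTP/1.1 request.
    The stream_id (the 'stateful' part) is simply not carried over."""
    lines = [f"{stream.method} {stream.path} HTTP/1.1"]
    lines += [f"{k}: {v}" for k, v in stream.headers.items()]
    return ("\r\n".join(lines) + "\r\n\r\n").encode()

req = to_http1(Stream(3, "GET", "/api", {"host": "app.internal"}))
print(req.decode().splitlines()[0])  # GET /api HTTP/1.1
```

The gateway only remembers which upstream response maps back to which stream ID; nothing about the multiplexing itself crosses to the HTTP/1 side.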