1) IPVS with DSR. You'll get about 1M PPS through each "load balancer". Regular, cheap Linux boxes do well here. You can get fancier, but a pair of these would scale most games just getting started.
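A minimal IPVS direct-routing setup looks roughly like this (a sketch, not a production config; the VIP, port, and real-server addresses are made up, and it assumes a Linux box with the `ip_vs` module available):

```shell
# Load the IPVS module and create a virtual service on the VIP
modprobe ip_vs
ipvsadm -A -t 203.0.113.10:7777 -s wlc        # weighted least-connection scheduling

# Add real servers in gatewaying (-g) mode, i.e. DSR: requests pass
# through the balancer, but replies go straight from the real server
ipvsadm -a -t 203.0.113.10:7777 -r 10.0.0.11 -g
ipvsadm -a -t 203.0.113.10:7777 -r 10.0.0.12 -g

# Each real server must also hold the VIP on a non-ARPing interface
# so it accepts packets addressed to it without answering ARP:
#   ip addr add 203.0.113.10/32 dev lo
#   sysctl -w net.ipv4.conf.all.arp_ignore=1
#   sysctl -w net.ipv4.conf.all.arp_announce=2
```

The DSR part is what buys the throughput: the balancer only touches the inbound half of each flow, so reply traffic never crosses it.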
2) Run a dispatch server. This is what we used in MSN Messenger. I won't go too deep into the details, but the gist is that we had two groups of user-facing servers: connection servers (CS) and dispatch servers (DP). A connection server was where your client's TCP connection terminated, and it was responsible for pushing updates to your client. The DP server merely "dispatched" you to a CS on initial client connect; it had no role other than tracking which CS servers were alive and sending you to one of the least loaded. The CS servers advertised their existence and load on a multicast address. Yeah, we had a hardware load balancer in front of the DP servers, but we could just as easily have used a software implementation. We were part of Hotmail, so there was loads of networking hardware to use.
Client -> Cisco CSM -> DP server -> redirect to CS -> CS
After that, the CSM and DP server were out of the loop and everything went to/from the CS directly. Might not be super elegant, but it worked really well.
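A toy sketch of that dispatch pattern (hypothetical names throughout; the real system learned CS load over multicast, which is simplified here to explicit `report` calls):

```python
import random

class Dispatcher:
    """Tracks connection-server (CS) load reports and hands out the
    least-loaded live server. After the redirect, the dispatcher is
    out of the loop entirely."""

    def __init__(self):
        self.cs_load = {}  # CS address -> last reported load (0.0 to 1.0)

    def report(self, cs_addr, load):
        # In the real system this arrived on a multicast address.
        self.cs_load[cs_addr] = load

    def remove(self, cs_addr):
        # Called when a CS stops advertising (presumed dead).
        self.cs_load.pop(cs_addr, None)

    def dispatch(self):
        if not self.cs_load:
            raise RuntimeError("no connection servers alive")
        # Pick among the least-loaded servers, breaking ties randomly.
        lowest = min(self.cs_load.values())
        candidates = [a for a, l in self.cs_load.items() if l == lowest]
        return random.choice(candidates)

dp = Dispatcher()
dp.report("cs1.example.net:443", 0.42)
dp.report("cs2.example.net:443", 0.17)
print(dp.dispatch())  # -> cs2.example.net:443
```

The client then reconnects directly to the returned CS address, so the dispatcher's only per-client cost is one tiny request.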
Random is significantly easier and cheaper to implement. When you scale your service to multiple back-ends for the first time, you pragmatically start off with a random client-side load balancer, because it is cheap and simple.
Load balancing is one of those things that, once it works to some minimum extent, nobody cares about improving. Swapping out the client-side load balancer for a 'real' load balancer is the last thing on the team's mind. So you'll see a very large number of mature products continuing to use them.
Last point: random client-side load balancing is a true fix for the SPOF problem, whereas with the other methods you still have at least one SPOF somewhere in the system.
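A random client-side balancer really is just a few lines — a sketch, assuming the client ships with (or fetches) a static list of back-end addresses (hypothetical names):

```python
import random

BACKENDS = [
    "api1.example.net:443",
    "api2.example.net:443",
    "api3.example.net:443",
]

def pick_backend(backends=BACKENDS):
    # No shared state, no extra hop, no balancer process to fail:
    # every client independently spreads load by chance.
    return random.choice(backends)

def pick_backend_excluding(dead, backends=BACKENDS):
    # On connection failure the client just retries with another pick,
    # which is how dead back-ends get routed around without a central
    # health checker.
    live = [b for b in backends if b not in dead]
    return random.choice(live) if live else None
```

That retry-on-failure loop, not the random pick itself, is what removes the SPOF: any single back-end can die and every client routes around it on its own.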
There are heaps of other advantages to moving to HTTP/2, including an average performance boost of around 15%. It also transparently downgrades to HTTP/1 for unsupported clients.