If you don't know what the word latency means, this setup will teach you all about it.
Another problem is that your various providers will probably charge you for WAN bandwidth, so you will pay three times for every request: twice from backend to balancer (one egress charge from provider A, one from provider B), plus a third charge to send the same data back out from the balancer to the customer.
Plus you will be miserable trying to keep your site up 100% of the time across two cloud providers. Have a problem on either one, and 50% of your capacity will go offline.
Might be better to realize that "my servers were seized by the FBI" is a rare occurrence and you can probably afford a few hours' worth of downtime and/or data loss. Make offsite backups from your primary provider on a relatively long timescale (once per day, maybe once per hour if you're more sensitive; live database replication for the crazy-sensitive) and have a procedure for spinning those up at a secondary provider. Test that procedure every month or so. The beautiful thing about cloud services is that you can pay for your emergency-backup servers by the hour and only when you are using them, or testing them.
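The backup-and-restore-drill loop described above can be sketched roughly like this. Everything here is a placeholder: a real setup would push the archive to a second provider's object storage (and probably use your database's native dump tool) rather than tar a directory to local disk, but the shape of "periodic snapshot, then a tested restore procedure" is the same:

```python
import tarfile
from datetime import datetime, timezone
from pathlib import Path

def make_backup(data_dir: Path, offsite_dir: Path) -> Path:
    """Archive data_dir into a timestamped tarball in offsite_dir.

    Stands in for the 'offsite backup on a relatively long
    timescale' step; run it from cron once per day or hour.
    """
    offsite_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    archive = offsite_dir / f"backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(data_dir, arcname=data_dir.name)
    return archive

def restore_backup(archive: Path, restore_dir: Path) -> Path:
    """The 'spin it up at a secondary provider' step.

    This is the part you actually test every month or so: an
    untested restore procedure is not a backup.
    """
    restore_dir.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(restore_dir)
    return restore_dir
```

The point of keeping the restore path as real code (not a wiki page) is that the monthly drill can literally execute it against hour-by-hour rented servers and verify the result.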
If you're doing something where the FBI might seize your servers, latency is probably not a paramount concern.
Capacity might also not be of primary importance.
I am just trying to find the most resilient form of online hosting there is, one that masks the layers as much as possible.
The tin-foil-hat wearer in me can see many, many reasons why one would want to know how to accomplish something like this.