Right now, I ping ~50ms out to us-east-1 and things feel "pretty good" in our server-side web UIs. If I could drop that by a factor of 10, we'd be getting into gaming-monitor latency territory, and pure UI state changes could resolve in timeframes that are perceptually instantaneous for most users. Something like clicking a button to pop a modal wouldn't even be worth making a client-side interaction anymore; you'd just wire it up with a trivial @if(showModal) inclusion block in the server-side HTML template.
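As a minimal sketch of that idea, assuming a hypothetical Flask/Jinja app (the route and template names here are made up): the modal's "open" state lives entirely server-side, and toggling it is just a POST plus a re-render.

# Server-rendered modal toggle: no client-side state, just a round trip.
from flask import Flask, redirect, render_template_string, session

app = Flask(__name__)
app.secret_key = "dev-only-secret"  # hypothetical; use a real secret in production

PAGE = """
<form method="post" action="/toggle-modal"><button>Toggle modal</button></form>
{% if show_modal %}<div class="modal">Server-rendered modal</div>{% endif %}
"""

@app.route("/")
def index():
    # All UI state lives server-side; the template just branches on it.
    return render_template_string(PAGE, show_modal=session.get("show_modal", False))

@app.route("/toggle-modal", methods=["POST"])
def toggle_modal():
    session["show_modal"] = not session.get("show_modal", False)
    return redirect("/")

At ~5ms round trips, that POST-redirect-render cycle would land well under typical perception thresholds.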
Granted, this imposes a pretty harsh geographic constraint if you have just one server, but it's likely feasible to separate the view layer from your persistence/stateful layers: you could host your view-rendering services in multiple Local Zones, with all the business logic and state kept in one of the primary regions. Not everything can be instantaneous, but if the UI itself is highly responsive, there are countless UX approaches for telling a user, in a friendly way, that they simply need to wait a moment. Being able to build your web UI around blocking calls into business logic seems like a powerful place to be in terms of simplicity and control.
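As a rough sketch of that split (the endpoint and handler names here are hypothetical), the view tier in a Local Zone just makes blocking calls back into the primary region:

import requests

# Hypothetical stateful/business-logic tier, kept in a primary region.
CORE_API = "https://core.us-west-2.internal.example"

def render_order_page(order_id: str) -> str:
    # One blocking hop back to the primary region for real business logic.
    # If it's slow, the still-responsive UI can render a friendly wait state.
    try:
        order = requests.get(f"{CORE_API}/orders/{order_id}", timeout=2).json()
    except requests.Timeout:
        return "<p>Still working on it, hang tight...</p>"
    return f"<h1>Order {order['id']}</h1><p>Status: {order['status']}</p>"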
IDK if making these based on the same infra would be the right abstraction layer.
My personal belief is that running the us-west-1 region is far more expensive for AWS than running their other regions. Real estate in the Bay Area is expensive, as is electricity.
My hunch is that us-west-2 is more popular (literally every company I've worked for that used, or currently uses, AWS chose us-west-2 over us-west-1, even though us-west-1 is geographically closer).
It's on here.
ping -c 5 18.104.22.168
PING 18.104.22.168 (18.104.22.168): 56 data bytes
64 bytes from 18.104.22.168: icmp_seq=0 ttl=235 time=16.018 ms
64 bytes from 18.104.22.168: icmp_seq=1 ttl=235 time=13.120 ms
64 bytes from 18.104.22.168: icmp_seq=2 ttl=235 time=23.026 ms
64 bytes from 18.104.22.168: icmp_seq=3 ttl=235 time=13.656 ms
64 bytes from 18.104.22.168: icmp_seq=4 ttl=235 time=15.517 ms
AWS East is in Ashburn, VA, just up Route 28 -- a stone's throw from Dulles Airport. Not sure if they're still in the Dupont Fabros buildings or the Equinix campus.
If interested, I listed them all out a while back here: https://news.ycombinator.com/item?id=20876909
All of my Asian bandwidth comes through LA or Vancouver. Big, cheap peering with other telecoms to cross the ocean. We already have a decent colo presence in LA, this will allow us to consolidate some of that, and move other VMs.
Google has a new facility in the area as well: https://cloud.google.com/about/locations/la/
But maybe it needs to do super-high-quality realtime 4K greenscreen that would overload an embedded CPU, and there's a built-in AWS library that does it, and the bandwidth is high enough, and you don't want to buy dedicated greenscreen hardware that sits next to your booth (which is the other thing we did). Then maybe.
if 'us-west-2' in region:
    # do it
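For what it's worth, that substring check also matches the new LA Local Zone, since its AZ names embed the parent region:

region = "us-west-2-lax-1a"  # LA Local Zone AZ name; parent region is us-west-2
if 'us-west-2' in region:
    print("matched")  # fires for the Local Zone too; the check can't tell them apart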
That is why our primary location is in Denver. No earthquakes, no terrorism, no floods.
rsync.net customers all over California happily keep their backups in boring old Denver ...
(with a one-hop route into the he.net core in Fremont ...)
Which would you choose?
Scroll down to the "AWS Global Infrastructure Map".
Wow, yet more obscure, non-intuitive naming from AWS. Why not just us-west-3a-local?
Your suggestion makes no sense: it implies an entirely new region ‘us-west-3’ but with an entirely different naming scheme (no regions have an ‘a’ on the end of them), and the ‘local’ suffix provides zero context while leaving no room for expansion.