I've never had an issue with their blob or CDN speeds, but App Service disk latency is an issue, specifically with PHP apps in my case.
If you Google/Bing for "Azure Website slow", this issue is the cause of 99% of the complaints. The current solution, "Local Cache", is lacking: it does work, just not in my use cases.
What matters is:
- the chosen VM size
- disk striping (or apps that can use multiple disks)
- whether you attach disks with host caching on or off, so you can work within both throttles
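A quick way to see the effect of those knobs is a small latency probe run against a file on the disk under test, once per configuration (caching on/off, striped vs. single disk). This is just a sketch; the file sizes and counts are made up, and on Linux you'd normally add `O_DIRECT` to bypass the OS page cache so you measure the disk rather than RAM:

```python
import os
import random
import tempfile
import time


def read_latencies(path: str, file_size: int = 64 * 1024 * 1024,
                   reads: int = 200, block: int = 4096) -> list[float]:
    """Time random 4 KiB reads against a file on the disk under test."""
    # Create a test file of the requested size.
    with open(path, "wb") as f:
        f.write(os.urandom(file_size))
    lat = []
    fd = os.open(path, os.O_RDONLY)  # add os.O_DIRECT on Linux to skip the page cache
    try:
        for _ in range(reads):
            offset = random.randrange(0, file_size - block, block)
            t0 = time.perf_counter()
            os.pread(fd, block, offset)
            lat.append(time.perf_counter() - t0)
    finally:
        os.close(fd)
        os.unlink(path)
    return sorted(lat)


if __name__ == "__main__":
    lats = read_latencies(tempfile.mktemp(dir="."))
    print(f"p50={lats[len(lats) // 2] * 1e3:.3f} ms  "
          f"p99={lats[int(len(lats) * 0.99)] * 1e3:.3f} ms")
```

Run it on the data disk's mount point, flip the host caching setting in the portal, and compare the percentiles.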
As usual, the interesting information comes not during regular operation but during incidents, when it is not uncommon for latency to spike _massively_.
The percentiles are in the images; I was being lazy and didn't add them to the text. If enough (well, a few) people want them, I'll add them.
Azure Storage either works 100% consistently for days and weeks, or there's an outage and it's down for days, so even when I have run longer tests they don't show any difference.
If you're monitoring during one of their outages, latency basically goes to a day per IOP :)
I agree with the last part, but I think it's important to measure the effects during an outage too. One we often see is fsync operations taking minutes with write caching disabled, and that happens outside of outages as well!
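For what it's worth, fsync stalls like that are easy to catch with a tiny probe. A minimal sketch, assuming you run it against a file on the disk in question (the write count and block size are arbitrary):

```python
import os
import tempfile
import time


def fsync_latencies(path: str, writes: int = 100, block: int = 4096) -> list[float]:
    """Time write+fsync pairs; each fsync forces the write to durable storage."""
    buf = os.urandom(block)
    lat = []
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        for _ in range(writes):
            os.write(fd, buf)
            t0 = time.perf_counter()
            os.fsync(fd)  # this is the call that stalls when write caching is off
            lat.append(time.perf_counter() - t0)
    finally:
        os.close(fd)
        os.unlink(path)
    return sorted(lat)


if __name__ == "__main__":
    lats = fsync_latencies(tempfile.mktemp(dir="."))
    print(f"worst fsync: {lats[-1] * 1e3:.2f} ms")
```

Leave it running in a loop and you'll see exactly when the minute-long fsyncs start.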
I am surprised by the poor performance of the remote SSD. I was under the impression that IP latency within the same datacentre would be <1ms. Does anyone know what is causing this?
If you're using the free tier of App Service plans, yes, you're going to have perf limitations. That's not Azure-specific.
If you choose HDDs instead of SSDs, you're going to have perf limitations. That's not Azure-specific either.
The cost of SSDs is about the same across AWS/Azure/GCP.
There are some bugs with the new archive tier, and some weird behaviors on non-SSD disks when you start forcing high queue depths.