Numerous times I've had EC2 instances and EBS lose contact with each other (or suffer huge network latency, which is essentially the same thing). Since my instances run off RAID arrays of EBS volumes, essentially everything dies until EBS reappears.
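For context, a minimal sketch of the kind of setup I mean, assuming boto3 and mdadm; the instance ID, device names, and array layout are purely illustrative:

```python
# Sketch: attach several EBS volumes to one instance and stripe them with
# mdadm. In a RAID 0 stripe, a stall on any single EBS volume stalls the
# whole array, which is why everything dies together.
import subprocess
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is illustrative

instance_id = "i-0123456789abcdef0"  # hypothetical instance
devices = ["/dev/sdf", "/dev/sdg", "/dev/sdh", "/dev/sdi"]

for dev in devices:
    vol = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100)
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
    ec2.attach_volume(VolumeId=vol["VolumeId"],
                      InstanceId=instance_id, Device=dev)

# On the instance itself: assemble the stripe across the attached volumes.
subprocess.run(
    ["mdadm", "--create", "/dev/md0", "--level=0",
     "--raid-devices", str(len(devices))] + devices,
    check=True,
)
```

The array is only as available as its slowest member, so one unreachable volume takes the whole filesystem down with it.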
I am not a virtualization wizard, but I always thought Xen and other hypervisors had CPU usage caps. If you're not getting built-in protection from noisy neighbors, then a VPS becomes just a crappier version of good old shared hosting. True? False?
P.S. I've been a Slicehost user for 3+ years, and every instance I've had there has exhibited pretty much the expected performance.
If they guarantee X performance but have been providing 2X all along, and now they're only providing X, customers will complain. There are different ways to configure Xen, so it's not clear exactly what Amazon is doing.
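For example, Xen's credit scheduler has two knobs per guest: a weight (its relative share under contention) and a cap (a hard ceiling). With no cap set, a guest can borrow idle neighbors' cycles and see 2X for years, then fall back to X once the neighbors get busy. A minimal sketch using the classic xm toolstack; the domain name and numbers are hypothetical, not anything we know about Amazon's setup:

```python
# Sketch of the two credit-scheduler knobs (xm toolstack assumed; the
# domain name and values are illustrative).
import subprocess

DOMAIN = "guest1"  # hypothetical guest

# Weight: relative share when CPUs are contended (default 256).
# Cap: hard ceiling as a percentage of one physical CPU; 0 means uncapped,
# so the guest may burst well above its weighted share while neighbors idle.
subprocess.run(["xm", "sched-credit", "-d", DOMAIN, "-w", "256", "-c", "40"],
               check=True)
```

If Amazon sets only weights and no caps, "degraded" performance may just be the guaranteed share reasserting itself.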
Shouldn't Amazon be providing customers with reports to this effect? It can't be that difficult to say that within each 10-minute interval i you got X_i% of a CPU, received N_i packets, etc.
This is the sort of data Amazon should collect themselves to analyze the systems within the cloud, so the customer never (OK, rarely) sees a problem.
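Amazon does expose some of this through CloudWatch; a minimal sketch of pulling CPU figures in exactly those 10-minute buckets, assuming boto3 (the instance ID and region are placeholders, and NetworkPacketsIn can be fetched the same way):

```python
# Sketch: pull average CPU utilization for an instance in 10-minute buckets
# from CloudWatch (instance ID and region are placeholders).
from datetime import datetime, timedelta
import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")

end = datetime.utcnow()
resp = cw.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=end - timedelta(hours=6),
    EndTime=end,
    Period=600,            # the 10-minute intervals described above
    Statistics=["Average"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f'{point["Average"]:.1f}% CPU')
```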
This sounds reasonable, but the method could be improved by instantiating the new instance first and only then removing the old one: while the old instance still occupies its slot, you're much less likely to end up in the same location as before.
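A minimal sketch of that launch-then-terminate ordering, assuming boto3; the AMI, instance type, and IDs are placeholders:

```python
# Sketch: launch the replacement first, wait for it to come up, and only
# then terminate the old instance (all identifiers are placeholders).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
old_instance_id = "i-0123456789abcdef0"

resp = ec2.run_instances(ImageId="ami-12345678", InstanceType="m1.small",
                         MinCount=1, MaxCount=1)
new_instance_id = resp["Instances"][0]["InstanceId"]

# While the old instance is still running it occupies its slot, which makes
# it less likely (though not impossible) that the new one lands there.
ec2.get_waiter("instance_running").wait(InstanceIds=[new_instance_id])
ec2.terminate_instances(InstanceIds=[old_instance_id])
```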
I don't know the EC2 architecture well enough to be sure, but there may even be ways of telling whether your new instance landed on the same problematic host as the original one (perhaps by tracerouting the problem instance and finding it to be very nearby). If it did, presumably you can instantiate a third and repeat until you find a suitable instance, at which point you kill the others.
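One crude heuristic along those lines, sketched in Python: time TCP connects from the new instance to the old one's private address, since consistently sub-millisecond round trips would hint the two are very close, possibly on the same host. The address, port, and threshold are all assumptions:

```python
# Sketch: crude co-location probe. Time a few TCP connects to the old
# instance's private IP; consistently tiny RTTs suggest the two instances
# are very close, perhaps on the same physical host. This is a heuristic,
# not a guarantee, and all values below are placeholders.
import socket
import time

OLD_PRIVATE_IP = "10.0.0.12"  # hypothetical
PORT = 22                     # something known to be listening

def connect_rtt(host, port, timeout=2.0):
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return time.perf_counter() - start

samples = [connect_rtt(OLD_PRIVATE_IP, PORT) for _ in range(10)]
best = min(samples)
print(f"best RTT: {best * 1000:.3f} ms")
if best < 0.0005:  # arbitrary threshold for "suspiciously close"
    print("Probably the same host or rack; consider launching another.")
```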
Our "feelings" are different, and since the author does not provide a single number and small ec2s still perform the way they always did I can only conclude that their software or web app is becoming bloated, or they dont know how to measure.