The cost that everyone fails to recognize is the performance cost of virtualization. Simply comparing X EC2 instances to X bare-metal servers ignores the fact that a bare-metal box can deliver anywhere from 1x to 50x the performance for equivalent specifications. For memory-bound applications you might get equivalent performance for the specs, but for IOPS-bound or CPU-bound workloads you'll probably take an order-of-magnitude performance hit on the same hardware specification once it's virtualized.
Any analysis like this needs to start by finding a set of comparable servers with performance parity (for the given application), not merely specification parity.
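One way to establish performance parity is to time the actual workload (or a representative kernel of it) on each candidate box and compare the ratio of results, not the spec sheets. A minimal sketch in Python; `cpu_kernel` here is a made-up stand-in for whatever your real application does:

```python
import time

def benchmark(fn, repeats=5):
    # Best-of-N wall-clock time. On a shared (possibly
    # oversubscribed) host, min is less noisy than mean.
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return min(times)

def cpu_kernel(n=200_000):
    # Illustrative CPU-bound loop; replace with a representative
    # slice of the real workload before drawing any conclusions.
    total = 0
    for i in range(2, n):
        total += (i * i) % 97
    return total

# Run the same script on each candidate server and compare:
# print(benchmark(cpu_kernel))
```

Comparing the elapsed times across machines gives you a per-workload performance ratio, which is what the dollars-per-unit-of-work comparison should be built on.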
A better comparison might be dedicated hardware versus virtualized instances in EC2, rather than versus virtualization on standalone hardware, since EC2 is a shared environment.
I understand that in principle there shouldn't be much disparity for CPU-bound tasks (since most instructions execute natively on the hardware), but our benchmarks show a 40x performance hit for heavily CPU-bound, single-threaded tasks between a large EC2 instance and an entry-level SoftLayer dedicated box (doing computer vision work). Perhaps it's a cache-contention issue caused by Xen and the other VMs sharing the hardware, or perhaps it's the fact that EC2 is built on older hardware that might not support some of the more advanced CPU features. Regardless, core for core, at an anecdotal level we have seen a stunningly large impact from switching to dedicated hardware.
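The "older hardware / missing CPU features" theory is at least easy to check: on a Linux guest, the feature flags the (virtual) CPU exposes are listed on the `flags` line of `/proc/cpuinfo`. A quick hedged sketch; the flag names in `wanted` are just examples, pick whichever ones your computer-vision libraries are actually compiled to use:

```python
def missing_flags(cpuinfo_text, wanted=("sse4_2", "avx", "aes")):
    # Parse the 'flags' line of /proc/cpuinfo (Linux-specific)
    # and report which requested features the CPU lacks.
    # 'wanted' is illustrative, not a canonical list.
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            present = set(line.split(":", 1)[1].split())
            return [f for f in wanted if f not in present]
    return list(wanted)  # no flags line found: report all as missing

# Usage on a Linux guest:
# with open("/proc/cpuinfo") as f:
#     print(missing_flags(f.read()))
```

If the dedicated box reports flags the EC2 instance doesn't, and your hot loops are vectorized, that alone can account for a large per-core gap.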
That's true, but nearly every virtual-server provider oversubscribes its physical hosts; there's no 1:1 mapping between virtual and physical CPUs. Consequently, your virtual machine may have to wait its turn before its vCPU actually gets scheduled on a physical core.
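On a Linux guest, that waiting shows up as "steal" time: runnable time during which the hypervisor was serving someone else's vCPU. It's the eighth counter on the `cpu` line of `/proc/stat` (after user, nice, system, idle, iowait, irq, softirq). A small sketch, assuming a Linux guest:

```python
def steal_fraction(stat_text):
    # Fraction of total CPU time stolen by the hypervisor,
    # read from the aggregate 'cpu' line of /proc/stat (Linux).
    # Field order: user nice system idle iowait irq softirq steal ...
    for line in stat_text.splitlines():
        if line.startswith("cpu "):
            fields = [int(x) for x in line.split()[1:]]
            steal = fields[7] if len(fields) > 7 else 0
            return steal / sum(fields)
    return 0.0

# Usage (a persistently high value suggests the physical
# host is oversubscribed):
# with open("/proc/stat") as f:
#     print(steal_fraction(f.read()))
```

The same number is visible as the `st` column in `top` or `vmstat`, so you can watch it live while your benchmark runs.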