Linode is hard to beat these days. I've been a customer for a few years and adore the shit out of them.
Digital Ocean has become quite a competitor (and I have a box with them now for a staging environment), but they don't provide a private/internal network between your boxes, and that's something I can't live without. I like to keep app servers and db servers isolated from the real world by one box in front of all of them ... which is not something you can currently do with Digital Ocean.
I love Heroku's platform and think they've done some amazing things for our industry, but I don't agree with any of their recent PR moves. Linode, on the other hand, has never stirred up drama or bullshit since I started working with them. They're straight shooters, and that gives me a great deal of confidence.
I had a personally hair-raising experience this weekend trying to migrate a smaller server that kept hitting swap over to a larger one without too much downtime. I was able to do it pretty painlessly, but man, I wish I had waited just a few days for this!
At $20/mo Digital Ocean offers twice as much memory, 1 TB more transfer, and 4 GB less disk (but it is SSD...)
Digital Ocean's lack of a "private" network is a silly complaint; if you really need one, set up an encrypted tunnel yourself. I wouldn't trust anything else anyway. Also, I highly doubt that traffic between your DO DB and app servers leaves the datacenter, but you could test this.
But their network is horrible. I moved my irc client/mumble server to DO when they were first announced on HN, and the intermittent lag made it impossible to even chat (mtr confirming multiple times that the issue was on DO's end). If I can't even irc from their servers it doesn't matter what price they charge.
I think all in all I spent almost a month on DO, and I believe I left under 2 weeks ago. I have a friend that's still on DO (he likes the low price and needs the RAM to compile rust) and in his experience the network issues are still around but not as bad. Either they've improved things or droves of people like me tried it for a month and ditched.
We've been running many nodes (in the Dallas DC) in production on Linode for years and have seen only one significant network issue like you describe, which we "fixed" by rebuilding the node before Linode support could narrow down the cause. So in our experience this isn't widespread.
The entire point of Linode's private network is that you're not routing across the Internet to deliver traffic to a machine in the next rack. If you're not using RFC 1918 space that is properly configured, at least one router has to make a decision on whether to eject the packet onto the Internet or keep it inside, as you allude, which means you've added at least one hop to all private communications. The reasons you don't want to do that will be obvious once you scale a bit.
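For reference, checking whether an address actually sits in RFC 1918 private space is trivial from the stdlib; a quick sketch, with made-up addresses for illustration:

```python
# Check whether each address falls in private (RFC 1918 etc.) space.
# The addresses below are made up for illustration.
import ipaddress

for addr in ["192.168.130.5", "10.8.0.12", "66.175.208.10"]:
    print(addr, ipaddress.ip_address(addr).is_private)
```

(Note that `is_private` also flags loopback and link-local space, not just the three RFC 1918 blocks, but for sanity-checking a host's interfaces it does the job.)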
By all means, encrypt your traffic on the private network if you're so inclined, but encrypting across the public IP space and encrypting across RFC 1918 space do not accomplish the same goal, particularly not with the same latency or redundancy characteristics.
"But 4x less CPU."
You are seriously naive if you think the number of logical cores the hypervisor presents to your VM is the sole determiner of CPU execution resources.
Here is a counterexample: imagine I have two VM hosts, each with 16 logical cores. On one I could pin each VM to one logical core; on the other I could run 300 VMs and give each VM 24 logical cores... The first one is going to perform much better.
Also, some hypervisors (VMware, for example) only execute a VM when as many physical cores are free as the VM has vCPUs. So having many logical cores in your VM can negatively affect CPU scheduling.
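To make the co-scheduling point concrete, here's a toy model (my own illustration, not any hypervisor's actual scheduler):

```python
def can_schedule(vm_vcpus, free_host_cores):
    """Strict co-scheduling, grossly simplified: a VM runs only if
    there is one free physical core per vCPU it was given."""
    return vm_vcpus <= free_host_cores

# A 24-vCPU VM on a 16-core host can never be scheduled at all,
# while a pinned 1-vCPU VM always fits.
print(can_schedule(24, 16))
print(can_schedule(1, 16))
```

Real schedulers (including later VMware releases with relaxed co-scheduling) are far more forgiving than this, but the direction of the effect is the same: more vCPUs means more constraints on when your VM can run.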
He's not naive; you're being overly cynical. This is like pointing out that an SSD could technically be programmed to work much more slowly than a 5400 RPM drive and thus be a worse value than Linode's spinning platters — unless you have a reasonable belief that somebody is actually doing that, it's just FUD.
Based on the benchmarks I've seen, it appears that Linode really does give the kind of concurrency you'd expect from four cores (i.e. if your problem is parallelizable, you can scale up on Linode better than Digital Ocean, whereas Digital Ocean will work much better if your program is serial and needs to hit the disk a lot).
Actually, hypervisor details aside, 8 VCPUs fully pegged at 100% user or system time will consume 4x the capacity of 2 VCPUs fully pegged at 100% user or system time in a domU, assuming comparable chips. Always, and regardless of how Xen schedules the VCPUs onto physical cores.
Your hand-waving about the hypervisor is unwarranted, since hypervisor interference under Xen shows up as its own time from the perspective of the domU (steal%) and nobody worth mentioning actually does hosting with VMware.
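For what it's worth, steal time really is visible from inside the guest; a sketch that computes steal% from two `/proc/stat` samples (the sample lines below are fabricated for illustration):

```python
# /proc/stat aggregate "cpu" line counters, in order:
#   user nice system idle iowait irq softirq steal guest guest_nice
# Steal is the 8th counter: time the hypervisor ran someone else
# while this domU had runnable work.
def steal_fraction(sample1, sample2):
    f1 = [int(x) for x in sample1.split()[1:]]
    f2 = [int(x) for x in sample2.split()[1:]]
    deltas = [b - a for a, b in zip(f1, f2)]
    return deltas[7] / sum(deltas)

# Two fabricated samples, as if read a few seconds apart:
s1 = "cpu 100 0 50 800 10 0 5 35 0 0"
s2 = "cpu 200 0 90 1500 20 0 10 180 0 0"
print(round(steal_fraction(s1, s2), 3))  # 0.145, i.e. ~14.5% steal
```

In practice you'd read `/proc/stat` twice with a sleep in between, or just watch the `st` column in `top`/`vmstat`.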
You don't know if the first one will perform better or not. It depends on the workload of each VM. If most of the VMs are idle most of the time, then when one of them needs CPU the second setup may even perform better.
And I doubt that Linode runs more VMs per host than DO. Judging by the pricing, it's probably the other way around.
> Also, some hypervisors (VMware, for example) only execute a VM when as many physical cores are free as the VM has vCPUs. So having many logical cores in your VM can negatively affect CPU scheduling.
Good thing Linode and DigitalOcean don't use VMware.
No. Just because you get 8 vCPUs on Linode does not mean you have 4x the CPU. You share those 8 vCPUs with dozens or hundreds of other servers (depending on VM size). More small slices of a pie that is the same size is not really more.
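The pie-slice arithmetic is easy to sketch. All the numbers below are hypothetical — neither provider publishes contention ratios — but they show why vCPU count alone tells you little:

```python
def fair_share_cores(host_cores, vms_on_host, vcpus_per_vm):
    """Worst case under full contention: a fair scheduler gives each
    VM host_cores / vms_on_host, capped at the vCPUs it can use.
    (Hypothetical model; real hosts are rarely all-busy at once.)"""
    return min(vcpus_per_vm, host_cores / vms_on_host)

# 8 vCPUs on a crowded 16-core host can be worth under half a core...
print(fair_share_cores(host_cores=16, vms_on_host=40, vcpus_per_vm=8))  # 0.4
# ...while 2 vCPUs on a lightly loaded host deliver the full 2 cores.
print(fair_share_cores(host_cores=16, vms_on_host=4, vcpus_per_vm=2))   # 2
```

The flip side — the earlier point about parallelizable workloads — is that when the host *isn't* contended, the 8-vCPU guest really can burst to 8 cores and the 2-vCPU guest never can.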