
At $20/mo Digital Ocean offers twice as much memory, 1 TB more transfer and 4 GB less disk (but it is SSD...)

Digital Ocean's lack of a "private" network is silly; if you really need it, set up an encrypted tunnel yourself, I would not trust anything else anyway. Also I highly doubt that traffic between your DO DB and app server ever leaves the datacenter, but you could test this.
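
For the "you could test this" part, one rough check is to trace the route from the app droplet to the DB droplet's public address and see whether any hop looks like an outside carrier. A minimal sketch in Python (the IP is a placeholder, and it assumes traceroute is installed on the droplet):

    import subprocess

    DB_PUBLIC_IP = "203.0.113.10"   # placeholder: your DB droplet's public IP

    # -n skips reverse DNS so the hop IPs are easy to eyeball
    route = subprocess.run(["traceroute", "-n", DB_PUBLIC_IP],
                           capture_output=True, text=True)
    print(route.stdout)  # only one or two internal-looking hops suggests
                         # the traffic never left the datacenter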




But their network is horrible. I moved my irc client/mumble server to DO when they were first announced on HN, and the intermittent lag made it impossible to even chat (mtr confirming multiple times that the issue was on DO's end). If I can't even irc from their servers it doesn't matter what price they charge.
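
(If anyone wants to reproduce that kind of check without mtr: a crude sketch is to time repeated TCP connects to your own box and flag the spikes. Host, port and threshold below are placeholders.)

    import socket, time

    HOST, PORT = "droplet.example.com", 22   # placeholder target
    for i in range(60):
        t0 = time.time()
        try:
            socket.create_connection((HOST, PORT), timeout=5).close()
            ms = (time.time() - t0) * 1000
            if ms > 200:                     # arbitrary "lag spike" threshold
                print("sample %d: %.0f ms  <-- spike" % (i, ms))
        except OSError as e:
            print("sample %d: connect failed (%s)" % (i, e))
        time.sleep(1)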


Their network is unreliable in my experience as well.


IRC's the main thing I use my Digital Ocean VPS for - and it's been great.


Hmm, so far I haven't experienced anything like this. /me crosses fingers.

How long ago was this?


I think all in all I spent almost a month on DO, and I believe I left under 2 weeks ago. I have a friend that's still on DO (he likes the low price and needs the RAM to compile rust) and in his experience the network issues are still around but not as bad. Either they've improved things or droves of people like me tried it for a month and ditched.


We've been running many nodes (in the Dallas DC) in production on Linode for years and have only seen one significant network issue like you described, which we "fixed" by rebuilding the node before Linode support was able to narrow down the cause. So in our experience this isn't widespread.


I think he was referring to DigitalOcean, not Linode.


The entire point of Linode's private network is that you're not routing across the Internet to deliver traffic to a machine in the next rack. If you're not using RFC 1918 space that is properly configured, at least one router has to make a decision on whether to eject the packet onto the Internet or keep it inside, as you allude, which means you've added at least one hop to all private communications. The reasons you don't want to do that will be obvious once you scale a bit.

By all means, encrypt your traffic on the private network if you're so inclined, but encrypting across the public IP space and encrypting across RFC 1918 space do not accomplish the same goal, particularly not with the same latency or redundancy characteristics.


Routers route packets between networks, regardless of the address space being used.

Nothing about the use of public addresses forces an extra hop in the way you suggest.


I don't have


You only pay for outbound traffic on Linode; is that the same on Digital Ocean?


Since we don't have customer facing analytics for it at the moment, we're not charging :)


Yes.


If you can get your 20 bucks to them in the first place. Their payment processing doesn't seem to be quite state of the art...


But 4x less CPU. And support is a joke.


"But 4x less CPU." You are seriously naive if you think the number of logical cores the hypervisor presents to your VM is the sole determiner of CPU execution resources.

Here is a counterexample: imagine I have two VM host servers, each with 16 logical cores. On one I could pin each VM to 1 logical core; on the other I could run 300 VMs and give each VM 24 logical cores... The first one is going to perform much better.
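
To put rough numbers on that (using the made-up figures from the example above, not real Linode or DO ratios):

    physical_cores = 16

    pinned = 16 * 1      # 16 VMs, 1 vCPU each
    packed = 300 * 24    # 300 VMs, 24 vCPUs each

    print("pinned host: %.0fx overcommit" % (pinned / physical_cores))   # 1x
    print("packed host: %.0fx overcommit" % (packed / physical_cores))   # 450x

The presented core count says nothing about how thin those 450 slices get under load.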

Also, some hypervisors (for example VMware) only schedule a VM to run when there are as many free physical cores available as the VM has vCPUs. So having many logical cores in your VM can negatively influence CPU scheduling.

"And support is a joke." Not in my experience


He's not naive; you're being overly cynical. This is like pointing out that an SSD could technically be programmed to work much more slowly than a 5400 RPM drive and thus be a worse value than Linode's spinning platters; unless you have a reasonable belief that somebody is actually doing that, it's just FUD.

Based on the benchmarks I've seen, it appears that Linode really does give the kind of concurrency you'd expect from four cores (i.e. if your problem is parallelizable, you can scale up on Linode better than Digital Ocean, whereas Digital Ocean will work much better if your program is serial and needs to hit the disk a lot).
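
If you'd rather check that against your own workload than trust third-party benchmarks, a quick sketch is to time a CPU-bound task at increasing worker counts and see where the scaling flattens out (the burn function is just a stand-in for your real work):

    import time
    from multiprocessing import Pool

    def burn(_):
        # stand-in for one CPU-bound unit of work
        return sum(i * i for i in range(2_000_000))

    if __name__ == "__main__":
        for workers in (1, 2, 4, 8):
            start = time.time()
            with Pool(workers) as p:
                p.map(burn, range(32))
            print("%d workers: %.2fs" % (workers, time.time() - start))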


Linode's support, like GitHub, is so good they should be tested for performance enhancing drugs.


Actually, hypervisor details aside, 8 VCPUs fully pegged at 100% user or system time will consume 4x the capacity of 2 VCPUs fully pegged at 100% user or system time in a domU, assuming comparable chips. Always and regardless of how Xen schedules the VCPUs onto actual nodes.

Your hand-waving about the hypervisor is unwarranted, since hypervisor interference under Xen shows up as its own time from the perspective of the domU (steal%) and nobody worth mentioning actually does hosting with VMware.
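
(For anyone who hasn't looked at it: on a Linux guest, steal shows up as the 8th field of the "cpu" line in /proc/stat. A minimal sketch that samples it over five seconds:)

    import time

    def cpu_times():
        with open("/proc/stat") as f:
            return [int(x) for x in f.readline().split()[1:]]  # drop "cpu" label

    before = cpu_times()
    time.sleep(5)
    after = cpu_times()

    delta = [b - a for a, b in zip(before, after)]
    total = sum(delta)
    steal = delta[7] if len(delta) > 7 else 0   # 8th field is steal
    print("steal%%: %.1f" % (100.0 * steal / total if total else 0.0))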


I was only comparing what each advertises.

You don't know if the first one will perform better or not. It depends on the workload of each VM. If most of the VMs are idle most of the time, the second setup may even perform better when one of them needs CPU.

And I doubt that Linode runs more VMs per host than DO. Judging by the pricing, it's probably the other way around.


> Also some hypervisors (for example VMware) only executes a VM when they have as many cores as the VM has cores available for execution. So having many logical cores in your VM can negatively influence CPU scheduling.

Good thing Linode and DigitalOcean don't use VMware.


No. Just because you get 8 vCPUs on Linode does not mean you have 4x the CPU. You share those 8 vCPUs with dozens or hundreds of other servers (depending on VM size). More small slices of a pie that is the same size is not really more.


On Linode it is never hundreds. "On average, a Linode 512 host has 40 Linodes on it. A Linode 1024 host has on average 20. Linode 2048 host: 10 Linodes; Linode 4096 host: 5;"
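
(Incidentally, those averages all work out to roughly the same amount of guest RAM per host: 40 × 512 MB, 20 × 1024 MB, 10 × 2048 MB and 5 × 4096 MB are each about 20 GB.)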

On the other hand, as far as I know DO doesn't say how many VMs share a host.


Support is awesome.



