DigitalOcean introduces Load Balancers (digitalocean.com)
308 points by AYBABTME 219 days ago | 136 comments

"No ops needed". Every time I see such a piece of PR I cry a little inside. Sure, spread such things even more, so that everyone around, devs, managers, and business, takes it even more seriously and builds more horrible, unsustainable, overpriced, insecure and failing infrastructures and services. No ops needed in the serverless lambda cloud! ;)

Operations were always those guys who kept complaining about "reliability" and "maintainability" and how the perfect thing that worked on my machine woke them up in the middle of the night, and how I must bother with inconsequential things such as "packaging" and "configuration" and "dependencies." Those incompetents, glad to see them gone.

On a serious note, you are absolutely right. IT has always had two opposing forces for a reason: they provided balance between change and sustainability. The big problem was the lack of communication between the two sides, which "devops" was supposed to solve.

Instead, "devops" is now developers doing what they've always done, and caring for change above all like they always did, but pretending to care about the needs of services in production. I cringe when I think about all those containers where the application is continuously delivered but the bundled openssl isn't updated when vulnerabilities are found. Welcome to the brave new world.

We're moving in a no-ops direction mainly because the most vocal folks come from startups that don't last long enough to see where coherent operations matter. They go under well before that. But this idea is bleeding into companies where it does matter, and we'll see how that goes in a few years.

We started a Devops team at work. After a year there have been 0 Ops hires, it's all Devs.

I think/hope we might see this change soon. Currently there's no real penalty for poor ops. If your customer data gets hacked, there are very few actual penalties, most of the pain is in reputation. I feel like there's been a lot of pressure to have government penalties for poor security practices, especially when so many companies "need" all the user data they can get their hands on.

> After a year there have been 0 Ops hires, it's all Devs.

You say that like it's a bad thing. If you want DevOps to be successful, you don't hire a "DevOps team". You hire a team of devs to make ops tools that are so easy to use that all the other devs can manage their own ops.

The idea is that doing "good ops" is so easy that everyone in the company does it.

You do need at least someone who really knows what "ops tools" need to do, someone with sufficient experience to know the difference between "good ops" and "the first idea some developer with no ops experience had that looked like it solved the problem on his local machine"...

Sure. But you can usually find a Dev who used to do ops that can fill that role.

From my (admittedly somewhat limited) experience, an old-school sysadmin (who's likely moved into dev for the better paycheque) is often _way_ better at filling that role than a more dev-oriented person who's done some devops in the past.

In my experience, "good ops" people are the ones with lots of war stories to tell of disasters or narrowly averted disasters, either their own or those of friends/colleagues they hang out with. It's a profession whose lessons seem to be best taught by spectacular failures and heroic recovery efforts, rather than college courses, vendor/consultant training, or "industry best practices" documents...

As I journey through my nascent (6? year) career in a sysadmin role (support previous to that), it strikes me more and more that ops is a craft, like blue-collar trades. You can't become a hot-shot ops staffer by being smart and attending a month of intensives; so much of it is learned through experience (and discussion). I think of myself more as a 'journeyman sysadmin' than a 'midlevel sysadmin'...

Absolutely! I always suggest to young folks who ask how to get into operations that they should set up a linux box and use it as their primary system. There is no better way to rack up your own war stories than trying to use a linux box on a day to day basis.

I agree. My point was that the OP's complaint was about not having any ops hires, and I was saying that he shouldn't need any. He can find a crusty old sysadmin who figured out that making their job easier by coding is a good thing. :)

> You hire a team of devs to make ops tools that are so easy to use that all the other devs can manage their own ops.

We used to have people to make those ops tools; we called them ops. The whole devops thing seems to be based on the false notion that ops didn't automate their work.

Equating "ops" with "tools" is just another symptom of the problems I was referring to. Tools are not the solution. Tools do not replace process, practice and goals. Tools are just tools.

As a developer, absolutely. You need that kick in the ass from the grumpy old bastard who refuses to set up your queue with a max depth of 1 million, or open up the firewall between the database and the DMZ.

As a developer, I hate ops people for making my job slower, but I can't say they're wrong. They make stuff reliable and backed up.

I'm just glad there is someone thinking about those important things I don't want to think about.

> build more horrible, unsustainable, overpriced, insecure and failing infrastructures and services

This is a bit too much.

I'm not saying that Fortune 500 companies should use DO, but I don't see what's wrong with having a couple of servers for your 50k visitors per month web-app.

Sure, "doing it right" will be hard, but with DO you can literally buy yourself more time until you are able to hire the right people to do that.

> I don't see what's wrong with having a couple of servers for your 50k visitors per month web-app.

It's about context. Your 50k visitor a month webapp doesn't need a load balancer. When you actually need a load balancer, chances are you need someone to deal with myriad other reliability concerns as well.

Well, if you are using VMs and the site cannot afford to go down, then the 50k visitor site does need a cloud load balancer. Individual VMs can fail at any time. These cloud load balancers can be part of your HA solution, but yeah... one also needs to make all other parts of the infrastructure HA (or at least HA-friendly) for this to work as expected, which is not necessarily a simple feat.

I hate being woken at night by emergency calls, so I would prefer to set everyone and everything up with HA solutions.

I have a 100k/month web app, and I have one $10 server for the app and one for the db on DO. Based on the CPU + disk usage, I'd say I could grow 4x and stay with the same setup. It will be a long time before I need a load balancer.

Either my workload is atypical or people massively overestimate the server power they need.

Except when either your app server or db server goes down on a Friday night (Sod's law implies this will happen on a weekend you're heading out of town).

Now your 100k/month platform is down for 3 days.

You've saved something like $40/month (two additional app/db VMs plus a load balancer), but a failure could realistically cost you 1/10th of a month of downtime (which, simplistically, looks like $10k). Do you think your platform with two single points of failure is _not_ going to go down like this sometime in the next 5 years?

(Of course, a weekend's worth of downtime might not be a 10% revenue hit - but depending on your SLAs and penalties it's also possible to be a lot more than that...)

This is like driving around a $100k car with zero insurance. You might be the best driver in the world, but...
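The arithmetic in that comment can be worked through explicitly. All figures below are the thread's own assumptions ($40/month for the extra VMs and load balancer, a $10k cost for one weekend outage), not measurements:

```python
# Back-of-the-envelope from the thread: redundancy spend over five years
# versus the cost of a single weekend-long outage. All numbers are the
# parent comment's assumptions, not real data.
redundancy_cost_per_month = 40        # two extra app/db VMs plus a load balancer
horizon_months = 5 * 12
total_redundancy_cost = redundancy_cost_per_month * horizon_months  # $2,400

outage_cost = 10_000                  # "1/10th of a month of downtime"

# Even a single such outage in five years outweighs the redundancy spend.
assert total_redundancy_cost == 2_400
assert outage_cost > total_redundancy_cost
```

Under these assumptions, one outage in five years already costs roughly four times the total price of the redundant setup.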

The type of companies that create horrible, unsustainable, overpriced, insecure and failing infrastructures and services using cloud systems will create even worse systems with an in house ops team.

I don't follow your logic.

You're suggesting that because a company without any experienced Ops staff makes bad decisions about their Infra, if the same company had experienced Ops staff, those staff would somehow make even worse decisions about their Infra?

The GP's logic is that your company either has experienced ops folks or it doesn't, and if it doesn't, it will create shitty systems whether cloud-based or traditional.

What dasil003 said. The type of company making terrible choices with a cloud provider will not hire and empower a good Ops team.

If the company doesn't hire a couple of experienced Ops people to manage their cloud infrastructure, why do you think they will do a good job hiring a team? You can shoot yourself in the foot with any gun. However, one path requires many more choices, more people, and more process to get it right. If they can't get the simpler path right, then what makes you think they will be able to go down the more complex path?

In my experience, the average company running on-premises servers with an Ops staff is horrible, unsustainable, overpriced, insecure and failing.

To me, your original post is extremely misleading given your apparent intent.

Yeah, if you have no ops staff and try to run your own servers, you're going to do something terrible. No one is suggesting that.

Not everyone can afford a crack ops team. Not enough crack ops people exist to go around even if they could.

"The Cloud" lets us do WAY better on reliability and a little cheaper on cost than we could with our own bare metal servers.

I do wish I could drop a Samsung PRO 960 in our database VM though :(

> I do wish I could drop a Samsung PRO 960 in our database VM though :(

Consider Hetzner if you want high IO at low prices. You'll get "regular" SSDs in their VPSs, but you can get mirrored NVMe SSDs in their bare metal servers [1]. Nothing but great experiences with them. They have APIs to let you automate provisioning of the bare-metal servers if you want to tie it into a larger cloud deployment.

[1] https://www.hetzner.de/us/hosting/produkte_rootserver/px61nv...

Completely agree. In the last couple of years there has been a push to say that ops are no longer needed and that a dev/engineer can do ops. I've found this to be a fallacy: while it's true a developer might be able to use Docker or serverless, they usually have zero idea of the configuration used and no actual in-depth understanding of how these services work and communicate, and thus no idea how to scale or fix problems.

For the last 5 or 6 years, people have been waving their arms saying platforms as a service (Heroku, etc.) or containers eliminate the need for servers. However, the cloud server market has only ballooned in size. AWS has continued to explode with huge revenue numbers. Google Cloud is fast maturing and a threat to Amazon. Azure is competing as well. Servers aren't going away... Ops aren't going anywhere...

Anyway, I bring up this rant because I just founded my third startup, Elastic Byte (https://elasticbyte.net), which is DevOps and cloud infrastructure management as a service. If anybody is looking for professional ops to manage their cloud infrastructure (AWS, GCP, DigitalOcean, Azure) I'd love to chat.

One of the things that used to separate "small business" from "big business" was reach. A small business was one that operated within the confines of a region, a town, or a city. They were characterized by being asymmetric in their execution, so they might be a shop with a great sales guy and lousy inventory control, or excellent quality and workmanship but poor management of expenses or receipts. If they were asymmetric enough, it would kill them and they would be one more business that was born, lived, and died.

Now however you can start a business in your basement that has global reach. It's still a small business, and it's still likely asymmetric in its ability to execute, but now it has high visibility. What is more it depends on network infrastructure in order to work.

Everyone wants to be the Microsoft Back Office version of "cloud". Install it, click the defaults, and it provides the infrastructure you need to run your small business.

I get your point, and an attitude such as "lol who needs ops, developers can do it" can lead to really shitty situations indeed.

On the other hand, you have developers / small teams who are just trying to get things off the ground and want some basic redundancy and other benefits of load balancing. That's the primary audience of such service, in my opinion. Sure, you can just spin up HAProxy and even a very basic configuration might do the job. But it's one more thing to learn and maintain, often one more thing to train people to work with, etc. The same can be said about other popular managed services, including Amazon S3, RDS etc. It's a decision you have to make.

So I truly understand your position but I don't think that such sarcastic attitude actually helps anyone.

I'm not sure if I want to hug you or sit in the corner with you and cry.

DigitalOcean peeps: a lot of cloud provider load balancers (specifically: AWS ELBs and Heroku's various HTTPS products) are currently really slow due to:

- No ECC cert support (slowing down initial connection time)

- No HTTP/2 (so no multiplexing, and text based protocol, slowing down fetching the actual page)

Do DO Load Balancers support these?

The new application ELB supports HTTP2. One can confirm by looking at the Chrome developer tools.


We actually had the new load balancers break an app when they added h2 support.

There was some legacy code expecting headers to be all caps, and Amazon rewrites them to all lowercase when they pass through the traffic.

That was a fun one to figure out.
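For reference, HTTP/2 (RFC 7540, section 8.1.2) requires header field names to be serialized in lowercase, which is exactly this trap. A minimal sketch of the failure mode and the fix; the header name and value here are made up:

```python
# Headers as they arrive after an HTTP/2 hop: the spec requires field
# names to be lowercase, so the proxy rewrites whatever case was sent.
received = {"x-request-id": "abc123"}

# Legacy code doing an exact-case lookup silently finds nothing.
assert received.get("X-Request-ID") is None

# HTTP header names are case-insensitive, so normalize on lookup instead.
def get_header(headers, name):
    wanted = name.lower()
    for key, value in headers.items():
        if key.lower() == wanted:
            return value
    return None

assert get_header(received, "X-Request-ID") == "abc123"
```

Most HTTP client/server libraries ship a case-insensitive header container for exactly this reason; the bug bites code that treats headers as a plain dict.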

> The new application ELB supports HTTP2.

Do you mean the new DO LBs or something else? The DO LBs are not referred to as 'elastic' / ELBs.

I've just set up a DO LB and it's HTTP 1.1 all the way.

Edit: have confirmed with Digital Ocean: no HTTP/2 support (there's HTTP/2 passthrough, but you can't terminate there).

They do support ECC certs though.

No, I meant ELB, as in AWS.

I am curious to hear what people use DigitalOcean for. It's great for running one-off servers and for WordPress, Ghost, or a LAMP stack. But I can hardly imagine people using load balancers like in AWS. Does anyone have a use case? Thanks.

Um… production environments?

Early on you probably don't need it for the load but for handling recovery if an application server goes down.

Prior to now if you wanted to run a passable production environment for a small application you'd probably do something like a single persistent store behind 2 application servers fronted by an nginx to balance load across the two.

Doing that requires nginx config and finding an off the shelf package for handling healthchecks and automated removal of failed items from the nginx load balance rotation. Now DO will do that for you at slightly above the cost of you rolling it yourself but with the benefit of being (probably) more reliable and requiring near-zero time investment.
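For concreteness, the hand-rolled version of that setup might look something like the sketch below; the backend addresses are hypothetical private-network droplets. Note that open-source nginx only does passive health checks (max_fails/fail_timeout pulls a backend out of rotation after failed requests rather than probing it proactively), which is part of what the off-the-shelf health-check tooling, or DO's managed LB, would add:

```nginx
# Two app droplets behind nginx; a backend is marked down for 30s
# after 3 failed requests (passive health checking only).
upstream app_servers {
    server 10.0.0.11:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.12:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```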

Edit: Other threads have pointed out that keepalived on the VM works for recovery on single compute instances. Seems likely that this is for DO's larger clients who need recovery plus multiple compute instances with load balanced across them.

So, I am not talking about the load balancer feature itself. I understand what it does.

My question is whether any of DO's users want this today. If so, I am interested in knowing their use case (what kind of app requires a load balancer and nothing else). I don't want some imaginative workload, I want a real example. For example, I cannot use it for my wp installs because the db does not scale without some work.

We use it for all our front-end infrastructure and AWS for all our backend infrastructure; the front-end devs prefer DO and can consume an API created by the back-end devs on AWS.

Can you talk about the additional latency between your front end and backend from running in different cloud environments?

How do you deal with an availability zone failure of a cloud provider?

I couldn't tell you technically as I didn't build it; however, I use our app and it's very quick. I know that we keep our regions near each other (east/east), the front end can fail over through NYC locations, and we have no customers outside of the east coast either, so that helps as well. As for the back end, we are restricted to AWS GovCloud regardless.

I'm using it for a failover setup (2 x db + 2 x app + 1 x lb) for a small-traffic app.

For the LB I use an haproxy server only, which has its own healthcheck.

We use Digital Ocean at shopblocks.

We use it for both production, and staging, though we do offload our database to Google Cloud.

On DO we host our application servers (frontend and backend), as well as our caching, search and monitoring services. These are all load balanced behind Haproxy.

The apps you listed (+ other similar apps) are exactly what I use DO for. To me they are the KISS cloud provider. Nothing super fancy but the flip side to that is that they are easy to use and interface with. When I want an MVP or a non-enterprise type app I go to DO. If I want something more complex or something that needs more PaaS-like services (databases, app hosting, etc) I turn to the big three.

EDIT: This is coming from someone who works at MSFT and gets a bit of Azure for free. To me it's all about using the right tool for the job.

Azure's VMs were awesome till they decided to create the new portal and add in new features. Then they made networking insanely complicated.

But Azure is very pricey if you want to use it just for VMs.

Depending on your workloads, DO servers can come out cheaper or more expensive than AWS, but bandwidth at DO is so much cheaper than AWS that for bandwidth intensive stuff I can't serve entirely out of Europe (where Hetzner is vastly cheaper than DO again), DO is often a much cheaper alternative.

Sometimes we use it as a cost-cutting do-it-yourself CDN in front of AWS for clients that insist on S3 for storage (and again where we can't just cache everything in Europe for latency reasons). For bandwidth heavy applications, you can pay for significant numbers of Droplets from the AWS bandwidth savings alone.

Lack of load balancers has meant resorting to DNS-based failover and hoping clients handle short TTLs (it works reasonably well, but with occasional issues), but the cost reductions are sufficient to make that a worthwhile tradeoff for many clients.

We run our entire stack on DO - a couple of machines each with a small cluster of containers and a load balancer, plus a VPN server, database servers, rabbitmq server, etc.. I find it a very painless way to do things.

If your DigitalOcean application suddenly became more popular and was performing poorly under load, why wouldn't you use this option? It would be a lot of work to move everything over to AWS.

Load balancer is not a magic bullet. It only expands the compute. Your DB, static pages, storage are all bottlenecks.

Don't get me wrong, I am just wondering if digital ocean wants to be AWS.

> Load balancer is not a magic bullet. It only expands the compute. Your DB, static pages, storage are all bottlenecks.

I think we can make the generous assumption that you've profiled and decided that compute is the problem and not static pages or DB. If that were the case, why would you not use DO load balancing?

> I am just wondering if digital ocean wants to be AWS.

No, they want to be better than AWS. They absolutely want that AWS $$$. Selling to devs who create "MVPs" and prototypes might pay the bills for now but if they are going to grow into their valuation they need larger clients. Seems likely that large clients wanted load balancing.

We're building a specialized Rails web app that gets deployed, customized and possibly extended (via additional engines) per tenant. Having a single VM per customer is just so KISS, simplifying every step of the process, billing, security, data isolation, load management... Any other solution would become ludicrously complex in comparison, for no reason at all, buzzwords be damned.

What does this have to do with DO specifically? Any cloud provider can give you as many VMs as you want.

Services like Cloud66 let you target many different cloud providers, so the complexity of apps you deploy doesn't need to be any different on DO than AWS.

When you attach a load balancer, in AWS, it uses ELB; in DO it spins up an HAProxy instance. There are likely advantages to having parity between the stacks, having the load balancer higher in the stack.


For me, somehow, the load balancer never puts a server back in rotation even though direct ssh and http access work. I need to reboot the EC2 instance before it comes back into rotation.

DO is cheap, you can achieve scaling and failover for a lower fee than with AWS. Their new load balancing solution is nice but lacks features and its price is 3x higher than what we provide at https://pikacloud.com.

This is a short-term illusion; with instance reservations and the wider variety of storage options available, AWS can work out cheaper even for small accounts.

Source: spent several days in December cost modelling a few services for a client, and was surprised by the result.

I have been under the impression that structuring a system to use AWS with portability in mind ends up costing as much or more than optimizing the structure for AWS such that one is effectively locked-in at scale. That's a trade-off that has to be factored in to a decision.

I mean granted I've only done rough guesstimates for some toy applications for myself and some friends and family, so I could be totally off base here.

You can run this yourself with nginx, but if you have everything inside DO then the ability to easily add new droplets is great. Create droplets on demand from snapshots, add them to the load balancer, and you are ready.

> Pay as you use. Hourly rate, monthly billing.

Nah. I pay monthly, include monthly pricing. I can probably set up an nginx/varnish instance faster than I can calculate the monthly cost when you're billing by the hour.

You aren't the audience, then. These offerings are for people who don't want to manage an nginx/varnish stack.

One thing that is nice is the automatic failover (of the routing stack) when a VM or datacenter goes down.

I'm not personally, no; the company I work for is - that's why I looked.

Essentially what I'm after is DigitalOcean's load balancing per-domain rather than per-infrastructure.

If it was my own stuff I'd just do it myself; work can afford the premium if it includes a support team for when things break.

Billing is based on usage for load balancers. You can't offer a fixed monthly fee, as users may consume all of your network or system capacity while paying as if they were using nothing. This is not a sustainable business.

I'm only really after a ballpark figure; multiply everything by 744 (the hours in a 31-day month) and say it's estimated. Put a price per gigabyte of overage underneath.

DO's offering is a fixed monthly rate.

They're $0.03 per hour [0].

[0] https://www.digitalocean.com/pricing/

Sorry, I meant that it's not billed by network usage, unlike what the parent was saying is necessary.

It also says right on that page "$20 per month"

I just run my VPN server there since they charge by hour. So I create my VPN on DO whenever I need it and destroy it automatically after a while. But for everything else I have a dedicated server at Hetzner.

In a word: redundancy.

It is very interesting to see how DO is expanding its portfolio slowly and steadily. Does anybody have relatively large-scale (>50k users), mildly mission-critical applications running on DO? Can you share your experience with the existing services?

100-150K here. DO has been very good for us; with some in-house auto-scaling the bill goes as low as $250, and uptime has been in the 99.9% range.

2 x LB + frontend servers (from 1 to N) + Postgres (master + multiple read slaves, depending on the number of frontend servers) + elasticsearch + redis + image servers.

Only thing that'd probably move us off DO is GCP adding Postgres to their Cloud SQL offering.

But overall, very happy.

Wow, sounds good. Which DB would be cheaper: a managed one or a self-hosted one?

I have been planning to use DO as my personal test-bed (for anything and everything).

Cheaper? Definitely self-hosted. But as with everything, you get what you pay for: you'll have to take care of failover and backups on your own.

Start with a small instance on DO and enable backups (for a small fee DO will create weekly backups of your instance). As your project grows, tune PgSQL and resize the instance up, then add a slave. Weekly backups of the instance might not be enough at this point, so also pg_dump the database at a more frequent interval (once a day) and send the backup to 1 or 2 offsite stores (S3, another remote server, etc.).
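As a sketch of that schedule, a hypothetical crontab entry might look like the following; the database name, backup path, and S3 bucket are placeholders, and it assumes pg_dump and a configured aws CLI are available on the box:

```cron
# Daily logical backup at 03:15, on top of DO's weekly droplet backups.
# Note: % must be escaped as \% inside a crontab line.
15 3 * * * pg_dump --format=custom mydb > /var/backups/mydb-$(date +\%F).dump && aws s3 cp /var/backups/mydb-$(date +\%F).dump s3://my-offsite-backups/
```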

Managed comes with its own problems as well, but at least you can blame someone else when it breaks :).

We have such an application running on DO, 100k visitors a month. We have a big application server running and other servers for the DB (postgres and redis) and static files (which is basically an nginx mirror).

So far, we are satisfied. Over the last year, there were 4 outages caused by DO, each lasting 30 min to 1 h, which is alright I guess.

Since we have experienced more traffic peaks lately, we may use their load balancers in the future. The application servers are not the problem though, more the DB server. This is more of a pain, since setting up and maintaining a DB cluster is quite a lot of work. We might go to AWS for this.

TL;DR: DO works for larger projects; databases are a bit of a pain though.

I think the DO Load Balancers won't help with your DB operational concern. You'll have to use some other in-house or outsourced solution.

If you switch to AWS, will you be maintaining a cross-datacenter VPN connection or something?

Yeah the load balancer was just intended for the application servers. Running a hot DB secondary with just read accesses is possible by setting up manually, but tends to require a lot of maintenance work during updates in our experience.

To be honest, we have not figured out how to connect the DO servers to AWS yet. Do you have experience with that?

Having a cross-datacenter VPN is one way, but I am not sure how bad your latencies will end up. Most likely they won't be performant, especially in the case of an ACID-compliant DB.

We outsourced our database to Google Cloud after having pains with percona on DO with clustering. Might be worth looking at that. No regrets for us so far, except that restoring backups is painful if you want 1 out of many databases restored.

We have 50K+ users/month running on DO. But I think you meant 50k/day. Anyway, no real issues so far. Excellent service to boot.

Edit - we might actually be using the load balancer pretty soon as we prepare to scale.

Well, I guess still a fair number. Good to hear it has been working out well.

I don't know if DO plans to provide a managed Kubernetes offering similar to GKE, but if they did I would use it.

Having a load balancer is necessary to integrate with k8s, similarly to GCLB and AWS, so this is a step in the right direction.

We've been using Kubernetes internally for quite some time, replacing a few of our older and more difficult-to-manage systems with it. We are looking into productionizing Kubernetes so that our customers can also use it if they like, and making that experience seamless.

It's something that we are actively working on in 2017 but it's still too early to give much more guidance.

But certainly, if you wanted to go through the process of setting up your own kubernetes cluster on DO, you could do so. =]

I have some questions about your load balancer and your plans for kube on DO:

- Does your load balancer support HTTP/2 yet?

- Can you share your scripts for setting up an HA kube cluster on DO?

- Do you plan to provide a kubernetes cloud provider for DO?

Having a DO cloud provider and standard scripts would probably help the adoption of kube on DO. Without a cloud provider I can't see many benefits compared to traditional bare-metal providers, which are still cheaper.

Our goal would be to provide a complete product solution so you would just be able to log in and create a cluster. Unfortunately, our current implementation that we are running behind the scenes wouldn't be directly portable to our end customers, and we wouldn't want users to go through the hassle of running scripts to spin up their cluster.

We are hosting a meetup tomorrow in our NYC HQ, but we also stream remotely if you are interested in hearing more from one of our engineering managers, and you can ask him questions directly =]


I don't know what your load balancer architecture is, but it's disappointing that using a load balancer has to cost $20/month, which is fine when you need it but drastically increases your costs when you don't.

It would be really nice to see someone offer a solution that can actually scale from a single node to multiple nodes as necessary. Being able to run my application on the load balancer inside a container would be pretty nice.

Kontena (https://www.kontena.io) has pretty nice integration with DO.

GCP also provides globally distributed load balancers, which reduces the overhead of deploying in multiple regions.

Could someone more knowledgeable please provide some insight into why using their load balancers would be preferred over, say, having regular Droplet instances act as load balancers by leveraging nginx, for instance?

Single droplets are a single point of failure and have limited availability. You can take actions to mitigate this SPOF, like running multiple droplets and distributing requests by adding all of these droplets to your DNS configuration, also known as DNS round robin.

Another solution is to pay DigitalOcean to provide network endpoints that are more available, allowing you to provide higher availability for your application. Because DO works on a different level of abstraction, they have more possibilities to provide this availability.
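As a sketch of why DNS round robin only partially mitigates the SPOF: the name resolves to several addresses, but there are no health checks, so each client must try the published endpoints itself until one answers. The helper below is illustrative, not any provider's API:

```python
import socket

def connect_with_failover(endpoints, timeout=2.0):
    """Try each (host, port) from the DNS rotation in order.

    Round robin spreads load across the published A records, but it does
    no health checking: a dead backend is only skipped because the client
    itself retries the next address in the list.
    """
    last_error = None
    for host, port in endpoints:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as exc:
            last_error = exc
    raise last_error
```

A real client would build the endpoint list from `socket.getaddrinfo()` on the balanced hostname; many clients don't bother, which is the "hoping clients handle short TTLs" caveat mentioned elsewhere in the thread.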

eicnix says here "Droplets are a single-point-of-failure".

I don't see any direct explanation as to how these LBs aren't. They're limited to a single region for backend droplets, and who knows how fragile they are.

Putting up a droplet with caddy/nginx/apache acting as a reverse proxy could, depending on your use case, be 75% cheaper than the solution offered.

But, if that Droplet goes down, so does the reverse proxy. You'd need at least two Droplets to ensure no SPOF.

Bandwidth Bandwidth Bandwidth.

Linode's load balancers have clearly defined support for 10,000 concurrent connections. Am I missing where the limits are on DO's?

No, you are not alone. I can't find any performance figures, tech specs, concurrent connection limits, etc. for DO's load balancers.

Does anybody know what they're powering this service with? I presume that it's software-based: nginx, haproxy, traefik, or similar.

Load balancers is nice, but per-client private networking would be better. Just a thought.

That's currently in development for 2017; it's slated towards the end of the year. As we make more progress that timeline will get tightened up. But it is something we are actively working on, and it was kicked off at the tail end of 2016.

I'm surprised this wasn't upvoted higher. Having multiple servers without the ability to connect them in a secure network doesn't really make sense. Of course you can DIY with VPNs, but it is really a service that a cloud host should provide.

I was wondering recently if you could use Wireguard VPN networking to overlay your own encrypted network at Digital Ocean.

I'm using tinc for that. It's much less hassle compared to securing every single connection between your hosts.

Looking forward to not caring about this anymore. Secure private networking out of the box is now all that's left for me to stay on DO.

Can you elaborate on your comment? Do D.O. instances all share the same vlan?

Yes, they do.

What?! Is that made public anywhere? I have not heard that before. Why don't they have an overlay? I would think this is a non-starter for many(most?) businesses.

I think it's pretty clearly explained in DigitalOcean's docs. They have a public interface and a private interface. The private interface is not Internet routable but is routable by all other instances in the datacenter. I agree that it is a non-starter for me, but it's definitely public.

Huh, I'm impressed DO have managed this long without them. I can see a lot of use cases for this, especially in the failover use case.

I was looking forward to this, but the price point is too high, I could run VMs that use keepalive or similar for cheaper.

For failure recovery on a single compute node that'll work, but doesn't that story get more complex as you add more compute VMs? You know, the actual… ah… balancing of load?

Well, I could just add two compute nodes running IPVS and keepalived and get the same thing, no? I think that was the OP's point. It seems like that might be far cheaper than this offering.
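For reference, a minimal keepalived sketch of that setup: a VRRP-managed floating VIP plus IPVS round-robin to two real servers (all addresses hypothetical, and whether VRRP works as-is on DO's network is another question):

```conf
# keepalived.conf -- VI_1 holds the virtual IP, IPVS spreads the load
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        10.132.0.100
    }
}

virtual_server 10.132.0.100 80 {
    lb_algo rr
    lb_kind NAT
    protocol TCP
    real_server 10.132.0.2 80 { TCP_CHECK { connect_timeout 3 } }
    real_server 10.132.0.3 80 { TCP_CHECK { connect_timeout 3 } }
}
```

A second node with `state BACKUP` and lower priority takes over the VIP if the master dies.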

I get it and I think for failover it doesn't make sense to use this feature. If you want one feature of load balancing, sure, roll that yourself. What happens as you want each additional feature, incrementally getting closer to rolling your own load balancing instead of just flipping a switch to enable it for $20/month?

My point was just that the complexity goes up as you want more. At some point, it's worth the money to just use someone else's solution to all of those small problems instead of maintaining your own worse version. Most small teams likely shouldn't be writing their own nginx, haproxy and keepalived configuration for load balancing. The $20-$200/month that it'll cost them is well worth the engineering time it buys back.

Of course, there does exist another tier of scale where it becomes questionable to continue paying the premium that comes from asking others to solve your infra problems for you. This falls into the "problems we'd love to be lucky enough to have" bucket for most companies.

And linode introduces high memory instances. Today is a good day for all of us!

Does anyone know why egress bandwidth on Google's cloud is so expensive compared to Digital Ocean?

Curious how these handle high traffic. If no pre-warming is needed and it can handle higher traffic spikes, I can see this being a potential alternative to AWS ELB.

Since the topic is relevant, I was disappointed with how basic Amazon ELB is.

Many features such as weighted or IP-based routing are missing.

I know it's possible to achieve that with other options like Route53 or running your own load balancers behind ELB but for my basic needs and projects that's too much cost and complexity.

I just want a "load balancer as a service" that has a decent feature-set.

DigitalOcean seems to be going after Linode. Will be interesting to see how Linode responds.

I was under the impression that Digital Ocean was mostly lower end, i.e. the $5.00 or $10.00 a month instances. Can someone say what the use case is for a load balancer in front of small instances?

Is Digital Ocean targeting larger customers now?

We're focused on delivering a simplified cloud experience, if you want to run a bunch of $5 and $10 droplets then by all means go for it =]

But we're serious about building a feature-rich cloud so that you can run all of your production systems from DO.

We run DO from DO itself, so as we continue to scale we will continue to release products and features that will make it easier to do so, both for ourselves and for our customers.

The biggest advantage even for low-end services would be high-availability. If one server goes down you can automatically fail over to another without having to build that into your code.

Does someone running a wordpress site need HA though? You could also just buy a pingdom account and be notified when there is a problem and resolve it via support as needed. Adding an LB tier is adding a layer of complexity and different failure modes. That's why I was asking if maybe they were targeting larger customers now.

I think you're a little off on the idea that DO only hosts unimportant Wordpress sites, or that Wordpress could never be used for something that needs high availability.

As of 2014 Digital Ocean was the third largest hosting provider, so they're not a mom-and-pop operation.

I never said or implied that the company was a mom-and-pop operation. I know they are a large company.

Being the 3rd largest hosting provider however doesn't say anything about what type and size of customers they have.

Who are some of the larger customers that run their businesses on D.O. infra?

The only names I recognized on there was Influx and Compose.

Influx actually looks like they use AWS for their cloud offering now:


and Compose deploys to AWS, Google, and Softlayer, and only to D.O. for MongoDB Classic, whatever that is:


Digital Ocean supports droplets up to 224GB of RAM, 32 virtual CPUs, 500GB of SSD storage for $1680 a month in their High Memory variants. So there are likely some folks running larger apps across multiple virtual hosts.

I would be curious to hear what the distribution of those sized instances vs. the $5.00 droplets is.

I don't know if it's a lot harder, but it seems the usefulness of this is severely diminished without the ability to autoscale your backend.

I like Digital Ocean's solutions, but I am currently using Nginx on a Digital Ocean Droplet for USD 5/month, and it's working very well.

Honestly, this is awesome. It looks a lot easier to use than Amazon's EC2 load balancer, and fits right into their whole philosophy.

No mention of HTTP/2 and Websockets support though.

If you set it to TCP instead of HTTP/HTTPS, HTTP/2 and websockets will work.

How many concurrent visitors does it support?

No ipv6?

Congratulations guys! Welcome to 1995.
