On a serious note, you are absolutely right. IT always had two opposing forces for a reason: they provided a balance between change and sustainability. The big problem was the lack of communication between the two sides, which "devops" was supposed to solve.
Instead, "devops" is now developers doing what they've always done, and caring for change above all like they always did, but pretending to care about the needs of services in production. I cringe when I think about all those containers where the application is continuously delivered but the bundled openssl isn't updated when vulnerabilities are found. Welcome to the brave new world.
We're moving in a no-ops direction mainly because the most vocal folks come from startups that don't last long enough to see where coherent operations matter. They go under well before that. But this idea is bleeding into companies where it does matter, and we'll see how that goes in a few years.
I think/hope we might see this change soon. Currently there's no real penalty for poor ops. If your customer data gets hacked, there are very few actual penalties, most of the pain is in reputation. I feel like there's been a lot of pressure to have government penalties for poor security practices, especially when so many companies "need" all the user data they can get their hands on.
You say that like it's a bad thing. If you want DevOps to be successful, you don't hire a "DevOps team". You hire a team of devs to make ops tools that are so easy to use that all the other devs can manage their own ops.
The idea is that doing "good ops" is so easy that everyone in the company does it.
In my experience, "good ops" people are the ones with lots of war stories to tell of disasters or narrowly averted disasters, either their own or those of friends/colleagues they hang out with. It's a profession whose lessons seem to be best taught by spectacular failures and heroic recovery efforts, rather than college courses, vendor/consultant training, or "industry best practices" documents...
We used to have people to make those ops tools; we called them ops. The whole devops thing seems to be based on the false notion that ops didn't automate their work.
This is a bit too much.
I'm not saying that Fortune 500 companies should use DO, but I don't see what's wrong with having a couple of servers for your 50k visitors per month web-app.
Sure "do-it-right" will be hard, but with DO you can literally buy yourself more time until you are able to hire the right people to do that.
It's about context. Your 50k visitor a month webapp doesn't need a load balancer. When you actually need a load balancer, chances are you need someone to deal with myriad other reliability concerns as well.
I hate being woken at night by emergency calls, so I would prefer to set everyone and everything up with HA solutions.
Either my workload is atypical or people massively overestimate the server power they need.
Now your 100k/month platform is down for 3 days.
You've saved something like $40/month (two additional app/db vms plus a load balancer), but a failure could realistically cost you 1/10th of a month of downtime (which, simplistically looks like $10k). Do you think your platform with two single points of failure is _not_ going to go down like this sometime in the next 5 years?
(Of course, a weekend's worth of downtime might not be a 10% revenue hit - but depending on your SLAs and penalties it's also possible to be a lot more than that...)
You're suggesting that because a company without any experienced Ops staff makes bad decisions about their Infra, if the same company had experienced Ops staff, those staff would somehow make even worse decisions about their Infra?
If the company doesn't hire a couple of experienced Ops people to manage their cloud infrastructure, why do you think they will do a good job hiring a team? You can shoot yourself in the foot with any gun. However, one path requires many more choices, more people, and more process to get it right. If they can't get the simpler path right, then what makes you think they will be able to go down the more complex path?
In my experience, the average company with on-premises servers and an Ops staff is horrible, unsustainable, overpriced, insecure and failing.
Yeah, if you have no ops staff, and try to run your own servers you're going to do something terrible. No one is suggesting that.
"The Cloud" lets us do WAY better on reliability and a little cheaper on cost than we could with our own bare metal servers.
I do wish I could drop a Samsung PRO 960 in our database VM though :(
Consider Hetzner if you want high IO at low prices. You'll get "regular" SSDs in their VPSs, but you can get mirrored NVMe SSDs in their bare metal servers. Nothing but great experiences with them. They have APIs to let you automate provisioning of the bare-metal servers if you want to tie it into a larger cloud deployment.
For the last 5 or 6 years, people have been waving their arms saying platforms as a service (Heroku, etc.) or containers eliminate the need for servers. However, the cloud server market has only ballooned in size. AWS has continued to explode with huge revenue numbers. Google Cloud is maturing fast and is a threat to Amazon. Azure is competing as well. Servers aren't going away... Ops aren't going anywhere...
Anyway, I bring up this rant because I just founded my third startup, Elastic Byte (https://elasticbyte.net), which offers DevOps and cloud infrastructure management as a service. If anybody is looking for professional ops to manage their cloud infrastructure (AWS, GCP, DigitalOcean, Azure), I'd love to chat.
Now, however, you can start a business in your basement that has global reach. It's still a small business, and it's still likely asymmetric in its ability to execute, but now it has high visibility. What's more, it depends on network infrastructure in order to work.
Everyone wants to be the Microsoft Back Office version of "cloud". Install it, click the defaults, and it provides the infrastructure you need to run your small business.
On the other hand, you have developers / small teams who are just trying to get things off the ground and want some basic redundancy and other benefits of load balancing. That's the primary audience of such a service, in my opinion.
Sure, you can just spin up HAProxy and even a very basic configuration might do the job. But it's one more thing to learn and maintain, often one more thing to train people to work with, etc. The same can be said about other popular managed services, including Amazon S3, RDS etc. It's a decision you have to make.
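To make that concrete: the "very basic configuration" for HAProxy really is only a dozen or so lines. A minimal sketch, where the backend names, addresses and ports are made up for illustration:

```
# /etc/haproxy/haproxy.cfg -- minimal sketch, hypothetical addresses
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend web
    bind *:80
    default_backend app

backend app
    balance roundrobin
    # 'check' enables periodic health checks; a failed server is
    # pulled out of rotation until it passes again
    server app1 10.0.0.11:8080 check
    server app2 10.0.0.12:8080 check
```

Small as it is, it's still a thing someone has to own, tune and upgrade, which is exactly the tradeoff being discussed.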
So I truly understand your position, but I don't think that such a sarcastic attitude actually helps anyone.
- No ECC cert support (slowing down initial connection time)
- No HTTP/2 (so no multiplexing, and a text-based protocol, slowing down fetching the actual page)
Do DO Load Balancers support these?
There was some legacy code expecting headers to be all caps, and Amazon rewrites them to all lowercase when they pass through the traffic.
That was a fun one to figure out.
Do you mean the new DO LBs or something else? The DO LBs are not referred to as 'elastic' / ELBs.
I've just set up a DO LB and it's HTTP 1.1 all the way.
Edit: have confirmed with Digital Ocean: no HTTP/2 support (there's HTTP/2 passthrough, but you can't terminate there).
They do support ECC certs though.
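If you want to verify this yourself, curl can print which HTTP version a server actually negotiated. A quick probe (the hostname below is a placeholder for your own LB):

```shell
# Ask for HTTP/2 and print the version that was actually negotiated:
# "2" if the load balancer speaks HTTP/2, "1.1" otherwise.
# The hostname is a placeholder.
curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://your-lb.example.com
```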
Early on you probably don't need it for the load but for handling recovery if an application server goes down.
Prior to now if you wanted to run a passable production environment for a small application you'd probably do something like a single persistent store behind 2 application servers fronted by an nginx to balance load across the two.
Doing that requires nginx config and finding an off the shelf package for handling healthchecks and automated removal of failed items from the nginx load balance rotation. Now DO will do that for you at slightly above the cost of you rolling it yourself but with the benefit of being (probably) more reliable and requiring near-zero time investment.
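For comparison, the roll-your-own nginx version of that two-app-server setup is short, though stock open-source nginx only does passive health checks (it stops sending traffic to a backend after repeated failures rather than actively probing it). A sketch with made-up backend addresses:

```nginx
# nginx.conf snippet -- hypothetical backend addresses
upstream app {
    # passive health checking: after 3 failed requests a backend is
    # taken out of rotation for 30 seconds
    server 10.0.0.11:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.12:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    location / {
        proxy_pass http://app;
    }
}
```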
Edit: Other threads have pointed out that keepalive on the VM works for recovery on single compute instances. Seems likely that this is for DO's larger client who need recovery and multiple computes with load balanced across said compute surface.
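For the single-instance recovery case mentioned in the edit, the usual pattern is keepalived with a floating IP: two machines run VRRP, and the backup takes over the shared address if the primary stops advertising. A sketch of the primary's config (interface name and addresses are assumptions):

```
# /etc/keepalived/keepalived.conf on the primary -- hypothetical values
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100          # the backup node uses a lower value, e.g. 90
    advert_int 1
    virtual_ipaddress {
        10.0.0.100        # the floating address clients connect to
    }
}
```

On clouds that filter VRRP traffic, keepalived is typically paired with a notify script that reassigns the provider's floating IP through its API instead.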
My question is whether any DO users want this today. If so, I am interested in knowing their use case (what kind of app requires a load balancer and nothing else). I don't want some imaginative workload, I want a real example. For example, I cannot use it for my wp installs because the db does not scale without some work.
How do you deal with an availability zone failure of a cloud provider?
For the LB I use only an haproxy server, which has its own healthcheck.
We use it for both production, and staging, though we do offload our database to Google Cloud.
On DO we host our application servers (frontend and backend), as well as our caching, search and monitoring services. These are all load balanced behind Haproxy.
EDIT: This is coming from someone who works at MSFT and gets a bit of Azure for free. To me it's all about using the right tool for the job.
But Azure is very pricy if you want to use it just for VMs.
Sometimes we use it as a cost-cutting do-it-yourself CDN in front of AWS for clients that insist on S3 for storage (and again where we can't just cache everything in Europe for latency reasons). For bandwidth heavy applications, you can pay for significant numbers of Droplets from the AWS bandwidth savings alone.
The lack of load balancers has meant resorting to DNS-based failover and hoping clients handle short TTLs (it works reasonably well, but with occasional issues), but the cost reductions are sufficient to make that a worthwhile tradeoff for many clients.
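The DNS-based failover described here comes down to low-TTL records that a health-check script rewrites when a host dies. A zone-file sketch with made-up names and addresses:

```
; hypothetical zone fragment: a 60-second TTL so clients re-resolve
; quickly after the failover script swaps the record
www.example.com.    60    IN    A    203.0.113.10
; on failure, a monitoring script repoints this record at the standby
; (e.g. 203.0.113.20) via the DNS provider's API
```

The "occasional issues" are the clients and resolvers that ignore the TTL, which is why it works reasonably well but not perfectly.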
Don't get me wrong, I am just wondering if digital ocean wants to be AWS.
I think we can make the generous assumption that you've profiled and decided that compute is the problem and not static pages or DB. If that were the case, why would you not use DO load balancing?
> I am just wondering if digital ocean wants to be AWS.
No, they want to be better than AWS. They absolutely want that AWS $$$. Selling to devs who create "MVPs" and prototypes might pay the bills for now but if they are going to grow into their valuation they need larger clients. Seems likely that large clients wanted load balancing.
When you attach a load balancer in AWS, it uses ELB; in DO it spins up an HAProxy instance. There are likely advantages to having parity between the stacks and to having the load balancer higher in the stack.
For me somehow, the load balancer never puts a server back into rotation, even though ssh and direct http access work. I need to reboot the ec2 and then it comes back into rotation.
Source: spent several days in December cost modelling a few services for a client, was surprised by the result.
I mean granted I've only done rough guesstimates for some toy applications for myself and some friends and family, so I could be totally off base here.
Nah. I pay monthly, include monthly pricing. I can probably set up an nginx/varnish instance faster than I can calculate the monthly cost when you're billing by the hour.
One thing that is nice is the automatic failover (of the routing stack) when a VM or datacenter goes down.
Essentially what I'm after is Digitalocean's load balancing per-domain rather than per-infrastructure
If it was my own stuff I'd just do it myself, work can afford the premium if it includes a support team when things break
2xLB + frontend servers (from 1 to N) + Postgres (master + multiple read slaves depending on frontend servers) + elasticsearch + redis + image servers.
Only thing that'd probably move us off DO is GCP adding Postgres to their Cloud SQL offering.
But overall, very happy.
I have been planning to use DO as my personal test-bed (for anything and everything).
Start with a small instance on DO and enable backups (for a small fee DO will create weekly backups of your instance). As your project grows, tune PgSQL and resize the instance up, then add a slave. Weekly backups of the instance might not be enough at this point, so also pg_dump the database at a more frequent interval (once a day) and send the backup to 1 or 2 offsite stores (S3/other remote server/etc).
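That extra pg_dump layer can be a single cron entry. A sketch, assuming a database named mydb and an S3 bucket you control (both names are made up, and the aws CLI must already be configured):

```
# hypothetical crontab fragment: nightly at 03:00, stream a
# custom-format dump straight to S3 (\% because cron treats a
# bare % as a newline)
0 3 * * * pg_dump -Fc mydb | aws s3 cp - "s3://my-backups/mydb-$(date +\%F).dump"
```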
Managed comes with its own problems as well, but at least you can blame someone else when it breaks :).
So far, we are satisfied. Over the last year, there were 4 outages caused by DO which lasted 30min to 1h, which is alright I guess.
Since we have been experiencing more traffic peaks lately, we may use their load balancers in the future. The application servers are not the problem though, more the DB server. This is more of a pain, since setting up and maintaining a DB cluster is quite a lot of work. We might go to AWS for this.
TL;DR DO works for larger projects; databases are a bit of a pain though.
If you switch to AWS, will you be maintaining a cross-datacenter VPN connection or something?
To be honest, we have not figured out how to connect the DO servers to AWS yet. Do you have experience with that?
Edit - we might actually be using the load balancer pretty soon as we prepare to scale.
Having a Load Balancer is necessary to integrate with k8s, similarly to GCLB and AWS, so this is a step in the right direction.
It's something that we are actively working on in 2017 but it's still too early to give much more guidance.
But certainly if you wanted to go through the process of setting up your own kubernetes cluster on DO, you could do so. =]
- Does your loadbalancer support HTTP/2 yet?
- Can you share your scripts for setting up a HA kube cluster on DO?
- Do you plan to provide a kubernetes cloud provider for DO?
Having a DO cloud provider and standard scripts would probably help the adoption of kube on DO. Without a cloud provider I can't see many benefits compared to traditional bare metal providers, which are still cheaper.
We are hosting a meetup tomorrow in our NYC HQ, but we also stream remotely if you are interested in hearing more from one of our engineering managers, and you can ask him questions directly =]
It would be really nice to see someone offer a solution that can actually scale from a single node to multiple nodes as necessary. Being able to run my application on the load balancer inside a container would be pretty nice.
Another solution is to pay Digital Ocean to provide network endpoints that are more available, allowing you to provide higher availability for your application. Because DO works at a different level of abstraction, they have more possibilities to provide this availability.
I don't see any direct explanation as to how these LBs aren't. It's limited to a single region for backend droplets, and who knows how fragile they are.
Putting up a droplet with caddy/nginx/apache acting as a reverse proxy could, depending on your use case, be 75% cheaper than the solution offered here.
Looking forward to not caring about this anymore. Secure private networking out of the box is now all that's left for me to stay on DO.
My point was just that the complexity goes up as you want more. At some point, it's worth the money to just use someone else's solution to all of those small problems instead of maintaining your own worse version. Most small teams likely shouldn't be writing their own nginx, haproxy and keepalived configuration for load balancing. The $20-$200/month that it'll cost them is well worth the engineering time it buys back.
Of course, there does exist another tier of scale where it becomes questionable to continue paying the premium that comes from asking others to solve your infra problems for you. This falls into the "problems we'd love to be lucky enough to have" bucket for most companies.
Many features such as weighted or IP-based routing are missing.
I know it's possible to achieve that with other options like Route53 or running your own load balancers behind ELB but for my basic needs and projects that's too much cost and complexity.
I just want a "load balancer as a service" that has a decent feature-set.
Is Digital Ocean targeting larger customers now?
But we're serious about building a feature rich cloud so that you can run all of your production systems from DO.
We run DO from DO itself, so as we continue to scale we will continue to release products and features that will make it easier to do so, both for ourselves and for our customers.
As of 2014 Digital Ocean was the third largest hosting provider, so they're not a mom-and-pop operation.
Being the 3rd largest hosting provider however doesn't say anything about what type and size of customers they have.
Who are some of the larger customers that run their businesses on D.O infra?
Influx actually looks like they use AWS for their cloud offering now:
and Compose deploys to AWS, Google, Softlayer and only to D.O for MongoDB classic whatever that is: