Even for the highest-scale app I've worked on (which was something like 20 requests per second, not Silicon Valley insane but more than average), we got by perfectly fine with 3 web servers behind a load balancer, hooked up to a hot-failover RDS instance. And we had 100% uptime in 3 years.
I feel things like Packer (allowing deterministic construction of your servers) and Terraform are a lot more necessary at any scale, for general good hygiene and disaster recovery.
The first “service mesh” I ever did was just nginx as a forward proxy on dev boxes, so we could reroute a few endpoints to new code for debugging purposes. And the first time I ever heard of Consul was in the context of automatically updating nginx upstreams for servers coming and going.
There is someone at work trying to finish up a large raft of work, and if I hadn’t had my wires crossed about a certain feature set being in nginx versus nginx Plus, I probably would have stopped the whole thing and suggested we just use nginx for it.
I think I have said this at work a few times, and might have here as well: if nginx or haproxy could natively talk to Consul for upstream data, I'm not sure how much of this other stuff would ever have been necessary. And I kind of feel like Hashicorp missed a big opportunity there. Their DNS solution, while interesting, doesn't compose well with other things, like putting a cache between your web server and the services.
I think we tried to use that DNS solution a while back and found that the DNS lookups were adding a few milliseconds to each call. Which might not sound like much except we have some endpoints that average 10ms. And with fanout, those milliseconds start to pile up.
To be fair, half of the API gateway and edge router projects out there are basically nginx with a custom Consul-like service bolted on.
Here's a post I wrote on that ~4 years ago that uses an in-process cache. It'd be fairly easy to add an endpoint to update it and pull data from Consul. I agree with you, it's a missed opportunity - there are alternatives, but being able to rely on a battle-tested server like nginx makes a difference.
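A rough sketch of that idea using Consul's HTTP health API - the service name, fallback logic, and file handling are my assumptions, not anything from the post:

    # Sketch: ask Consul for healthy instances of a service and render
    # an nginx upstream block. "web" is a hypothetical service name.
    import requests  # third-party HTTP client (pip install requests)

    CONSUL = "http://127.0.0.1:8500"

    def render_upstream(service: str = "web") -> str:
        # passing=1 restricts the answer to instances with passing health checks
        nodes = requests.get(
            f"{CONSUL}/v1/health/service/{service}", params={"passing": "1"}
        ).json()
        servers = []
        for n in nodes:
            # Service.Address may be empty, in which case the node address applies
            addr = n["Service"]["Address"] or n["Node"]["Address"]
            servers.append(f"    server {addr}:{n['Service']['Port']};")
        return "upstream %s {\n%s\n}\n" % (service, "\n".join(servers))

    if __name__ == "__main__":
        # write this to a conf file, then reload nginx out-of-band
        print(render_upstream())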
It appears that if the Consul client has the right permissions, it can restart the nginx service after editing the configuration file. It uses the Consul templating engine (consul-template) to generate an nginx config file.
I haven't tried it myself but it looks promising.
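For anyone curious, this is roughly the shape of it - a template that consul-template re-renders whenever the service membership changes, plus a reload command (the file names here are made up):

    # upstream.ctmpl
    upstream web {
    {{ range service "web" }}
      server {{ .Address }}:{{ .Port }};
    {{ end }}
    }

    # render + reload whenever Consul's view of "web" changes
    consul-template \
      -template "upstream.ctmpl:/etc/nginx/conf.d/web.conf:nginx -s reload"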
Airbnb's Smartstack works well for this. It's not built in to nginx as a module, but I think it's more composable this way.
Blog post: https://medium.com/airbnb-engineering/smartstack-service-dis...
The two main components are nerve (health checking of services + writing an "I'm here and healthy" znode into zookeeper, https://github.com/airbnb/nerve) and synapse (subscribes to subtrees in zookeeper, updates nginx/haproxy/whatever configs with backend changes, and gracefully restarts the proxy, https://github.com/airbnb/synapse).
It's fairly pluggable too if you don't want to use haproxy/nginx.
I like what Caddy is doing, exposing their entire configuration through a REST interface.
I 100% agree with you. I've been using Consul for four years now to run 100s of services in 1000s of VMs across datacenters distributed globally, and not once have I seen the need for anything else...
Maybe I just don't have the scale to find service mesh or kubernetes interesting. Nomad however is something I am willing to give a go for stateless workflows that I would usually provision a VM running a single docker container for.
From the load point of view, yes. Absolutely. No doubt.
From the speed-of-action point of view, no way. If your k8s cluster is properly managed, you can let developers do most of the operations work themselves, confined to their namespaces, touching only the kinds of resources that you tell them to touch.
The few milliseconds you're seeing, though, are most likely due to your local machine not having DNS caching configured; this is quite common on Linux. Because of that, every connection triggers a request to the DNS server. You can install unbound, for example, to do the caching; nscd or sssd can also be configured to do some of it.
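A minimal local cache looks something like this - assuming unbound, with the upstream resolver address as a placeholder:

    # /etc/unbound/unbound.conf - minimal caching resolver sketch
    server:
        interface: 127.0.0.1
        access-control: 127.0.0.0/8 allow
    forward-zone:
        name: "."
        forward-addr: 10.0.0.2   # whatever your current DNS server is

Then point /etc/resolv.conf at 127.0.0.1 and repeated lookups get served from the local cache instead of crossing the network.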
It was designed for that, but the SRV record requires protocols and their clients to explicitly support it. You can argue that this is an unreasonable design choice, but load balancers like HAProxy do support SRV records.
I'm saying it is not a good idea to use DNS for service discovery. There is a way of using it correctly, but it requires software to do the name resolution with service discovery in mind, and you're guaranteed that the majority of your software doesn't work that way.
Why shouldn't you use DNS? Because when you communicate over TCP/IP you need an address; that's really the only thing you actually need.
If you use DNS for discovery you will probably set a low TTL on the records, because you want to update them quickly. That means every connection you make checks the DNS server, putting extra load on it and adding latency when connecting.
On failure of a DNS server, even if you set a large TTL, you will see immediate failures on your nodes, because of how DNS caching works: different clients made their DNS requests at different times, so the records expire at different times. And if you did not configure a local DNS cache on your hosts (most people don't), then you won't even cache the response, every connection request will go to the DNS server, and upon a failure everything is immediately down.
Compare this to having a service that edits (let's say) an HAProxy configuration and populates it with IP addresses. If the source that provides the information goes down, you simply won't get updates during that time, but HAProxy will continue forwarding requests to the known IPs (and if you use IPs instead of hostnames, you also won't be affected by DNS outages).
Now there are exceptions to this: certain software (mainly load balancers such as pgbouncer; I think HAProxy also added some dynamic name resolution) uses DNS with those limitations in mind. They basically query the DNS server at startup to get the IPs and then periodically query it for changes; if there's a change it is applied, and if the DNS server is down they keep the old values.
Since they don't throw away the IPs when a record expires, you don't have these kinds of issues. Having said that, the majority of software will use the system resolver the way DNS was designed to work and will have these issues, and if you use DNS for service discovery, you, or someone in your company, will use it with such software and hit the issues described above.
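For reference, HAProxy's runtime DNS resolution looks roughly like this sketch; the Consul DNS port and the service name are assumptions on my part:

    resolvers consul
        nameserver dns1 127.0.0.1:8600   # Consul's DNS interface
        resolve_retries 3
        hold valid 10s                   # keep last-known answers for a while

    backend api
        # pre-allocate up to 10 server slots, filled in from DNS at runtime;
        # if the resolver goes down, the last resolved addresses are kept
        server-template api 10 api.service.consul:8080 resolvers consul check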
Just edit the hosts file? If you have access to the machines that run your code and can edit configuration, and you also don't want the downsides of resolvers (pull-based instead of push-based updates, TTLs), DNS still seems like a better idea than some new stack. Plus you can push hosts files easily via ssh/ansible/basically any configuration management software.
EDIT: The only issue I see with DNS as service discovery is that you can't specify ports. But software should usually be using standard ports anyway, and that's never been a problem in my experience.
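For example, with Ansible that push is a single idempotent task (the names and IPs below are invented):

    - hosts: all
      become: yes
      tasks:
        - name: Pin internal service addresses
          blockinfile:
            path: /etc/hosts
            marker: "# {mark} MANAGED SERVICE ENTRIES"
            block: |
              10.0.1.11 billing.internal
              10.0.1.12 search.internal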
Perhaps the network infrastructure team always scaled it correctly behind the scenes, but they never once complained about the volume of DNS queries.
If you have hosts on a public cloud and use a DNS server that is shared with others, the latency will typically be bigger, and at a high request rate you might also start seeing SERVFAILs on a large number of requests.
I can't find the forum post anymore, but people who had applications opening a large number of connections (bad design of the app imo, but still) saw huge performance degradation when they moved from c4 to c5 instances. It turned out that this was because of the move from Xen to Nitro (based on KVM).
A side effect of using Xen was that the VM host was actually caching DNS requests itself, which all guests benefited from. On KVM, all DNS requests go directly to the DNS server.
Don't resolve DNS inline; rather, on every DNS update, resolve the names and insert the new IP addresses.
Caching those values for very long subverts the point of the feature.
Prepared queries and network tomography (which comes from the Serf underpinnings of the non-server agents) allow for a much wider range of topologies using just DNS, without requiring proxies (assuming well-behaved client software, which is not a given by any stretch).
Furthermore, Consul _does_ have a mesh as of around 2 years ago.
You are correct though that long caches subvert much of the benefit.
Round-robin balancing using DNS towards a small cluster is silly - you know when any new instance is added to or removed from the pool, so why not push that load balancing onto the load balancer, which in your case is nginx?
Consul itself advertises DNS resolution for service discovery.
Whatever technology you use to register the active backends in DNS: rather than doing a name => IP address lookup per request, you can resolve all those name => IP address mappings when a service is brought up or taken down, and push the resolved map as a set of backends into the nginx config, removing the need to query DNS per request.
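The rendered result is just an ordinary upstream block that nginx can reload gracefully (addresses invented for illustration):

    # regenerated on service up/down, then applied with `nginx -s reload`
    upstream api_backend {
        server 10.0.1.11:8080;
        server 10.0.1.12:8080;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://api_backend;
        }
    }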
If you don't need those problems solved then it's not going to benefit you a whole lot.
Of course, if you are using docker already and are following best practices with containers, then converting to Kubernetes really isn't that hard. So if you do end up needing more problems solved than you are willing to tackle on your own, then switching over is going to be on the table.
The way I think about it is: if you are struggling to deploy and manage the life cycle of your applications... failovers, rolling updates... and you think you need some sort of session management like supervisord or something like it to manage a cloud of processes, and you find yourself trying to install and manage applications and services developed by third parties...
Then probably looking at Kubernetes is a good idea. Let K8s be your session manager, etc.
I've seen too many full-time employees eaten up by underestimating what it takes to deploy and maintain a kubernetes cluster. Their time would have been far better spent on other things.
You don't often find open source programs that have dedicated so much effort to security, monitoring, governance, etc., and done so in a very professional and methodical way.
K8s for us provides a nice, well-documented abstraction over these problems. For sure, there was definitely a learning curve and non-trivial setup time. Could we have done everything without it? Perhaps. But it has had its benefits - for example, being able to spin up new isolated testing environments within a few minutes with just a few lines of code.
You don't. These are complementary tools.
Packer builds images. Salt, Ansible, Puppet or Chef _could_ be used as part of this process, but so can shell scripts (and given the immutability of images in modern workflows, they are the best option).
Terraform can be used to deploy images as virtual machines, along with the supporting resources in the target deployment environment.
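As a sketch of that division of labor - the variable and values below are illustrative, not from either tool's docs:

    # Packer produced the image; Terraform turns it into running VMs
    resource "aws_instance" "web" {
      count         = 3
      ami           = var.packer_built_ami   # AMI id emitted by the Packer build
      instance_type = "t3.small"

      tags = {
        Name = "web-${count.index}"
      }
    }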
I don't see the point of your post, and frankly it sounds like nitpicking.
Ansible is a tool designed to execute scripts remotely through ssh on a collection of servers, and it makes the job of writing those scripts trivially easy by a) offering a DSL to write those scripts as a workflow of idempotent operations, and b) offering a myriad of predefined tasks that you can simply add to your scripts and reuse.
Sure, you can write shell scripts to do the same thing. But that's a far lower-level solution to a common problem, and one that is far harder and requires far more man-hours to implement and maintain.
With Ansible you only need to write a list of servers, ensure you can ssh into them, and write a high-level description of your workflow as idempotent tasks. It takes you literally a couple of minutes to pull this off. How much time would you take to do the same with your shell scripts?
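To make that concrete, here's what such a workflow can look like - the modules are standard Ansible, the specific tasks are just an example I made up:

    # inventory: a plain list of servers you can ssh into
    # playbook.yml: idempotent tasks, safe to re-run
    - hosts: web
      become: yes
      tasks:
        - name: Install nginx
          apt:
            name: nginx
            state: present

        - name: Ensure nginx is running and enabled at boot
          service:
            name: nginx
            state: started
            enabled: yes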
You can get a lot done with a sailboat. For certain kinds of problems you might genuinely need an aircraft carrier. But then you’d better have a navy. Don’t just wander onto the bridge and start pressing buttons.
However, a lot of new (or just bad) devs miss the whole Keep It Simple Stupid concept and think that they NEED Kubernetes-shaped solutions in order to "do it the right way".
Many times three web servers and a load balancer are exactly what you need.
Suddenly you have gone from 3 instances to 20.
All of that is irrelevant to my main point though. It's never one size fits all and then all your problems are solved.
You are far better off actually assessing your needs and picking the right solution instead of relying on solutions that "worked for bigger companies so they'll work for me" without really giving it a lot of thought if you need to go that far.
That's what containers are. Containers are applications, packaged to be easily deployable and run as contained processes. That's it.
Kubernetes is just a tool to run containers in a cluster of COTS hardware/VMs.
I've said it once and will say it again: the testament to Kubernetes is that it simplifies the problem of deploying and managing applications in clusters of heterogeneous hardware communicating through software-defined networks so much that it enables clueless people to form mental models of how the system operates that are so simple they actually believe the problem is trivial to solve.
It all depends on the situation and needs of whatever problem you are trying to solve.
They often have their own databases, search engines, services, etc. to deploy along with it, and necessitate multiple instances for scalability and redundancy.
Maybe, just maybe, they want k8s not to create value but to develop/enrich resumes - in order to signal that they are smart and can do complex stuff.
Not to be misunderstood: for FogBugz they wrote a compiler/transpiler for ASP and PHP because the product had to run on customers' servers - because "clients should not leave their private data at another company".
Google it, great read.
I would recommend going through all of Joel Spolsky's posts between 2000 and 2010; there are plenty of absolute diamonds. Part of why StackOverflow was so successful was that Joel had built a big audience of geeks and entrepreneurs with his excellent blog posts (he was the Excel PM during the creation of VBA and had plenty of accrued wisdom to share), so they adopted SO almost instantaneously when he and Jeff Atwood built it.
Let me explain.
In computer science jargon a translator IS a compiler. It's exactly the same thing. Those are synonyms.
Every time someone says "transpiler", god kills a kitten. Please, think of the kittens.
Apparently in 2019 Stack Overflow was hosted on at least 25 servers, including 4 dedicated to running haproxy.
Pet food delivery startups use k8s to manage their MEAN stack. Meanwhile grown-ups still have "monoliths" connected to something like Oracle, DB2 or MS SQL server, because that's obviously the most reliable setup.
The cloud/k8s stuff is an ad-hoc wannabe mainframe built on shaky foundations.
More often than not they just crystallized their 90s knowledge and pretended there aren't better tools for the job, because it would take some work to adopt them and no one notices it in their work anyway.
The "Oracle" keyword is a telltale sign.
What made you settle on a multi-machine setup instead? Was it to reach higher uptime or were you processing very heavy computations per request?
There was little to no room for error. I once introduced a bug in a commit that, in less than an hour, cost us $40,000. So it wasn't about performance.
Also this was 9 years ago. So adjust for performance characteristics from back then.
What were you selling?
Good point, actually the 100MM may have included brick and mortar.
It did analytics on bond deals. Cost $1k/month for an account. Minimum 3 accounts. Median logins, ~1/month/account.
On the other hand people would login because they were about to trade $10-100 million of bonds. So knowing what the price should be really, really mattered.
Wall St can be a funny place.
Heck, Raspberry Pis have more horsepower than the webservers in the cluster I ran around Y2k.
Serving static files with Elixir/Phoenix gets about 300 requests per second.
Python+gunicorn serves about 100 requests per second of JSON from postgres data.
Example: What do you expect to happen when the server with your DB goes down? Just send the next UPDATE/INSERT/DELETE to DBserver2? Which is replicated from the DBserver1? When DBserver1 comes back, how does it know that it now is outdated and has to sync from DBserver2? How does the load balancer know if DBserver1 is synced again and ready to take requests?
Even if you set up all the moving parts of your system in a way that handles random machine outages: now the load balancer is your single point of failure. What do you do if it goes down?
But both you and their tech lead want to be able to write "used Kubernetes" on your CV in the future, plus future-oriented initiatives inside your contact's company tend to get more budget allocated to them. So it's a sound decision for everyone and for the health of the project to just go with whatever tech is fancy enough, but won't get into the way too badly.
Enter Kubernetes, the fashionable Docker upgrade that you won't regret too badly ;)
The cloud existed before k8s, and k8s's creator has a far less mature cloud than AWS or Azure.
But this thread has convinced me of one thing. It's time to re-cloak and never post again, because even though the community is a cut above some others, at the end of the day it's still a bunch of marks, and if you know the inside it is hard to bite your lip.
It is very rare to have a complete region outage so it is pretty close to 100% uptime.
Kubernetes is not for you. 5k QPS times a hundred or more services, and Kubernetes fits the bill.
> And we had 100% uptime in 3 years.
Not a single request failed in that time serving at 20 QPS? I'm a little suspicious.
Regardless, if you were handling 10 or 100 times this volume to a single service, you'd want additional systems in place to assure hitless deploys.
Things that aren't monitored are also things that don't fail.
We are running a relatively small system on k8s. The cluster contains just a few nodes, a couple of which are serving web traffic and a variable number of others that are running background workers. The number of background workers is scaled up based on the amount of work to be done, then scaled down once no longer necessary. Some cronjobs trigger every once in a while.
It runs on GKE.
All of this could run on anything that runs containers, and the scaling could probably be replaced by a single beefy server. In fact, we can run all of this on a single developer machine if there is no load.
The following k8s concepts are currently visible to us developers: Pod, Deployment, Job, CronJob, Service, Ingress, ConfigMap, Secret. The hardest one to understand is Ingress, because it is mapped to a GCE load balancer. All the rest is predictable and easy to grasp. I know k8s is a monster to run, but none of us have to deal with that part at all.
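For a sense of what developers actually touch, one of those services is described more or less in its entirety by something like this (names and image are invented):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: gcr.io/our-project/web:1.2.3   # hypothetical image
              ports:
                - containerPort: 8080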
Running on GKE gives us the following things, in addition to just running it all, without any effort on our part: centralized logging, centralized monitoring with alerts, rolling deployments with easy rollbacks, automatic VM scaling, automatic VM upgrades.
How would we replace GKE in this equation? what would we have to give up? What new tools and concepts would we need to learn? How much of those would be vendor-specific?
If anyone has a solution that is actually simpler and just as easy to set up, I'm very much interested.
A few years ago I joined a startup where everything (including the db) was running on one, not-backed-up, non-reproducible, VM. In the process of "productionizing" I ran into a lot of open questions: How do we handle deploys with potentially updated system dependencies? Where should we store secrets (not the repo)? How do we manage/deploy cronjobs? How do internal services communicate? All things a dedicated SRE team managed in my previous role.
GKE offered a solution to each of those problems while allowing me to still focus on application development. There's definitely been some growing pains (prematurely trying to run our infra on ephemeral nodes) but for the most part, it's provided a solid foundation without much effort.
If a group literally doesn't have the need to answer questions like the ones you posed, then OK, don't bother with these tools. But that's all that needs to be said - no need for a new article every week on it.
They probably don't exist for the majority of people using it. We are using k8s for when we need to scale, but at the moment we have a handful of customers and it isn't changing quickly any time soon.
As soon as you go down the road of actually doing infrastructure-as-code, using (not running) k8s is probably as good as any other solution, and arguably better than most when you grow into anything complex.
Most of the complaints are false equivalence: i.e. running k8s is harder than just using AWS, which I already know. Of course it is. You don't manage AWS. How big do you think their code base is?
If you don't know k8s already, and you're a start-up looking for a niche, maybe now isn't the time to learn k8s, at least not from the business point of view (personal growth, another issue).
But when you do know k8s, it makes a lot of sense to just rent a cluster and put your app there, because when you want to build better tests, it's easy, when you want to do zero trust, it's easy, when you want to integrate with vault, it's easy, when you want to encrypt, it's easy, when you want to add a mesh for tracing, metrics and maybe auth, it's easy.
What's not easy is inheriting a similarly done product that's entirely bespoke.
We ran applications without it fine a few years ago. And it was a lot simpler.
As in, doesn't get hacked or doesn't go down? We live in different worlds.
This seems like a fairly unreasonable comparison. The reason I pay AWS is so that I _do not_ have to manage it. The last thing I want to do is then layer a system on top that I do have to manage.
As a practitioner or manager, you need to make informed choices. Deploying a technology and spending the company's money on the whim of some developer is an example of immaturity.
Think again. There's plenty of SREs at FAANGs that dislike the unnecessary complexity of k8s, docker and most "hip" devops stuff.
Now imagine you have to do it from scratch.
I think the real alternative is Heroku or running on VMs, but then you do not get service discovery, or a cloud-agnostic API for querying running services, or automatic restarts, or rolling updates, or encrypted secrets, or automatic log aggregation, or plug-and-play monitoring, or VM scaling, or an EXCELLENT decoupled solution for deploying my applications (keel.sh), or liveness and readiness probes...
But nobody needs those things right?
I have seen too many projects burn money with vendor independence abstraction layers that were never relevant in production.
It's also worth noting that Fargate has actually gotten considerably cheaper since we started using it, probably because of the firecracker VM technology. I'm pretty happy with Fargate.
In my experience AWS generally gives at least a year's notice before killing something or they offer something better that's easy to move to well in advance of killing the old.
Hell, they _still_ support non-VPC accounts...
"Vendor lock-in" is guaranteed in any environment to such a degree that every single attempt at a multi-cloud setup that I've ever seen or consulted on has proven to be more expensive for no meaningful benefit.
It is a sucker's bet unless you are already at eye-popping scale, and if you're at eye-popping scale you probably have other legacy concerns in place already, too.
Running your applications was a solved problem long before k8s showed up.
The whole point of k8s, the reason Google wrote it to begin with, was to commoditize the management space and make vendor lock-in difficult to justify. It's the classic market underdog move, but executed brilliantly.
Going with a cloud provider's proprietary management solution gives you generally a worse overall experience than k8s (or at least no better), which means AWS and Azure are obliged to focus on improving their hosted k8s offering or risk losing market share.
Plus, you can't "embrace and extend" k8s into something proprietary without destroying a lot of its core usability. So it becomes nearly impossible to create a vendor lock-in strategy that customers will accept.
E.g. the sidecar pattern resolves most things (e.g. logging).
Sure. @levelsio runs Nomad List (~100k MRR) all on a single Linode VPS. He uses their automated backups service, but it's a simple setup. No k8s, no Docker, just some PHP running on a Linux server.
As I understand it, he was learning to code as he built his businesses.
Thanks to k8s, we generally keep to 1/5th of the original cost thanks to bin packing of servers, and we sleep sounder thanks to automatic restarts of failed pods, the ability to easily allocate computing resources per container, and globally configured load balancing (we had to scratch use of the cloud provider's load balancer because our number of paths was too big for the URL mapping API).
Everything can be moved to pretty much every k8s hosting that runs 1.15, biggest difference would be hooking the load balancer to the external network and possibly storage.
Yes, but that is also about the worst thing you could criticize about k8s.
Complexity is dangerous: if things grow beyond a certain threshold X, you will have side effects that nobody can predict, a very steep learning curve, and therefore many people screwing something up in their (first) setups, as well as maintainability nightmares.
Probably some day someone will prove me wrong, but right now one of my biggest goals for improving security, reliability, and people's ability to contribute is reducing complexity.
After all, this is what many of us do when we refactor systems.
I am sticking with the UNIX philosophy at this point, and in the foreseeable future I will not have a big dev team at my disposal, as companies like Alphabet have, to maintain and safeguard all of this complexity.
It does a bunch of junk that is trivial to accomplish on one machine - open network ports, keep services running, log stuff, run in a jail with dropped privileges, and set proper file permissions on secret files.
The mechanisms for all of this, and for resource management, are transparent to unix developers, but in kubernetes they are not. Instead, you have to understand an architectural spaghetti torrent to write and execute "hello world".
It used to be similar with RDBMS systems. It took months and a highly paid consultant to get a working SQL install. Then, you’d hire a team to manage the database, not because the hardware was expensive, but because you’d dropped $100k’s (in 90’s dollars) on the installation process.
Then mysql came along, and it didn't have durability or transactions, but it let you be up and running in a few hours, and have a dynamic web page a few hours after that. If it died, you only lost a few hours or minutes of transactions, assuming someone in your organization spent an afternoon learning cron and mysqldump.
I imagine someone will get sufficiently fed up with k8s to do the same. There is clearly demand. I wish them luck.
Today it's much easier to package a nicer API on top of the rather generic k8s one. And there are ways to make deploying it easier (in fact, I'd wager that a lot of the complexity in deploying k8s is accidental, coming from the deploy tools themselves rather than k8s itself. Just look at OpenShift's deploy scripts...)
Here is a comparison with other frameworks, from 2018:
Wow, as a new developer coming onboard your company, I will walk out the door after seeing that - and the fact that you admit it's a small service.
It's a small service according to "web scale", but it's serving, and vital for, a good number of customers.
As an example, why can’t ConfigMap and Secret just be plain files that get written to a well known location (like /etc)?
Why should the application need to do anything special to run in kubernetes? If they are just files, then why do they have a new name? (And, unless they’re in /etc, why aren’t they placed in a standards-compliant location?)
If they meet all my criteria, then just call them configuration files. If they don’t, then they are a usability problem for kubernetes.
Maybe you don't personally find value in the abstraction, but there are certainly people who do find it useful to have a single resource that can contain the entire configuration for an application/service.
As the other user said, they can also be multiple files. I.e. if I run Redis inside my pod, I can bundle my app config and the Redis config into a single ConfigMap. Or if you're doing TLS inside your pod, you can put both the cert and key inside a single Secret.
The semantics of using it correctly are different, somewhat. But you can also use a naive approach and put one file per secret/ConfigMap; that is allowed.
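And they do land on disk as plain files if you mount them; a sketch with invented names:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-config
    data:
      app.conf: |
        listen_port = 8080
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: app
    spec:
      containers:
        - name: app
          image: example/app:1.0        # hypothetical image
          volumeMounts:
            - name: config
              mountPath: /etc/app       # app.conf appears here as a plain file
      volumes:
        - name: config
          configMap:
            name: app-config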
- are automatically assigned to an appropriate machine (node) based on explicit resource limits you define, enabling reliable performance
- horizontally scale (even automatically if you want!)
- can be deployed with a rolling update strategy to preserve uptime during deployments
- can rollback with swiftness and ease
- have liveness checks that restart unhealthy apps (pods) automatically and prevent bad deploys from being widely released
- abstract away your infrastructure, allowing these exact same configs to power a cluster on-prem, in the cloud on bare metal or VMs, with a hosted k8s service, or some combination of all of them
All of that functionality is unlocked with just a few lines of config or kubectl command, and there are tools that abstract this stuff to simplify it even more or automate more of it.
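For instance (standard kubectl; the deployment name and image are placeholders):

    kubectl set image deployment/web web=registry.example.com/web:1.2.4   # rolling update
    kubectl rollout undo deployment/web                                   # instant rollback
    kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80    # horizontal autoscaling
    kubectl rollout status deployment/web                                 # watch a deploy land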
You definitely want some experienced people around to avoid some of the footguns and shortcuts but over the last several years I think k8s has easily proven itself as a substantial net-positive for many shops.
Heck, if my needs are simple enough why should I even use ECS instead of just putting my web app on some VM's in an auto-scaling group behind a load balancer and used managed services?
When you start having several services that need to fail and scale independently, some amount of job scheduling, request routing... You're going to appreciate the frameworks put in place.
My best advice is to containerize everything from the start, and then you can start barebones and start looking at orchestration systems when you actually have a need for it.
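Even a minimal Dockerfile keeps that door open; a sketch assuming the Python/gunicorn stack mentioned upthread (the app module and port are assumptions):

    # Minimal sketch - app module, port, and deps file are assumptions
    FROM python:3.11-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    EXPOSE 8000
    CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]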
- you can use small EC2 instances behind an application load balancer and within autoscaling groups, with host-based routing for request routing.
- converting a stand-alone api to a container is not rocket science and nor should it require any code rewrite.
- if you need to run scheduled Docker containers, that can also be done with ECS or, if it is simple enough, Lambda.
- the first thing you should worry about is not "containerization". It's getting product-market fit.
As far as needing containerization for orchestration, you don’t need that either. You mentioned Nomad. Nomad can orchestrate anything - containers, executables, etc.
Not to mention a combination of Consul/Nomad is dead simple. Not saying I would recommend it in most cases (I've used it before), but only because the community and ecosystem are smaller. But if you're a startup, you should probably be using AWS or Azure anyway so you don't have to worry about the infrastructure.
How are you managing your infrastructure? And if you already have that automated, how much effort is it to add the software you develop to that automation, versus the ROI of adding another layer of complexity?
The idea everything needs to be in containers is similar to the idea everything needs to be in k8s.
Let the business needs drive the technology choices; don't drive the business with the technology choices.
Valid reasons not to run containerized in production can be specific security restrictions or performance requirements. I could line up several things that are not suitable for containers, but if you're in a position of "simple but growing web app that doesn't really warrant kubernetes right now" (the comment I was replying to), I think it's a good rule of thumb.
I agree with your main argument, of course.
If you are managing systems that already have a robust package management layer, then by adding the container stack on top of the OS layer you are managing, you have just doubled the number of systems your operations team looks after.
Containers also bring NAT and all sorts of DNS / DHCP issues that require extremely senior, well-rounded people to manage.
Developers don't see this complexity and think containers are great.
Effectively, containers move the complexity of managing source code into the infrastructure, where you have to manage that complexity.
The tools to manage source code are mature. The tools to manage complex infrastructure are not mature, and the people with the skills required to do so ... are rare.
Oh yeah, if you're not building the software in-house it's a lot less clear that "Containerize Everything!" is the answer every time. Though there are stable helm charts for a lot of the commonly used software out there; do whatever works for you, man ;)
> Containers also bring NAT and all sorts of DNS / DHCP issues that require extremely senior well rounded guys to manage.
I mean, at that point you can just run with host mode networking and it's all the same, no?
Monitoring can be done with whatever your cloud platform provides.
Also easier to debug and monitor... but you run your business to make developers happy, right?
Source? I’ve never heard of someone going from “what’s kubernetes?” to a bare metal deployment in 4 hours.
The basic concepts in k8s are also pretty easy to learn, provided you go from the foundations up -- I have a bad feeling a lot of people go the opposite way.
A high level person actually asked me to reimplement vCenter :|
It is the job of the CTO to steer excitable juniors away from the new hotness, and what might look best on their resumes, towards what is tried, true, and ultimately best for the business. k8s on day one at a startup is like a mom and pop grocery store buying SAP. It wouldn't be acceptable in any other industry, and can be a death sentence.
K8s solved very real problems that might not be seen when you're running one app, shitty standard syslog, and a cloud-provided database. But those problems still exist, and k8s provides real, tangible benefit in operation, where you don't need to remember a thousand and one details of several services, because you have a common orchestration layer to use as an abstraction.
I just want to solve problems with ideally as little complexity as possible.
Nothing cynical about it.
The longer I've been at the company I'm at, the less interested I am in how cool something is and the more interested I am in the least effort possible to keep the app running.
Interestingly the older generation often had the most reservations against hosting data on external systems. They are generally very big on everything surveillance though.
I used GKE and I was also very familiar with k8s ahead of time. I would not recommend someone in my shoes to learn k8s from scratch at the stage I rolled it out, but if you know it already, it’s a solid choice for the first point that you want a second instance.
Lots of ink spilled on irrelevant concepts that most users don't need to know or care about like EndpointSlices.
And, arguing against microservices is a reasonable position -- but IF you have made that architectural choice, then Docker-for-Mac + the built-in kubernetes cluster is the most developer-friendly way of working on microservices that I am aware of. So a bit of a non sequitur there.
Building with 12-factor principles makes that transition effortless when the time comes.
Plain docker - hell on earth. Literally some of the worst stuff I had to deal with. A noticeable worsening vs. running the contents of the container unpacked in /opt/myapp.
Heroku, Dokku - It really depends. A dance between "simple and works" and "simple and works, but my startup is bankrupt".
K8s - Do I have more than 1-2 custom deployed applications? Predictable cost model, simple enough to manage (granted, I might be an aberration), easy to combine infrastructural parts with the money-making parts. A strong contender vs Heroku-likes, especially on a classic startup budget.
Integration with init system was abysmal. Docker daemon had its own conventions it wanted to follow. Unless you ensure that the state of the docker daemon is deleted on reboot, you could have weird conditions when it tried to handle starting the containers by itself.
A very easy thing for a developer to use, a pretty shitty tool (assuming no external wrappers) on a server.
One of the greatest joys of k8s for me was always "it abstracts docker away so I don't have to deal with it, and it drops pretty much all broken features of docker"
It also offers portability away from docker via the Container Runtime Interface; we use containerd and it has been absolutely rock solid, without the weird "what happens to my containers if I have to restart a wedged dockerd?" situation
Since then we got CRI-O and life looks even better.
The docker run --rm command-line switch tells docker to remove the container when it dies. Never a problem on restarts.
If you are an operator of k8s, then you've entered the nightmare zone where you have to make all of those endpoints actually work and do the right thing with code that was written 17 days ago. Unlimited terrible middleware to try and form static services into dynamic boxes.
k8s was not designed for someone to deploy on-prem that doesn't have a dedicated team of developers and ops people to work on just k8s.
My biggest cryparty story when it comes to on-prem kubernetes is not actually due to kubernetes, but due to Red Hat. There are words I could say about their official OpenShift deployment scripts and their authors, but I would be rightly banned for writing them.
Biggest issues I've encountered involve things on the interface between "classic distro" and running k8s on top of it, and that goes down when you move towards base OS that is more optimized for k8s (for example flatcar).
When it comes to the size of team involved, I'd say keeping "classic" stack with loadbalancers, pacemaker, custom deployment methods etc. was comparable effort - at least if we're matching feature for feature what I'd see on "base" k8s on-prem setup (and based on "what I wish I had, or that I could replace with k8s, back in 2016 for an on-prem project).
There's one thing, however, where it gets much harder to deploy k8s and I won't hide it - when you're dealing with random broken classic infrastructure with no authority to change it. K8s gets noticeably harder when you need to deal with annoying pre-existing IP layouts because you can't get IP address space allocated, when you have to sit on badly stretched L2 domains, where the local idea of routing is OSPF - or worse, static routes to gateway and a stretched L2. To the point that sometimes it's easier to setup a base "private cloud" (I like VMware for this) on the physical machines, then setup the hosts on top of that - even if you're only going to have one VM per hypervisor. The benefits of abstracting possibly broken infrastructure are too big.
Hahaha… welp. So you’re saying that stretching every “overlay” L2 domain to every hypervisor/physical host with VXLANs and OSPF isn’t maintainable. Color me surprised. I need a drink.
Dealing with overstretched VLANs where somehow, somehow, STP ("What is that RSTP you're telling us about?") decided to put a trunk across a random slow link >_>
As for EBS, remember that the SLAs for EBS are not the same as for S3, and that EBS volumes can be surprisingly slow (especially in IOPS terms once you go above a certain limit; I don't have the numbers in cache at the moment). So it's important to have a good backup/recovery/resiliency plan for anything deployed on EC2 or dependent on EBS volumes. Planning for speed mostly comes up when you need a more custom datastore than those offered by AWS.
Remember that AWS definitely prefers applications that go all vendor lock-in for various higher-level options from them, or ones that are at least "cloud native". Replicating an on-prem setup usually ends up in large bills and lower availability for little to no gain.
The last time I installed ubuntu 18.04, DNS queries took ~5 seconds. It’s a well known issue, with no diagnosed root cause. The solutions involved uninstalling the local dns stack, starting with systemd’s resolver.
2018 was well after DNS was reliable. How can stuff like that break in a long term support release?
Turns out a lot of people just ignored the fact that all configured DNS servers are assumed to serve the same records and used DNS ordering to implement shitty split-horizon DNS.
I guess backups could just be snapshots or something, depending on how active the database is. ;)
You know, managing some yml files that describe all of this is so hard and so expensive...
Downtime doesn't happen until it does.
You've had a working system very quickly and saved plenty of money that you were able to invest into more features or runway though.
I believe that too many engineers worry about "what if a million users sign up tomorrow" and plan a system that will handle that (which also happens to be fun and tickles all the right places), which takes a lot of time and money and manpower instead of building something that works reasonably well and worrying about building the "right" solution when they're beginning to actually grow rapidly. I'd much rather hear "our servers can't handle the load, there are too many users signing up" than "when we're done here in six months, our system will be able to handle any load you throw at it".
I wouldn't say that it's a no-go (there absolutely are situations where it makes sense), but it often looks like premature optimization.
There are a ton of successful startups who made it a long way with less than 2 VMs.
If Kubernetes had only cost us a year and two hundred thousand dollars then we'd have been luckier than we actually are.
It definitely has a place, but it is so not a good idea for a small team. You don't need K8s until you start to build a half-assed K8s.
which you start doing with more than 1 db server and more than one app server.
until you realize you have an ansible script that is tied to your specific program. oh shit, now you have two programs, and you copied some stuff from the ansible, but not everything is the same - damn.
deployments incur downtime (5 seconds), which some users notice - until you add like a thousand lines of ansible code.
now you need monitoring, and oh shit, more ansible bloat; soon you've outgrown the k8s codebase with just ansible.
(p.s. this does not account how you start your baremetal servers, etc. this would be another story)
Then they might simply join another startup or a big tech company as competition for good engineers is fierce. Startups also famously underpay versus larger companies so you need to entice engineers with something.
I mean, seriously, this is a startup killer. Our host wrote an essay a long time ago about beating standard companies stuck in boring old Java or C++ with your fast, agile Python code, but in 2020 it seems to me it's almost more important now to try to convince new startups to be a little more boring. Whatever your special sauce that you're bringing to market is, it isn't (with no disrespect to the relevant communities) that you're bringing in Rust or Nim or whatever for the first time ever, for Maximum Velocity. Just use Python, cloud technologies, and established databases. Win on solving your customer needs.
While by no means is everyone in the world using effective tech stacks well-chosen to meet the needs and without over-privileging "what everyone else is doing and what has always been done", enough people are now that it's probably not a competitive advantage anymore.
Honestly, you can beat most companies in just getting stuff out the door quickly.
(Excuse me, off to file incorporation papers for ZeroNines LLC. Why wonder whether your provider will be up when you can know? Nobody else in the business can make that promise!)
Money may not be the limiting factor for a startup and time is a counter-factual as you don't know the alternative. Had they not been able to hire any engineers they may have taken an extra 2 years to ship the same thing. Or maybe not.
Hiring at startups is time consuming and difficult, with heavy competition for good engineers. Salaries lag behind large tech companies and equity may be worth nothing. Scale isn't there, so the problems are less interesting than at a larger company. And good engineers can be 5x better than average in an early-stage startup because there is no process and technical debt is fine (in a larger organization, the 10x ones leave havoc in their wake).
That may not be the right decision for a startup to make but there is a logical basis for making it.
New hotness is one way to entice them. Far, far from the only one but it is a tool in the CTO's tool belt.
I've rejected candidates who have been great on their technical skills... who I would never want to be making ANY decisions about customers, or the technical direction of the company.
My team right now, for example, had a mantra of "No JS frameworks, just Rails" which was absolutely dreadful. Rails UI is absolutely dreadful. I can't say enough, it is absolutely dreadful. So we recently made the move to use React for more "dynamic" UIs, which has brought up somewhat of a happy medium? React will be here in 5 years, Rails will be here in 5 years, everyone wins.
I hope, but doubt, that will happen before I'm retired or have been ageism'd into something else.
Only the bad ones.
You're also setting up a bill that will come due eventually. I've made some really good money going into companies and ripping out the hot-three-years-ago garbage that some long-gone goof put in. Last time this happened I looked up the responsible party. Turned out he was doing the work so he could give a conference talk on shiny tech. Not long after the talk was done, he took a job somewhere else, leaving his buzzword-compliant half-baked system to rot.
Why? Because Docker and "scalability" it offered looked much better on the investor slides...
How? Instead of actually hiring someone who at least has experience with docker, he decided it was a very easy thing to learn, so he did it himself, and we ended up with things like running two applications in the same container, having the database in containers that disappear (a container that restarts when RAM is full is the best way to run a database), etc...
And after all that, he started talking about microservices and how cool they are for code decoupling... Of all the things... I don't work there anymore...
And if challenged on these reasons, some people (that supposedly have more experience) give blanket statements like: "Docker is more safe by default, thus we should use it..."
Maybe when you go through these situations you get to write articles like this.
Of course containers, docker, k8s, etc. have their place, but in reality you can find all kinds of stunning nonsense.
Sometimes it's the CTO who is the excitable one pursuing new hotness...
I can’t understand for the life of me why any start up uses it, it’s insane.