I don't want to sound like a grumpy old man yelling at new tech, and I'm all for experimental home setups involving these things. But every component you add is another thing to manage and monitor. Even if you are a small business, you are still a business: you could spend your time on better things and sleep better at night.
Many small and medium businesses have a "business" ISP circuit. This typically doesn't have caps and has the option of a dedicated IP. Speaking from experience: I was an ISP network engineer for a big telco years ago.
Unless you're talking about a huge magnitude of infra (which you'd have a budget for) then I think your 4-6 number is a little off.
I do think small businesses and individuals shouldn't use Kubernetes, because it's too much overhead. Even if it's managed in a public cloud, it has too many issues for too little benefit. It's simply not worth it.
Of course, if writing YAML files full of Go templates, or dealing with network errors for no reason, is your thing, go for it.
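For anyone who hasn't seen it, the "YAML with Go templates" combination refers to Helm charts. A hypothetical fragment (chart and value names are made up for illustration) looks like this:

```yaml
# Hypothetical Helm template fragment: YAML interleaved with Go templating.
# .Release and .Values are filled in by Helm at install time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```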
But you also get an easy way to scale, in the lucky event that you have to.
It's hard to put into words without writing a blog post, but in my mind the 'everything is a resource' design introduces a simplicity that counterbalances the complexity that comes with it. One still has to make an informed decision, of course, and sometimes it really is not the right tool ...
For years I bemoaned that it doesn't scale down to the scale where most engineers work. I was wrong about that, it didn't scale down to the amount of learning I was willing to do for any one project at the scale I was doing them.
SOOO many times I've had to try to make my tools do something they weren't really capable of doing. And while it feels like a win today, it never does tomorrow.
With Kubernetes, most of my issues come from things other than Kubernetes itself. DigitalOcean has a great small-scale experience. AWS EKS has been quite a mess in comparison in terms of ops time required. This made me realize that maybe people are suffering from inefficiencies of the cloud provider, not Kubernetes itself.
I understand that my local hard drive, with Excel spreadsheets detailing my entire business alongside critical software, is not the best place to store the lifeblood of my company. But throwing in the complexity of k8s and STILL not addressing that problem seems completely beside the point.
Digital Ocean managed k8s starts at $30 a month. Use that if you really want to pile on more stress running a company.
I find it amusing that I spend more on K8s than on cars.
This is assuming you have some kind of staging environment, and probably a demo; all of these come with at least one database instance, and maybe it's not something as memory-polite as PostgreSQL. Etc., etc.; real-world setups tend to use a bit of hardware.
If I wanted to save money, time, and sleep, I would deploy a production cluster to DO, set up an office k8s cluster on local hardware for staging, and host my own testing there. If it went down, I would deploy it temporarily in DO.
It's simple, and has all the features I need for the stage I'm at. Once I have a more demanding infrastructure I'll probably switch over to a managed kubernetes, but those platforms have a significantly higher unit cost.
Asking because from what I've seen (so far) it's only safe to use an external firewall - e.g. a DigitalOcean (or whoever) firewall applied to hosts - rather than iptables on the hosts themselves.
Saying that because when the Docker service starts on a machine, roughly the first thing it does is rewrite any existing firewall config to ensure it can pipe data around between containers. As a side effect, it seems to open up ports to the whole world, so any carefully secured config from before becomes useless. :(
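One workaround (my assumption, not something the parent describes) is to publish container ports bound to loopback only, so the iptables rules Docker inserts don't expose them publicly:

```yaml
# docker-compose.yml sketch: binding the published port to 127.0.0.1 keeps
# Docker's auto-generated iptables rules from exposing it to the internet
services:
  db:
    image: postgres:16
    ports:
      - "127.0.0.1:5432:5432"   # reachable from the host only
```

Rules you want Docker to respect can also go in the DOCKER-USER iptables chain, which Docker creates but leaves alone.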
eg go bigger (hosts) first, then figure out horizontal scaling later if/when needed :)
If something goes wrong it will call for help before you even know something was wrong.
It depends on what you define as a small business, though.
Back in the day these were extremely reliable. I have seen them in closets, covered with everything else you would find in a closet. Used by everyone day in and day out, yet nobody knew where it was or what it was. It also, in my opinion, had a nifty operating system with a lot of inventive ideas when it came out. I wish they had released the OS when they discontinued the line.
My current server is hardware from 2012.
Someone said just a few weeks ago: 'Cars you buy today are all good enough. You will not buy a car today and have it rust tomorrow.'
The only reason I migrated my desktop PC to become my server was a new DSLR that made Lightroom unusable due to the raw image size.
I'd even say that computer hardware in general is very, very reliable, aside from the capacitor plague (https://en.wikipedia.org/wiki/Capacitor_plague) that gave systems a bad lifetime.
And I have plenty of systems here which have been running for years and have seen plenty of Ubuntu upgrades as well.
Are we just creating complexity for the sake of feeling we are solving something? This system would still go down if there were a fire, which is IMO more likely. Rebuilding the cluster would take time; why not solve the issue of hand-made local deployments instead?
1. Go to digitalocean.com or linode.com
2. Buy the smallest instance.
3. Install FreeBSD.
4. Let it run for the next 2 decades without ever needing to touch it.
Get someone to host (and backup) your PostgreSQL database for you and don't sweat literally everything else. 99.999% uptime is for Walmart.
YMMV for the appserver and its dependencies.
However, if you still would like to pre-provision the NFS share as a PV/PVC you are also able to do that and expose the NFS mount as a Kubernetes Object.
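A statically pre-provisioned NFS PersistentVolume might look roughly like this (the server address, path, and size are placeholders, not values from the write-up):

```yaml
# Sketch: a PV exposing an existing NFS export as a Kubernetes object
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.10        # placeholder NFS server address
    path: /exports/shared-data  # placeholder export path
```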
Just wondering: now that NFS mounts are directly available within Kubernetes, are there any advantages to utilizing a "3rd party" NFS Client Provisioner Helm deployment?
Anyway, it looks like a really great write-up. Although Kubernetes can be a big hassle for small teams, I feel like it's slowly becoming more accessible and user friendly to more organizations. Even with the mental overhead and hassle of running Kubernetes, my team at work has seen some clear benefits from going from an ec2/ansible-based architecture to a GKE/helm-based architecture.
Creating a PVC is as simple as:
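(The original snippet didn't survive here; a minimal claim against a pre-provisioned NFS PV, with assumed names, looks roughly like this:)

```yaml
# Sketch with assumed names: a PVC requesting space from the NFS-backed class
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data           # assumed claim name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs    # assumed StorageClass name
  resources:
    requests:
      storage: 5Gi
```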
After a while, I realized that it was a bad idea, because it turns out I don't run many services, so creating a share for each service isn't a big deal, and storing the StatefulSet volumes on a single server basically defeats the purpose of StatefulSets.
The second problem was that when I deployed it to a new cluster, I would get new paths on the NFS server, so data from a previous cluster was not recognized. Since it is a lab environment, it is better for me not to bother with backing up the k8s cluster itself. With fixed paths, I can back up the NFS server, redeploy a clean k8s cluster from the manifests, and have the pods recognize previously written data.
As far as I can tell, the provisioner is mostly beneficial if you need to create volumes on the fly (although it's possible this also works with directly mounted NFS shares). On another note, have you looked at using one of the many ways of getting persistent storage with GKE that isn't NFS? It seems that Google has created connectors for their own cloud storage that might be more straightforward to use.
* for small business
- IaC, no servers to manage (Terraform + kubectl/helm)
- reasonable HA with minimal effort
- easily configurable metrics (prometheus)
- automated/minimal effort horizontal scaling
- easy SSL certificate installation and automated renewal
- operate multiple low traffic sites from a single cluster
- quickly access your environment through a single ide (https://k8slens.dev/)
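To make the SSL point concrete: with cert-manager installed (assuming a ClusterIssuer named "letsencrypt" already exists, and all resource names here are made up), a certificate is requested and renewed from a single annotation:

```yaml
# Sketch assuming cert-manager and a ClusterIssuer named "letsencrypt"
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-site
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-site
                port:
                  number: 80
  tls:
    - hosts:
        - example.com
      secretName: my-site-tls   # cert-manager creates and renews this Secret
```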
So what's the issue, you ask? Well, if you're averaging over one week, and one of these jobs ran for exactly one hour with 1.0 CPU, then the average for that particular job is 1.0/24/7 = 0.00595! Now imagine averaging these numbers and wondering WTF has gone wrong.
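The arithmetic in that scenario, as a toy calculation (not the actual metrics pipeline):

```python
# A job burning 1.0 CPU for exactly one hour looks vanishingly small
# once it is averaged over a full week of samples.
hours_in_week = 24 * 7            # 168 hours
cpu_hours = 1.0                   # one hour at 1.0 CPU
weekly_avg = cpu_hours / hours_in_week
print(round(weekly_avg, 5))       # 0.00595
```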
2. So do CoLocos, but better "HA" with no effort at all
3. /var/log + bash grep
4. Reverse proxy, just add another IP address and a load balancer
5. This is true anywhere
6. Nginx and Apache both do this, dead simple
7. SSH + cli editor of your choice
Kubernetes does nothing new but pad resumes. The UX "design" of the dev-ops world.
2. "Do CoLocos" definitely involves effort and does not in any way guarantee HA. (HA is really more of a software/application-layer concern, not a hardware one.) Also, are you telling me you are going to engineer an HA load balancer service better than the people who work for the major cloud vendors? Are you going to set up all your own service discovery, scheduling, and everything else k8s does for your application?
3. Prometheus and other solutions are so much more than "/var/log + bash grep"; you can't seriously equate the two. Serious applications demand serious metrics solutions.
4. Sure, let me do all that while my site starts getting the HN hug of death; or, you know, I could do nothing and let my infrastructure autoscale for me. Even in a non-autoscale setup, all you have to do is change a couple of numbers in your Terraform and k8s config files and reapply.
5. No it isn't. For some webservers like Caddy it's easier than others, but in many cases you are going to have to do a good number of manual steps to get auto-renewal working for your Let's Encrypt certs. On k8s it's basically a config file and a Helm chart install. Plus you have k8s guaranteeing that it will restart the renewal service should it go down.
6. Yes, you can specify routing config in your webserver, but running completely distinct application servers on the same nodes, with all the routing and service resiliency handled and no conflicts with the applications already running, takes a lot of effort to do by hand. You are going to get a lot better bang for your buck running multiple applications on the same set of nodes than having to provision separate nodes for each new application.
7. SSH + cli does not give you a cluster wide view of your application. You would have to feed all that information into a single location to be able to view it, which would take a lot of effort and still wouldn't provide the same level of detail.
As a small business, I'd much rather have well engineered solutions that will be able to grow with my business. A lot of what you describe might be okay when your business is very small and has a single server (basically you can afford to treat it as a pet), but as your business grows you want to avoid having to rework various parts of your infrastructure if possible. Managed k8s gives you a lot of room to grow with relatively little upfront time investment.
Currently using all of the tooling you've mentioned on-premise and it's great! Using DNS Anycast (Cloudflare/Akamai/insert provider) and LB technology, we get high availability without much work. We've got more 9's than our Azure production setups; sadly, though, I feel like Azure isn't a good standard to compare against.
Bridging the gap from cloud-provider managed services to on-premise (colo) is actually relatively simple. After all, 95% of the tooling these providers use is open source or has a major open-source alternative.
The SMB world does change that, to your point, as I doubt you'll have a devops rockstar who can actually grok these things; more than likely you'll have a dinosaur MSP or IT guy (for really small companies).
If small businesses _did_ have a Kube cluster in their closets, and were able to decide the appropriate use of SaaS versus on-prem, and were able to carve up that compute securely to their customers, well it would enable a whole range of companies to compete with the big platforms. This is exactly what I'm building with my startup. When there is no real difference between hosted and self-hosted, besides the power and air conditioning, a huge new market for software opens, and the platforms that have (in my opinion) "ruined" the internet will have some real competition. The Internet doesn't need blockchain or federated compute to solve this - we just need people to run server-software again! All individuals and small businesses need are really good tools - the decision re: cloud or on-prem should be dependent on engineering resources, not capital resources.
Say what you will about Kubernetes - It reminds me so much of companies looking into Linux two decades ago. Someone would always say "eh, too complex, just use IIS and call Microsoft when things are broken!". To make the world simpler, you must first make the world more complex.
It's about providing leverage to your team so you need less effort to do more. And SMB can be anywhere up to 1000 employees under semi-legal definitions, or 500 employees and under 1 billion euro of turnover in the case of the EU's SME definition.
Depending on the area of work, there can be a lot of software to run, and that's where k8s comes in. It might seem "hard" when you're comparing it to "just throw a few packages on a server", but having gone through that recently? The next time I'm setting up a single-node server to support tools for a project, I'm using k3s instead of spending 3-5 days getting software to play nice with paths moved to non-standard locations just to keep data easy to back up and migrate to another server. Or wrestling with the load balancer (and ours is IaC, even! Still not as nice and easy to deal with as nginx-ingress-controller...).
The real "scaling" of kubernetes is that once you pass the initial hurdle, the cost and complexity of adding more applications is greatly lowered. Even when you do a dumb lift-and-shift (like I did for >60 applications in one project).
By no one's metric is 1000 employees a small business. The EU defines a small business as under 50 staff, and a medium business as under 250 staff.
In this area, k8s has been a personal saviour; we could not have taken on one project without it. Just thinking of dealing with ~70 applications that had subtly different requirements leaves me shuddering. And that was a pretty small deal.
Can someone explain what he means?