Highly available Kubernetes with batteries for small business (cinaq.com)
130 points by xiwenc 46 days ago | 67 comments



It feels like a huge hassle for a small business. Use a cloud; it's not that expensive. If even using a cloud is not an option, you can still mount NFS, use a bridged network with DHCP from the router, and run your software under systemd/upstart or in simple Docker containers.
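
For the plain-Docker route, a minimal Compose sketch is all it takes (the image name and NFS details below are made up, just to show the shape):

  version: "3.8"
  services:
    app:
      image: registry.example.com/myapp:latest   # placeholder image
      restart: unless-stopped                     # Docker restarts it, no orchestrator needed
      ports:
        - "8080:8080"
      volumes:
        - appdata:/var/lib/myapp
  volumes:
    appdata:
      driver: local
      driver_opts:                                # plain NFS mount from the NAS/router
        type: nfs
        o: addr=192.168.1.10,rw,nfsvers=4
        device: ":/volume1/appdata"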

I don't like to sound like a grumpy old man yelling at new tech, and I'm all for experimental home setups involving these things, but every component you add is another thing to manage and monitor. Even if you are a small business, you are still a business: you could spend your time on better things and sleep better at night.


For someone who is interested and capable, the costs of self-hosting are an order of magnitude lower. You're right that, for the most part, small businesses would be better off using an existing cloud; they don't have the skill set to properly design and manage something like this themselves.


Even with the skill set, I am not sure the costs are lower in most cases. Depending on where you are, your uplink costs can be substantial; you might have an asymmetric link, your ISP's ToS may forbid hosting services, or you may simply have poor internet speeds. Without a leased line, ISPs are not going to give you an uptime SLA or a ring topology for redundancy. Reliability is expensive, and there are plenty of cheap k8s/VPS providers at under $50/month (for example, DigitalOcean).


You are right about rate-limited internet. Something else to consider is that renting rack space is super cheap and can come with free internet and electricity. Last time I checked, a few years ago, 1RU in DFW was $60 a month with 1Gb/s and some IP addresses.

Many small and medium businesses have a "business" ISP circuit. This typically doesn't have caps and has the option of a dedicated IP. Speaking from experience I was an ISP network engineer for a big telco years ago.


Does that cost estimate include the 4-6 devops freelancers you're going to need for about 6 months to get it going? The bigger cost is staffing, not cloud bills, unless you're running a really large amount of infra.


If you need 4-6 devops people to run infra from the ground up, you're doing it wrong. I've seen 4-person devops/ops teams run thousands of machines at scale.

Unless you're talking about a huge amount of infra (which you'd have a budget for), I think your 4-6 number is a little off.


> Kindie (Kubernetes Individual) is an opinionated Kubernetes cluster setup for individuals or small business.

I do think small businesses and individuals shouldn't use Kubernetes, because it's too much overhead, even if it's managed in a public cloud. It has too many issues and too few benefits. It's simply not worth it.

Of course, if writing YAML files via Go templates, or dealing with network errors for no reason, is your thing, go for it.


My experience using GKE so far has been fairly good - it's not that much more difficult than using Google App Engine, and we're not locked into a single provider.


Once you know it, it's not really overhead anymore; you do the same tasks as fast as you would without it.

But you also have an easy way to scale, in the lucky chance that you have to.


The overhead might get smaller but it never disappears. You'll still have maintenance and debugging issues that are specific to k8s.


True, but it also takes away some pain you'd otherwise have. I used to be in the 'Kubernetes is unnecessary overhead for most cases' camp, but since I actually started to use it I think it's a good base even for smaller setups, and I find myself preferring it to other ways of reproducibly setting up infrastructure/services. Maybe that's not the case for the people posting here, but often the people arguing against Kubernetes have never really used it and don't have more than shallow knowledge of it.

It's hard to put into words without writing a blog post, but in my mind the 'everything is a resource' design introduces a simplicity that counterbalances the complexity that comes with it. One still has to make an informed decision, of course, and sometimes it really is not the right tool...


This is exactly my experience.

For years I bemoaned that it doesn't scale down to the scale where most engineers work. I was wrong about that; it didn't scale down to the amount of learning I was willing to do for any one project at the scale I was doing them.

SOOO many times I have had to try to make my tools do something they weren't really capable of doing. And while it feels like a win today, it never does tomorrow.

With Kubernetes, most of my issues come from things other than Kubernetes itself. DigitalOcean has a great small-scale experience. AWS EKS has been quite a mess in comparison in terms of ops time required. This made me realize that maybe people are suffering from inefficiencies of the cloud provider, not Kubernetes itself.


> This setup is not truly highly available. The whole cluster depends on the Synology as data storage. You could improve this further by replacing the centralized NAS with a distributed solution. But besides that the cluster is very solid and scalable

I understand that my local hard drive with Excel spreadsheets detailing my entire business, alongside critical software, is not the best place to store the lifeblood of my company, but throwing in the complexity of k8s and STILL not addressing that problem seems completely beside the point.

Digital Ocean managed k8s starts at $30 a month. Use that if you really want to pile on more stress running a company.


Exactly, this makes absolutely no sense for a small business. Managed DO/GKE is a much better idea; maintaining k8s is the last thing a small business should focus on.


GKE can be under that for a single node, but a real setup starts at a few hundred dollars no matter where you are.

I find it amusing I spend more on Kates than cars


I've been running DO k8s for about a year and it works great. But in reality, you won't be paying $30. The node size that gives you is tiny, and almost any real setup will be a few hundred per month.

This is assuming you have some kind of staging environment, probably a demo, and all of these come with at least one database instance, maybe one not as memory-polite as PostgreSQL. Etc. etc.; real-world setups tend to use a bit of hardware.

If I wanted to save money, time and sleep, I would deploy a production cluster to DO, set up an office k8s cluster on local hardware for staging, and host my own testing there. If the office cluster goes down, I would deploy it temporarily in DO.


Was thinking the same. When I look at infrastructure I look at SPOFs, and this one has such a glaring one that I wonder: why go through this ordeal? Even managing and troubleshooting it later will be such a pain in the butt...


As long as you have enabled backups, this is a fairly reasonable solution. And on a 'product' NAS like a Synology, backup is a first-class, 1-click solution.


I've been using Docker Swarm on Hetzner for my fledgling business and am quite happy with it. The main feature I use is rolling upgrades, which make it very easy to deploy changes.

It's simple, and has all the features I need for the stage I'm at. Once I have more demanding infrastructure I'll probably switch over to a managed Kubernetes, but those platforms have a significantly higher unit cost.
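
For reference, the rolling-upgrade part is just a few lines in the stack file (a sketch; the service name and image are placeholders):

  version: "3.8"
  services:
    api:
      image: registry.example.com/api:1.2.3   # bump the tag and redeploy
      deploy:
        replicas: 2
        update_config:
          parallelism: 1            # replace one task at a time
          order: start-first        # start the new task before stopping the old one
          failure_action: rollback  # roll back automatically if the new task fails

Then `docker stack deploy -c stack.yml mystack` rolls the service over with no downtime.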


With your setup, are you using a static front end web server(s?) - eg nginx - reverse proxying to the applications managed with Docker Swarm?


Yes exactly, I have an API and a couple of static sites exposed via nginx. None of the other services have exposed ports and all the networking is done through docker.


Just to double check (as a safety measure), are you using an external firewall for the docker container to ensure nothing can access them, or are you using customised firewall rules on the docker hosts?

Asking because from what I've seen (so far) it's only safe to use an external firewall - eg Digital Ocean (or whoever) firewall applied to hosts - rather than iptables on the hosts themselves.

Saying that because when the Docker service starts on a machine, about the first thing it does is rewrite any existing firewall config to ensure it can pipe data around between things. As a side effect, it seems to open up ports to the whole world, so any carefully secured config you had beforehand becomes useless. :(


Had the same issue with Swarm and iptables/ufw. The solution I found was to replace nginx with Traefik and never expose service ports outside the swarm. All HTTP/TCP/UDP termination is done by Traefik (itself running in Swarm too). So far the experience has been good.

However, you need to bind the Traefik container to the host network, otherwise you will not be able to get the remote host IP properly (x-forwarded-for).

I am mostly scaling up (Hetzner can go up to 256GB RAM and dedicated CPUs). This saves me the hassle of managing distributed storage: I just use local docker volumes and schedule a snapshot of the whole drive as a local backup at Hetzner. Should you need to scale Traefik horizontally (e.g. > ~30K req/s), then you would need something like Consul to manage the distributed configuration. Dealing with docker volumes on distributed storage may be harder though. Would be glad if someone could share their experience doing that.
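
Roughly what the Traefik side of that looks like in a stack file (a sketch only; the image tag, domain and service names are made up, and publishing Traefik's ports in host mode is one way to keep the real client IP):

  version: "3.8"
  services:
    traefik:
      image: traefik:v2.3                  # example tag
      command:
        - --providers.docker.swarmMode=true
        - --providers.docker.exposedByDefault=false
        - --entrypoints.web.address=:80
        - --entrypoints.websecure.address=:443
      ports:
        - target: 80
          published: 80
          mode: host                       # host mode preserves the source IP
        - target: 443
          published: 443
          mode: host
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock:ro
      deploy:
        placement:
          constraints: [node.role == manager]
    myapp:
      image: registry.example.com/myapp:latest   # placeholder app
      deploy:
        labels:                            # routing is declared on the service; no ports exposed
          - traefik.enable=true
          - traefik.http.routers.myapp.rule=Host(`app.example.com`)
          - traefik.http.services.myapp.loadbalancer.server.port=8080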


Interesting. Sounds like a reasonable path to go with when initially growing.

eg go bigger (hosts) first, then figure out horizontal scaling later if/when needed :)


I've been using Kubernetes on hetzner that I installed with one command.


So how do you handle k8s version upgrades every three months? And please don't tell me "who needs upgrades" on a security-critical component.


The small business buys an AS/400. IBM configures it. Drop it in a closet and forget about it.

If something goes wrong it will call for help, before you even know something was wrong.

It depends on what you define as a small business, though.

Back in the day these were extremely reliable. I have seen them in closets, covered with everything else you would find in a closet.

Used by everyone day in and day out, nobody knew where it was or what it was.

It also, in my opinion, had a nifty operating system with a lot of inventive ideas when it came out.

I wish they had released the OS when they discontinued the line.


AS/400 lives on today as IBM i. The lifeblood of my employer (our billing system) runs on a POWER8 box running i, and while I'm not a huge fan, I appreciate the thing staying running without any fuss for years at a time (IPLs for periodic system updates, and just to make sure it still boots, notwithstanding).


In the last ~10 years, I have rarely had any real hardware issues.

My current server is hardware from 2012.

Someone said just a few weeks ago: 'Cars you buy today are all good enough. You will not buy a car today and have it rust tomorrow.'

The only reason I migrated my desktop PC to become my server was the new DSLR, which made Lightroom unusable due to the raw image size.

I might even say that computer hardware in general is very, very reliable, aside from the capacitor plague (https://en.wikipedia.org/wiki/Capacitor_plague) that gave some systems a bad lifetime.

And I have plenty of systems here which have been running for years and have seen plenty of Ubuntu upgrades as well.


Small business + Kubernetes seems like a horrible combo. Spin up a $5 instance on DO and move on.


Or spin up a small k8s cluster on DO, if you have a few apps to run and do not want to spin up / maintain separate instances.


We created the Kubernetes Production Runtime, which helps streamline the installation of a lot of the components mentioned in the article (nginx as ingress controller, Grafana, cert-manager, etc.). It is open source, check it out: https://kubeprod.io


This was a big inspiration for a sort of self-bootstrapping kubernetes "distro" I put together for my previous employer - just wanted to give a shoutout to all the neat things I see coming from bitnami.


Any plans for a quick start guide for DigitalOcean?


I am not sure what is being solved here. It's as if I had asked for 4 seats and instead got a mansion, even though a car would be more in budget and more useful.

Are we just creating complexity for the sake of feeling we are solving something? This system would still go down if there is a fire, which is IMO more likely. Rebuilding the cluster would take time; why not solve the problem of hand-made local deployments instead?


Highly available web-server with batteries for small business:

1. Go to digitalocean.com or linode.com

2. Buy the smallest instance.

3. Install FreeBSD.

4. Let it run for the next 2 decades without ever needing to touch it.


I am a huge FreeBSD nerd, but bro, that isn't even remotely true. Telling people BS like this actively harms FreeBSD and the community that uses/supports it.


A former client of mine had a Linode box running an app that another of his consultants hosted on some random Ubuntu Linux box. It ran for ten years with the occasional (once every couple of months) restart. The ancient Python 2.x version the app used finally started causing problems because it didn't support TLS version one-point-whatever. A month ago I rewrote the part of the app he actually needed and put it on a Linode box running the latest Debian. I suspect it will run, with the occasional restart, for another decade.

Get someone to host (and back up) your PostgreSQL database for you and don't sweat literally everything else. 99.999% uptime is for Walmart.


What about security patches, how does it work in the FreeBSD world?


Same way it works on Linux. The parent poster is exaggerating and greatly oversimplifying the situation.


Remote exploits in apache/nginx and openssh don't come along that often.

YMMV for the appserver and its dependencies.


Hello, just out of curiosity: is the NFS Client Provisioner Helm chart still necessary for normal Kubernetes deployments? My company was using it, but it became a bit of a hassle on GKE for our team: A) if you tried to reprovision the NFS Client Provisioner, it would no longer mount the NFS share directly; it would create a new subdirectory, and the files in the top level of the NFS share would be inaccessible within our pods, and B) it did not support NFSv4. After some research we found that Kubernetes can use NFS shares natively these days. You can mount an NFS share directly in a pod as long as it's reachable on the network:

https://docs.docker.com/ee/ucp/kubernetes/storage/use-nfs-vo...
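
(For anyone landing here later, the direct mount is roughly this; the server address and export path are placeholders:)

  apiVersion: v1
  kind: Pod
  metadata:
    name: app
  spec:
    containers:
      - name: app
        image: nginx                # placeholder image
        volumeMounts:
          - name: data
            mountPath: /data
    volumes:
      - name: data
        nfs:                        # mounted by the kubelet, no provisioner involved
          server: 192.168.1.10      # must be reachable from the nodes
          path: /exported/share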

However, if you would still like to pre-provision the NFS share as a PV/PVC, you can do that too and expose the NFS mount as a Kubernetes object:

https://docs.docker.com/ee/ucp/kubernetes/storage/use-nfs-vo...
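
And the pre-provisioned PV/PVC variant looks roughly like this (again, server and path are placeholders):

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: nfs-data
  spec:
    capacity:
      storage: 5Gi
    accessModes:
      - ReadWriteMany
    persistentVolumeReclaimPolicy: Retain
    nfs:
      server: 192.168.1.10
      path: /exported/share
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: nfs-data
  spec:
    accessModes:
      - ReadWriteMany
    storageClassName: ""            # bind to the pre-created PV instead of a dynamic class
    resources:
      requests:
        storage: 5Gi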

Just wondering: now that NFS mounts are directly available within Kubernetes, is there any advantage to using a "3rd party" NFS Client Provisioner Helm deployment?

Anyway, it looks like a really great write-up. Although Kubernetes can be a big hassle for small teams, I feel like it's slowly becoming more accessible and user friendly for more organizations. Even with the mental overhead and hassle of running Kubernetes, my team at work has seen some clear benefits from going from an ec2/ansible-based architecture to a GKE/helm-based architecture.


You are right, the NFS client provisioner is not needed. This approach was chosen because the NFS server address only has to be configured once, for the provisioner. Each PVC claims a subfolder within the shared NFS share. This might be undesirable in certain scenarios, but it works for us with zero friction.

Creating a PVC is as simple as:

  ---
  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: data
  spec:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 5Gi
And because of this, it's compatible with pretty much all helm charts that rely on the default StorageClass; it works without extra configuration or manual operations like creating shares and figuring out which NFS server to use.
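
(For anyone curious, the chart essentially installs a StorageClass along these lines and you mark it as default; the provisioner string and parameters below are only indicative, they depend on the chart and release name:)

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: nfs-client
    annotations:
      storageclass.kubernetes.io/is-default-class: "true"   # plain PVCs like the one above land here
  provisioner: cluster.local/nfs-client-provisioner          # must match the deployed provisioner
  parameters:
    archiveOnDelete: "true"                                  # archive the subfolder when a PVC is deleted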


I've been using the NFS client provisioner for my lab k8s cluster for a while. The reason I decided to use it was that I wanted to avoid having to create the shares for each service manually. I thought this would be useful for StatefulSets, which provision a single volume for every pod, letting the application manage replication.
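
For context, those per-pod volumes come from volumeClaimTemplates, roughly like this (names and image are placeholders); each replica gets its own PVC from the default StorageClass:

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: db
  spec:
    serviceName: db
    replicas: 3
    selector:
      matchLabels: {app: db}
    template:
      metadata:
        labels: {app: db}
      spec:
        containers:
          - name: db
            image: postgres:12            # placeholder image
            volumeMounts:
              - name: data
                mountPath: /var/lib/postgresql/data
    volumeClaimTemplates:                 # one PVC per pod: data-db-0, data-db-1, ...
      - metadata:
          name: data
        spec:
          accessModes: [ReadWriteOnce]
          resources:
            requests:
              storage: 5Gi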

After a while, I realized it was a bad idea, because it turns out I don't run many services, so creating a share for each service isn't a big deal, and storing the StatefulSet volumes on a single server basically defeats the purpose of StatefulSets.

The second problem was that when I deployed it to a new cluster, I would get new paths on the NFS server, so the data from a previous cluster was not recognized. Since it is a lab environment, I'd rather not bother with backing up the k8s cluster itself. With fixed paths, I can back up the NFS server, redeploy a clean k8s cluster from the manifests, and have the pods recognize previously written data.

As far as I can tell, the provisioner is mostly beneficial if you need to create volumes on the fly (although that may also be possible with directly mounted NFS shares). On another note, have you looked at one of the many ways of getting persistent storage on GKE that isn't NFS? It seems Google has created connectors for their own cloud storage that might be more straightforward to use.


* Highly available

* Kubernetes

* for small business

choose two


What is the value for a small business in Kubernetes?


A lot, if you are using a managed Kubernetes solution. It is surprisingly easy and affordable to set up nowadays. We're currently running on DO and it is a pure joy.

- IaC, no servers to manage (Terraform + kubectl/helm)

- reasonable HA with minimal effort

- easily configurable metrics (prometheus)

- automated/minimal effort horizontal scaling

- easy SSL certificate installation and automated renewal (see the sketch after this list)

- operate multiple low traffic sites from a single cluster

- quickly access your environment through a single IDE (https://k8slens.dev/)
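
To illustrate the certificate point: with cert-manager installed (e.g. via its helm chart) on a reasonably recent cluster, renewal is driven by a couple of resources like these. The issuer name, contact email, domain and service are placeholders:

  apiVersion: cert-manager.io/v1
  kind: ClusterIssuer
  metadata:
    name: letsencrypt
  spec:
    acme:
      email: ops@example.com                              # placeholder contact
      server: https://acme-v02.api.letsencrypt.org/directory
      privateKeySecretRef:
        name: letsencrypt-account-key
      solvers:
        - http01:
            ingress:
              class: nginx
  ---
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: mysite
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt         # cert-manager watches this annotation
  spec:
    tls:
      - hosts: [www.example.com]
        secretName: mysite-tls                            # certificate is stored and renewed here
    rules:
      - host: www.example.com
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: mysite
                  port:
                    number: 80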


This might be an unfair criticism, but Prometheus didn't impress me. Last time I checked, I found that it cannot even compute "average CPU usage of all jobs whose name starts with X"; what it can do is compute "average CPU usage of each job whose name starts with X" and then average those numbers.

So what's the issue, you ask? Well, if you're averaging over one week, and one of these jobs ran for exactly one hour with 1.0 CPU, then the average for that particular job is 1.0/24/7 = 0.00595! Now imagine averaging these numbers and wondering WTF has gone wrong.


1. IaC / no servers isn't a plus. It's vastly easier to SSH into a server and do anything you already can on a desktop

2. So do colos, but with better "HA" and no effort at all

3. /var/log + bash grep

4. Reverse proxy, just add another IP address and a load balancer

5. This is true anywhere

6. Nginx and apache both do this, dead simply

7. SSH + cli editor of your choice

Kubernetes does nothing new, but it pads more resumes. The UX "design" of the devops world.


1. You can't replace everything Terraform does with "SSH into a server". For example, in Terraform you specify exactly which compute resources are going to be used on the node, and even SSH itself might be part of the software that gets provisioned.

2. "do CoLocos" definitely has effort and does not in anyway guarantee HA. (HA is really more of a software application layer concern not hardware) Also, are you telling me you are going to engineer an HA load balancer service better than the people who work for the major cloud vendors? Are you going to setup all your own service discovery, scheduling and everything else k8s does for your application?

3. Prometheus and other solutions are so much more than "/var/log + bash grep" you can't possibly seriously equate the two. Serious applications demand serious metric solutions.

4. Sure, let me do all that while my site is getting the HN hug of death. Or, you know, I could do nothing and let my infrastructure autoscale for me. Even in a non-autoscale setup, all you have to do is change a couple of numbers in your Terraform and k8s config files and reapply (a minimal autoscaling sketch is at the end of this comment).

5. No it isn't. For some webservers, like Caddy, it's easier than others, but in many cases you are going to need a good number of manual steps to get auto-renewal working for your Let's Encrypt certs. On k8s it's basically a config file and a helm chart install, plus you have k8s guaranteeing that it will restart the renewal service should it go down.

6. Yes, you can specify routing config in your webserver, but being able to run completely distinct application servers on the same nodes, with all the routing and service resiliency handled for you and without conflicts with the applications already running, takes a lot of effort to do by hand. You are going to get a lot better bang for your buck running multiple applications on the same set of nodes than having to provision separate nodes for the other applications.

7. SSH + cli does not give you a cluster wide view of your application. You would have to feed all that information into a single location to be able to view it, which would take a lot of effort and still wouldn't provide the same level of detail.

As a small business, I'd much rather have well engineered solutions that will be able to grow with my business. A lot of what you describe might be okay when your business is very small and has a single server (basically you can afford to treat it as a pet), but as your business grows you want to avoid having to rework various parts of your infrastructure if possible. Managed k8s gives you a lot of room to grow with relatively little upfront time investment.
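
On the autoscaling point (4), the per-deployment side of it is roughly this much config; the target name and thresholds are made up (node-level scaling sits on top of this via the cloud provider's autoscaler):

  apiVersion: autoscaling/v2beta2
  kind: HorizontalPodAutoscaler
  metadata:
    name: web
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: web                       # placeholder deployment
    minReplicas: 2
    maxReplicas: 10
    metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70    # add pods when average CPU crosses 70%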


Piggybacking on your comment, not agreeing with the previous poster.

Currently using all of the tooling you've mentioned on-premise and it's great! Using DNS anycast (Cloudflare/Akamai/insert provider) and LB technology we can get highly available without much work. We've got more 9's than our Azure production setups; sadly, though, I feel like Azure isn't a good standard to compare against.

Bridging the gap from the cloud provider managed services to on-premise (colo) is actually relatively simple. By all means 95% of the tooling these providers use is Open source or has a major open source alternative.

The SMB world does change that, to your point, as I doubt you'll have a devops rockstar who can actually grok these things; more than likely you'll have a dinosaur MSP or IT guy (for really small companies).


None, people love overengineering.


Depends on what you consider a small business; is a small SaaS company with 4-5 developers big or small?


That's small, but most small businesses are not SaaS companies, nor do they have 4-5 developers.


I set these Synology systems up from time to time; it's less headache just to use the Synology web wrapper UI than open-source tools. Doing this whole Kubernetes setup seems like a lot of pain.


Before clicking through I thought this was going to be a subscription service that wrapped cloud providers while reducing the amount of in-house knowledge and skillsets required to use K8s.


Regarding all the "small businesses shouldn't use Kubernetes" comments: it depends on the business! Certainly a small website with a mailing list is better set up on Squarespace, PaaS/SaaS, etc., but there are plenty of startup ideas that rely on complete data control, users' ability to manage their own apps, or a large hosting infrastructure. A good example is indie game developers who release their game servers as Docker containers (see: Factorio, Minecraft, etc.). The big players build out their own cloud, but that is a large barrier to entry; plenty of game developers avoid multiplayer for exactly this reason. See Slack vs Mattermost or Facebook vs Mastodon for other examples. Plenty of this bleeds between home hosting and small business. Zoneminder, Mattermost, Wordpress: self-hosting these probably replaces a large bill for most companies, and keeps them 100% compliant with things like GDPR.

If small businesses _did_ have a Kube cluster in their closets, and were able to decide the appropriate mix of SaaS versus on-prem, and were able to carve up that compute securely for their customers, it would enable a whole range of companies to compete with the big platforms. This is exactly what I'm building with my startup. When there is no real difference between hosted and self-hosted besides the power and air conditioning, a huge new market for software opens, and the platforms that have (in my opinion) "ruined" the internet will have some real competition. The internet doesn't need blockchain or federated compute to solve this; we just need people to run server software again! All individuals and small businesses need are really good tools; the decision re: cloud or on-prem should depend on engineering resources, not capital resources.

Say what you will about Kubernetes; it reminds me so much of companies looking into Linux two decades ago. Someone would always say "eh, too complex, just use IIS and call Microsoft when things are broken!". To make the world simpler, you must first make the world more complex.


I'd argue k8s is overkill even for most companies that would fit the M in SMB. As for the purported target group of small businesses: no way. No small business has any need for something like this. I'd argue that unless you run technical setups of FAANG proportions you absolutely don't need it, and if you think you need k8s in a medium-size business, please rethink whether you really need a system engineered for Google scale.


It's not about scaling the applications you run on it.

It's about providing leverage to your team so you need less effort to do more. And SMB can mean anywhere up to 1000 employees under semi-legal definitions, or 500 employees and under 1 billion euro of turnover in the case of the EU's SME definition.

Depending on the area of work, there can be a lot of software to run, and that's where k8s comes in. It might seem "hard" when you're comparing it to "just throw a few packages on a server", but having gone through that recently? The next time I'm setting up a single-node server to support tools for a project, I'm using k3s instead of spending 3-5 days getting software to play nicely with paths moved to non-standard locations so the data stays easy to back up and migrate to another server. Or wrestling with the load balancer (and ours is IaC, even! Still not as nice and easy to deal with as nginx-ingress-controller...).

The real "scaling" of kubernetes is that once you pass the initial hurdle, the cost and complexity of adding more applications is greatly lowered. Even when you do a dumb lift-and-shift (like I did for >60 applications in one project).


> SMB can be anywhere up to 1000 employees, 500 employees and under 1 billion euro of turnover in case of EU's SME definition

By no one's metric is 1000 employees a small business. The EU defines a small business as under 50 staff, and a medium business as under 250 staff.

https://ec.europa.eu/growth/smes/business-friendly-environme...


SMB is not "small business". It's "Small and MEDIUM Business". And even larger business is often supported by smaller subcontractors.

In this area, k8s has been a personal saviour; we could not have taken on a project without it. Just thinking of dealing with ~70 applications that had subtly different requirements leaves me shuddering, and that was a pretty small deal.


I'm a big fan of running Proxmox on a desktop machine, which makes running hundreds of containers super simple.


Any resources on when it makes sense to go from cloud, managed, SaaS Kubernetes to an on-prem deployment?


Once you or your cloud costs are big enough to employ full-time staff to look after servers, racks, and network equipment and to run your workloads. Before that, it's most likely not worth it if it's not your core business.


I have never heard of batteries in this context.

Can someone explain what he means?


small businesses should use SaaS


Kubernetes is overkill for 99% of small businesses




