
The list of things the article says you need to do and learn for a simple, personal project with k8s is absolutely staggering, in my opinion.

Having used it, I can say there's a sizeable amount of further work needed which the article doesn't mention (e.g. learning how to use the pretty confusing Google Cloud interface, finding the right logs and using their tools). So the overhead is really huge.

Furthermore, the whole system is slow. Want to run a SQL query against your Postgres? You need to use a Google Cloud command that changes the firewall and SSHes you into the machine... and this takes a couple of minutes, just enough to make me desist unless I _really_ need to run that query. Abysmal.

Finally, and this is a pet peeve of mine with many advocacy blog posts, they just show you the happy path! Sure, _in the best of cases_ you just edit a file. In a more realistic case, you'll be stuck with a remote management system which is incredibly rich but also has a really steep learning curve. Your setup is not performant? Good luck. Need to tweak or fine-tune? Again, best of luck.

We've tried to adopt k8s 3-4 times at work and every single time productivity dropped significantly without having significant benefits over normal provisioning of machines. {Edit: this does not mean k8s is bad, but rather that we are probably not the right use case for it!}

...which in turn is usually significantly slower than building your own home server (but that's another story!)




This. Time and again. The number of people who adopt complicated stuff like Kubernetes for what is essentially a couple of web servers and a database is too high. They're Google wannabes who think at Google's scale but forget that it is utterly unnecessary in their case.

I know a bio scientist who spent two months working on containers and Docker and what not, for what are essentially independent shell scripts that are best run as batch jobs. I spoke with him at length, and he realized at the end of the day that what he really needed was a better understanding of standard *nix processes, not containers...


I had an excellent time working with kubernetes and I am practically a one person company. Kubernetes frees my mind from so many things that now I hate to work without it. A couple of those things include:

- automated ssl

- centralized logging

- super easy scaling up and down (going from 1 instance to 2 instances is a headache manually)

- centralized auth (have a service which doesn't have built-in auth?)

- super easy recovery (containers recreated/volumes attached automatically/apps just work, unless you have a DB type of application which shouldn't just be restarted, which is rare)

- Smooth CI/CD (gogs/jenkins/private docker image repository)

As for the "execute query" example, why is it such a headache ? I just "kubectl ssh <container>" and I am in.

> I know a bio scientist who spent

Super obscure example. Kubernetes is definitely not for non-tech people. And I didn't pick up k8s overnight. Spent a year doing playful POCs before deploying in a production environment.

If the only thing that's stopping you from using k8s is the learning curve, I suggest you go ahead and learn it. It's a huge boon.


Thanks for sharing your experience.

I question whether it is necessary to hide my operational needs behind a behemoth of complexity like Kubernetes. The list of conveniences you mentioned sounds like magic you get from Kubernetes. What if there is a problem with any of them?

Missing logs?

Inappropriate scaling?

Auth failures? or worse, failure of Auth system?

Easy recovery? What if there were failures to checkpoint/backup/snapshot containers?

CI/CD is good regardless of whether you use Kubernetes or not.

EDIT: The question is, if you have any of these problems, why is it better to get your head into how Kubernetes deals with those operations & tools rather than dealing with well-defined unix tools that specialise in doing these jobs? syslog instead of however Kubernetes gathers the logs, reading FreeIPA docs instead of Kubernetes auth system logs?

My point is that to deal with all of the conveniences you mentioned, you need to know their details anyway. Why rely on the Kubernetes abstraction if there is no such need? (I'm not trying to be snarky. I'm genuinely curious why you think it is a good idea. If you convince me otherwise, perhaps I would start adopting Kubernetes as well.)

I run my cluster (I'm a sysadmin) with:

a couple of OpenBSD servers that run redundant DNS and DHCP.

a CentOS 7 box that runs FreeIPA as central Auth.

an OpenBSD server that acts as public facing SSH server.

about 20 nodes, all provisioned using kickstart files, and then configured using Ansible. They run databases, web servers, batch jobs, Git, etc.

A single server that runs the ELK stack for log analysis.

A single server that runs Prometheus for quick aggregated monitoring.

Do you think I should switch over to Kubernetes for any benefits?


Sounds like what you have works, and Kubernetes might well not benefit you. With roughly 20 nodes, you have more or less 20 "pets" in devops speak and that sounds like an entirely sensible way to manage them. Contrasting with my problem...

I'm a sysadmin who manages thousands of bare metal machines (A touch less than 10,000 Linux boxes). We have gotten to a point in some areas where you can't linearly scale out the app operations teams by hiring more employees so we started looking at container orchestration systems some time ago (I started working on Mesos about 3 years ago before Kubernetes was any good). As a result, I got to understand the ecosystem and set of tools / methodology fairly well. Kelsey Hightower convinced me to switch from Mesos to Kubernetes in the hallway at the Monitorama conference a few years back. Best possible decision I could have made in hindsight.

Kubernetes can't run all of our applications, but it solves a huge class of problems we were having in other areas. Simply moving from a large set of statically provisioned services to simple service discovery is life changing for a lot of teams. Especially when they're struggling to accurately change 200 configs when a node a critical service was running on has a CPU fault and panics + reboots. Could we have done this without kubernetes? Sure, but we wanted to just get the teams thinking about better ways to solve their problems that involved automation vs more manual messing around. Centralized logging? Already have that. Failure of an Auth system? No different than without Kubernetes, you can use sssd to cache LDAP / Kerberos locally. Missing logs? No different than without kubernetes, etc. For us, Kubernetes solves a LOT of our headaches. We can come up with a nice templated "pattern" for continuous delivery of a service and give that template to less technical teams, who find it wonderful. Oh, and we run it bare metal on premise. It wasn't a decision we took lightly, but having used k8s in production for about 9 months, it was the right one for us.
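
To make the service discovery point concrete, a minimal sketch of the Service object that replaces those hand-edited configs might look like this (all names hypothetical); consumers just talk to a stable DNS name instead of a specific node:

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: critical-api
    spec:
      selector:
        app: critical-api        # matches whichever pods currently run the service
      ports:
        - port: 80
          targetPort: 8080
    EOF
    # clients resolve critical-api.default.svc.cluster.local, wherever the pods land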


Sure, but the context here is

> Kubernetes Is a Surprisingly Affordable Platform for Personal Projects

with a counter that

> They're Google wannabes who think at Google's scale but forget that it is utterly unnecessary in their case.

I would posit that at the point you have over a hundred (or a couple hundred) servers, "Google wannabes" applies much less and you have reason to use the Kubernetes features. But I wouldn't expect most personal projects to get anywhere near that threshold.

Hell, I bet the vast majority of personal projects happily sit on one server that meets all their needs, with room to grow on that server or a larger instance of it. Possibly a second one spun up occasionally for specialized processing needs until it's destroyed.


I won't use the term 'vast majority' to stay conservative, but many, many enterprise projects would happily work on one server (well, let's make it two identical servers, for redundancy and HA). You can get a 2U server with 1.5 TB of RAM, dozens of NVMe drives and tens of cores for really cheap nowadays.


And in this, we're entirely in agreement!

I run my personal web stuff in a docker container per application (running as a dedicated user per app via docker's --user=username:group) with iptables rules per user. Kubernetes would work, but is overkill for the 5 vhosts I run via a pretty stripped down Apache config.
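
Roughly what that per-app setup looks like, for the curious (image, user and port are hypothetical; the per-user iptables rules are applied separately):

    docker run -d --name blog \
      --user blogger:blogger \
      -p 127.0.0.1:8081:8080 \
      blog-image:latest
    # Apache then proxies the vhost to 127.0.0.1:8081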


> thousands of bare metal machines (A touch less than 10,000 Linux boxes)

This terminology is confusing to me as someone who's worked in the embedded space. In that field bare-metal implies not running an operating system. So does bare-metal Linux box mean you rent a box and stick a distro on it yourself? I feel like there could be more precise terminology used in that case...


Bare-metal in this context means that you have physical hardware and you're responsible for making sure the system can boot and do the stuff you want, as opposed to what you'd have with a service like Amazon's EC2, where you're given a set of tools to configure and launch a virtual machine image. The distinction is made because the former scenario requires extra work for initial configuration (in terms of OS installation and physical networking and such) and you have the burden of setting up automation to handle scenarios where your OS installation is hosed, and much more.


Bare metal in this context typically means your servers are actual physical servers, not virtual machines.


As others have stated below, in this context you have bare metal and you have the cloud. You could also add in virtual machines, which run either on premise or in the cloud.

Thousands of physical servers that our company manages, not rented but owned by us. Does that help?


Yes. Thank you! If only industry marketing hadn't coined the terms, we might have had: cloud -> provisioned server, bare-metal -> managed server, or some other less context-dependent terms.


Just for the curiosity, "managed server" in this area is already taken, and means that you get a sysadmin with the rented server (the server is managed for you, including installation and maintenance of the software you need on it). It is "higher level" than cloud servers, not "lower level" ;)


It depends on what your operational needs are.

Do you need to do rolling deployments? Boom. Kubernetes pays for itself right there. And that's just the tip of the iceberg. Do you need someone else to join you on ops duties? Boom. If they know Kubernetes, they can easily find their way around your system. Your bespoke system? They're trying to grok your lack of documentation.
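
For what it's worth, a rolling deploy is a one-liner once the Deployment exists (deployment and image names hypothetical):

    kubectl set image deployment/web web=registry.example.com/web:v2
    kubectl rollout status deployment/web     # watch the rollout complete
    kubectl rollout undo deployment/web       # and rollback is just as cheap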

It's a latency/bandwidth thing. The learning curve is the latency. The advantages it brings to the table are the bandwidth. As such, I think it's a great investment even for small businesses/teams, but it's not a good investment for a solo developer who doesn't know it, unless they want to be learning Kubernetes rather than building their project.


I think this attitude is part of the problem.

> Do you need to do rolling deployments? Boom. Kubernetes pays for itself right there.

No, you don’t need to do rolling deployments.

> Do you need someone else to join you on ops duties? Boom. If they know Kubernetes, they can easily find their way around your system.

Easier than ssh’ing in and running ps?

Don’t get me wrong, k8s is great if you need to manage a complex system. It’s just that most of the time you can choose to not build a complex system in the first place.


> Easier than ssh’ing in and running ps?

Have you ever been thrown into an unknown environment? SSH'ing where exactly? Oh but this runs at hosting partner X? That runs at hosting partner Y? Oh but this service is actually behind a reverse proxy running on box Z. Oh you need to RDP to a jump-server to SSH to those machines? Documentation? You mean these outdated Word documents and Excel sheets? And down the rabbit-hole you go. Fun!

And don't say that doesn't happen, I'm a freelance sysadmin - and am exactly in an environment like that right now, it's not the first, and won't be the last. To get a complete picture here, I needed 2 whole months for everything outside the Openshift clusters.

The stuff running on openshift was pretty simple: there is the master, here are your credentials, and that's it. The rest is pretty easy to figure out if you've worked with k8s/openshift. The biggest problem was knowing if the app in question ran on Openshift or somewhere else.


Yes, you’re describing pretty much every environment I’ve ever worked in.

I’m not arguing that k8s is the wrong tool to manage that kind of complexity. I’m arguing that, in almost all cases, that kind of complexity is completely unwarranted.


Love that latency bandwidth explanation. Going to steal it for other contexts.


Do it! I use the latency/bandwidth thing as an intellectual model for all sorts of surprising situations. It's deeply embedded in my frame of reference for looking at the world, especially for doing any sort of work.

In a way, it turns all sorts of things into simple algebra - ax + b. So time = scope / bandwidth + latency. If latency dominates, it takes longer. If bandwidth dominates, it goes faster.


amazing analogy


> The question is, if you have any of these problems, why is it better to get your head into how Kubernetes deals with those operations & tools rather than dealing with well-defined unix tools that specialise in doing these jobs? syslog instead of however Kubernetes gathers the logs, reading FreeIPA docs instead of Kubernetes auth system logs?

A unified interface for things like "find where this service is running". One standardised way to do it - even if Kubernetes were nothing more than an "install log system x and auth service y and dns server z" recipe, there would be a lot of value in that. You talk about "well-defined unix tools" but IME the unix tools are a lot less well-defined than container tooling - e.g. there are several different log systems that describe themselves as "syslog-compatible", but it's not at all clear what that means, and not every application that claims to work with "syslog" will work with every service that claims to be syslog-compatible.

> about 20 nodes, all provisioned using kickstart files, and then configured using Ansible. They run databases, web servers, batch jobs, Git, etc.

Did you face the same questions from other people when you adopted Kickstart? What were your answers then?

The setup you've described is probably 80% of the way to Kubernetes compared to a traditional sysadmin approach. Kubernetes will save you a bit of work in terms of eliminating your "script for infrastructure changes like DNS & DHCP" (which I would suspect is one of the more error-prone parts of your setup? It tends to be, IME) and the need to manually allocate services to classes of hosts (ansible playbooks tend to end up as "we want 3 hosts in this class that run x, z and w, and 2 hosts in this class that run v and y", whereas kubernetes is more "here's a pool of hardware, service x requires 2GB of RAM and needs to run on 3 hosts, you sort it out"). Whether that's worth the cost of migrating what sounds like a fairly recently updated infrastructure setup is another question though.
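
That "you sort it out" part is literally what the spec says; a rough sketch, with hypothetical names and numbers:

    cat <<EOF | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: service-x
    spec:
      replicas: 3                  # three copies; the scheduler spreads them over the pool
      selector:
        matchLabels:
          app: service-x
      template:
        metadata:
          labels:
            app: service-x
        spec:
          containers:
            - name: service-x
              image: registry.example.com/service-x:1.0
              resources:
                requests:
                  memory: "2Gi"    # "requires 2GB of RAM"
    EOF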


Cool. You have something bespoke. It works for you, but it's going to include a lot of toil when someone replaces you.

Now personally, I'd rather kubectl get pods --all-namespaces, figure out where the log collector is, what's wrong with it, and fix it; instead, I'm probably going to be reading your docs and still trying to figure out where these things live by the time I would have already fixed it on a kube cluster.
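
i.e. something along these lines (namespace and pod names are hypothetical):

    kubectl get pods --all-namespaces | grep -i fluent      # find the log collector
    kubectl -n logging describe pod fluentd-zxt4p           # see why it's unhealthy
    kubectl -n logging logs fluentd-zxt4p --previous        # read the crashed container's logs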


I'm not sure I understand. Sorry, my exposure to Kubernetes is only a few days and is limited to an overview of all of its components and a workshop by Google.

> It works for you, but it's going to include a lot of toil when someone replaces you.

I was thinking that Ansible + FreeIPA (RedHat enterprise product) + Elastic logging setup (ELK) + Prometheus would be easier for my successor to deal with than figuring out my bespoke setup of Kubernetes (which keeps adding new features every so often). Even if I did not create proper docs (I do my best to have good docs of whatever I do), my successor would be better off relying on RedHat's specific documentation rather than guessing what I did to a Kubernetes version from 6 months ago...

If something breaks in FreeIPA or Unbound (DNS) or Ansible, it is much easier to ask targeted questions on StackOverflow or look up their appropriate manuals. They don't change as often as Kubernetes does. Don't you think?

Alternatively, if something breaks on Kubernetes, you'd have to start digging into Kubernetes' implementation of whatever feature it is, and hope that the main product hasn't moved on to the next release.

Is it not the case? Is Kubernetes standard enough that their manuals are RedHat quality and is there always a direct way of figuring out what is wrong or what the configuration options are?

Here I was thinking that my successor would hate me if I built a bespoke Kubernetes cluster rather than using standard enterprise components such as the ones I listed above.


> I'm not sure I understand. Sorry, my exposure to Kubernetes is only a few days and is limited to an overview of all of its components and a workshop by Google.

No worries. The fact you asked the question is a positive, even if we end up agreeing to disagree.

> I was thinking that Ansible + FreeIPA (RedHat enterprise product) + Elastic logging setup (ELK) + Prometheus would be easier for my successor to deal with than figuring out my bespoke setup of Kubernetes (which keeps adding new features every so often). Even if I did not create proper docs (I do my best to have good docs of whatever I do), my successor would be better off relying on RedHat's specific documentation rather than guessing what I did to a Kubernetes version from 6 months ago...

So both FreeIPA and ELK would be things we would install onto a kube cluster, which is rather what I was commenting about. When either piece of software has issues on a kubernetes cluster, I can trivially use kubernetes to find, exec into and repair them. I know how they run (they're kubernetes pods) and I can see the spec of how they run based on the kubernetes manifest and Dockerfile. I know where to look for these things, because everything runs the same way in kubernetes. If you've used an upstream chart, such as in the case of prometheus, even better.
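
For example, getting the upstream prometheus chart running is roughly this (helm 2-era syntax; release and namespace names are hypothetical):

    helm install stable/prometheus --name monitoring --namespace monitoring
    kubectl -n monitoring get pods    # everything the chart created, in one place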

For things that aren't trivial we still need to learn how the software works. All kubernetes is solving is me figuring out how you've hosted these things, which can be done either well or poorly, and documented either well or poorly, but with kube, it's largely just an API object you can look at.

> Is it not the case? Is Kubernetes standard enough that their manuals are RedHat quality and is there always a direct way of figuring out what is wrong or what the configuration options are?

Redhat sells kubernetes. They call it OpenShift. The docs are well written, in my opinion.

The bigger picture is: if you're running a kubernetes cluster, you run the kubernetes cluster. You should be an expert in this, much the same way you need to be an expert in chef and puppet. This isn't the useful part of the stack; the useful part is running apps. This is where kubernetes makes things easier. Assembling your bespoke kubernetes itself is a different thing. Use a managed service if you're a small org, and a popular/standard solution if you're building it yourself.


Thanks for the patient response.

Reading through your response already showed that at least some of my understanding of Kubernetes was wrong and that I need to look into it further. I was assuming that Kubernetes would encompass the auth provider, logging provider and such. Honestly, it drew a parallel to systemd in my mind, trying to be this "I do everything" mess. The one-day workshop I attended at Google gave me that impression, as it involved setting up an api server, ingress controller, logging container (something to do with StackDriver that Google had internally), and more for running a hello-world application. That formed my opinion that it had more moving parts than necessary.

If there is a minimal abstraction of Kubernetes that just orchestrates the operation of my standard components (FreeIPA, nginx, Postgres, Git, batch compute nodes), then it is different from what I took it to be.

> if you're running a kubernetes cluster, you run the kubernetes cluster. You should be an expert in this, much the same way you need to be an expert in chef and puppet.

I think that is the key. At the end of the day, it becomes a value proposition. If I run all my components manually, I need to babysit them in operation. Kubernetes could take care of some of the babysitting, but the rest of the time, I need to be a Kubernetes expert to babysit Kubernetes itself. I need to decide whether the convenience of running everything as containers from yaml files is worth the complexity of becoming an expert at Kubernetes, and the added moving parts (api server, etcd, etc.).

I will play with Kubernetes in my spare time on spare machines to make such a call myself. Thanks for sharing your experience.


> That formed my opinion that it had more moving parts than necessary.

Ingress controllers and log shippers are pluggable. I'd say most stacks are going to want both, so it makes sense for an MSP to provide these, but you can roll your own: you install the upstream components the same way you run anything else on the cluster. Basically, you're dogfooding everything once you get past the basic distributed scheduler components.

> I think that is the key. At the end of the day, it becomes a value proposition. If I run all my components manually, I need to babysit them in operation. Kubernetes could take care of some of the babysitting, but the rest of the time, I need to be a Kubernetes expert to babysit Kubernetes itself.

So it depends what you do. Most of us have an end goal of delivering some sort of product and thus we don't really need to run the clusters ourselves. Much the same way we don't run the underlying AWS components.

Personally, I run them and find them quite reliable, so I get the babysitting at a low cost, but I also know how to debug a cluster and how to use one as a developer. I don't think running the clusters is for everyone, but if running apps is what you do for a living, a solution like this makes sense at most levels of the stack. Once you have the problem of running all sorts of apps for various dev teams, a platform will really shine, and once you have the knowledge, you probably won't go back to gluing solutions together.


>RedHat quality

cough

https://www.openshift.com/


True. I cannot generalise like that. I was only thinking about RHEL and IDM (the RH version of FreeIPA) - that documentation is super thorough and very helpful IMO.


https://access.redhat.com/documentation/en-us/openshift_cont... :)

Openshift is something Redhat is pushing heavily, and the documentation is extremely good. Also, several of the openshift devs frequently comment here on HN.


It’s very generous of you to assume the guy before you documented everything.


Indeed, that's part of the point. I first started using kube at a UK Gov org about 4 years ago. All the units of the org had different standards for hosting, they weren't written down, and the vendors would quote unreasonable sums of money knowing that their systems weren't really supportable by anyone without specific experience of the system.

Kube was used to enforce standards, and make things supportable by a wider number of people, which is a weird thing to say since barely anyone used it at the time.

This was a large success though, as we only really needed to train people to use kubernetes once, and then it was over to worrying about the actual applications.


That's not so bespoke, that's very much standard software, services and operating systems, which is very familiar to every UNIX sysadmin. It would be much easier to pass that system to a new sysadmin than a Kubernetes cluster. By far.


It's not that bespoke, but it is a bunch of building blocks glued together, rather than being a framework, and thus one needs to figure out which blocks you've glued together.

Your average guy with unix skills who works on small deploys probably doesn't want to learn kubernetes; once they work on larger deployments, they tend to convert, in my experience.


I'm just learning Kubernetes now, but I've managed multi-thousand node VMware deployments, and a lot of these arguments seem to boil down to your entry point.

If your entry point is knowing Kubernetes really well but not knowing traditional stacks as well, the Kubernetes stack will probably make a lot more sense to you. Naturally, if you've never touched k8s, it will be a lot more confusing to troubleshoot a multi-tier app cluster on it than to troubleshoot on a traditional setup, even if you have to read the docs on the former first.

Obviously the abstraction has a lot of advantages; there are reasons people are moving towards it, but as in the case of this blogpost, I think a lot of people are using it as an entry point/shortcut for 'learning' systems that are more complex than they realize. That's not a bad thing; again, abstraction is great for many things. Hopefully, with time, they'll learn to troubleshoot the underpinnings as they get deeper and deeper into the tech.


The thing you know will always seem simpler than the thing you don't, but I think it's simply a convention versus configuration argument and a question of where you want the abstractions.

Personally, I've used both, but many of the traditional stacks are manually configured blobs with abstractions in the wrong place, and I'm happy that your average sysadmin no longer needs to build these.


This ^

As a former maintainer of the saltstack config management software (and still occasional contributor): https://github.com/saltstack/salt/commits?author=SEJeff

I find that kubernetes is a better fit for a lot of the applications which previously had dedicated hardware that ran at 0.00% utilization all day. It is also a much better user experience for developers, as the process was previously along the lines of:

1. User contacts linux team and asks for a server to run say a webapp but it needs redis

2. Linux team sees the ticket and realizes there is no hardware to run dedicated app, so they ask the Datacenter team to get a server.

3. There are no unused servers in the warehouse, so the Datacenter team gets purchasing to send a PO to the server vendor, and in a week we have a new server.

4. The Datacenter team racks the server, and goes back and forth with the Linux team until the network config is correct from a physical standpoint.

5. The Linux team builds out the box and configures the services the user requests, with config management and some fancy tunables for that user.

6. The user gets the server only to find a few things are misconfigured that they don't have access to change, and we go back to step 5.

7. The app works, but the user has to ask the Linux team to update internal DNS for their new service (adding a CNAME generally).

The process now involves:

1. User creates container images of their apps and pushes to our internal registry.

2. User looks at the copious example application templates and makes their app run in kubernetes.

4. Their app runs, but if they have questions, the Linux team will help them with specific requirements. They get automatic DNS via coredns, which runs in every kubernetes cluster and creates a DNS record for each service that runs (see the sketch after this list).

5. They spend more time prototyping new software than they do worrying about deployment.
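
To illustrate the automatic DNS from step 4: once a team's Service exists, any other workload in the cluster can reach it at a predictable name (service, namespace, port and path here are hypothetical):

    # from inside any pod in the cluster
    curl http://webapp.team-a.svc.cluster.local:8080/healthz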


Provisioning hardware still matters depending on requirements. I think you oversimplified a bit here. The user in your scenario probably needs to talk to the devops team and discuss expected resource needs unless it's really a tiny webapp that could have run on a shared server in the past anyway or in a VM.

K8s definitely helps with better resource usage and it can make deployment much more straightforward, but it doesn't abstract away the need to think about capacity and provision hardware.


We have LimitRange and ResourceQuota objects set up per namespace in kubernetes. Each team talks with us about expected needs before we grant them the ability to log in to kubernetes and create a namespace for them to begin with. If a team needs more of either, we can up them. We have grafana dashboards of the cluster utilization and will proactively order extra servers to add capacity as needed. So far, so good!
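
A per-namespace quota is just a small object along these lines (names and limits hypothetical):

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: team-a-quota
      namespace: team-a
    spec:
      hard:
        requests.cpu: "8"
        requests.memory: 16Gi
        pods: "30"
    EOF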

I'm not pretending I'm not conflating things in this example, but for us, it was a way to solve a lot of "legacy problems" by forcing a move to a new paradigm. So far, everyone loves it (with a few exceptions!)


I never used Salt, but I've used Puppet extensively. Obviously you know virtualization and containerization existed before Docker and k8s, but you're simplifying and conflating a lot of stuff here.


The last process should be numbered 1-4 (you skipped 3)?


Take your damned upvote! You're right, but I can't edit it now to fix.


but the situation could be the reverse: I replace someone who did a pile of k8s and didn't document anything, and while I could orchestrate the whole thing with libvirt recipes in my sleep, I wouldn't even know how to look up the command

> kubectl get pods --all-namespaces

so familiarity is not a good argument here


True. In order to know kubernetes, you need to know kubernetes. But the point is that at least it's a framework, rather than something you or I came up with and never wrote down. But if you don't know it, you don't know it. The question is, is learning it useful?


> The question is, is learning it useful?

Even that question is too broadly posed. The question is, is learning it in addition to[1] the previously-standard tools useful?

[1] In the context of the article, this might be "instead of", which presents a much lower bar.


I don't see how libvirt is easier to learn than kubernetes.

Neither one is obvious to a sysadmin who hasn't worked in either world.


I meant that familiarity with a framework isn’t an argument. Libvirt isn’t doing exactly the same thing as k8s, and it isn’t easier or harder.


In my opinion, operations are always a behemoth of complexity.

Kubernetes allows me to express myself concisely and effectively on the level of complexity I will eventually always encounter in the job, without the particularities of certain tools or manual execution of certain tasks, so for me the complexity is less with Kubernetes.


> The question is, if you have any of these problems, why is it better to get your head into how Kubernetes deals with those operations & tools rather than dealing with well-defined unix tools that specialise in doing these jobs? syslog instead of however Kubernetes gathers the logs, reading FreeIPA docs instead of Kubernetes auth system logs?

The author answers this: learning how to stitch together a bunch of disparate unix tools into a smoothly-operating system requires a whole bunch of professional experience. The author claims that k8s requires the same amount of experience but scales; personally, I think he's being too charitable (or, to take a different perspective, he undervalues the difficulty of being a traditional sysadmin). Perhaps he meant "for a single machine". In my opinion, once you extend beyond one machine, k8s is quite a lot simpler.


Can I ask if your nodes are "in the cloud" (i.e. AWS/Rackspace) or in _your_ datacentre?

Can you go touch your boxes? Who can add a new one, and how long does it take?

As I understand it, k8s is kind of designed for renting a bunch of AWS boxes and just having "my" cluster look like it operates separately (the traefik router/proxy comes to mind as something K8S should do).

I speak as a complete K8s novice.


Our infrastructure is in house - research group in a University setting. Public cloud would be an absurd option for our needs economically, considering that we run our servers practically 24/7, albeit with the convenience of scheduled downtime, and that our servers tend to live longer than even 4 years in service. For example, some of our compute nodes, after 5 years of service, are now serving as "console servers" for users doing scientific work on the command line. We run bare-metal servers for performance intensive workloads and databases, and KVM based virtual machines for other services.

> Can you go touch your boxes? Who can add a new one, and how long does it take?

This is surprisingly not that long. I can order new machines and they are delivered for me to use in just 2 weeks. With a little bit of planning, I can squeeze a lot of performance out of them. For example, I can avoid noisy neighbours because I can control what is deployed where physically.

Another benefit of non-cloud is that I can customise the machines for their purpose. I recently built a FreeBSD/ZFS based server and chose a cheaper CPU and a lot of fast RAM to go with it. For DNS servers, queue servers and such I chose a CPU that has higher single-threaded performance at higher clock rates rather than one with more cores and slower clock rates.


I see - sadly for my personal projects I can either pay for cloud hosting, or I can pay for filling up our spare room with noisy servers, cat5 cabling and general chaos.

I know which price is more expensive :-)


how often do you update your nodes?!


All nodes are bare-metal servers that run CentOS 7, and are configured strictly via Ansible. If a node experiences a hardware failure, we just pick another spare server and run our Ansible playbook on it + a script for infrastructure changes like DNS & DHCP.

Our workload is not strictly attached to physical resources. So, we have our updates scheduled for every week, with the condition that updates run when there are no user tasks scheduled on the nodes.

The other nodes that serve a specific purpose - like database servers, app servers etc, get updated regularly for security updates as & when they are available and necessary, and checked for version upgrades (with scheduled downtime) once every 4 months.


It sounds like the prior commenter Ansible-ized their environment; they likely update the 20+ machines often since it's not much of an effort.


> As for the "execute query" example, why is it such a headache ? I just "kubectl ssh <container>" and I am in.

It's slow as hell.

> I had an excellent time working with kubernetes

I'm happy for you, seriously, and I'm not claiming general validity of my experience. What I am disputing is the blog post (which does claim general validity).


The learning curve is sharp. The amount of things happening that you don't have awareness of is also worrisome (to me).

Source: I am also using Kubernetes in production, migrating off it soon.


It would be interesting if you could briefly explain your reasons for moving away from K8s and what you're moving to.


Sure.

The largest reason is cost. My small deployment (3 nodes) runs around $100/mo on AWS (that's my app, nginx, redis, and postgres).

It doesn't even need 3 nodes; I don't recall if 3 is the minimum, but realistically I only need one (for now). For larger projects this is probably a non-issue.

The second largest reason is that I really have no idea what is going on on these nodes, and I probably never will. Magically, my services run when I have the correct configurations. Not to say that's always a bad thing, but I've found it difficult to determine the default level of security as a result of this.

A third reason is the learning curve. This is less of an issue because I've invested the time to learn already. But like, the first time I tried to get traffic ingressed was painful.

As for what I'm moving to, I migrated one of my websites to a simple rsync + docker-compose setup and am pretty happy with it. In the past, I ran ansible on a 50 node cluster and it worked really well.
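
For anyone curious, that kind of deploy is roughly two commands (host, user and paths are hypothetical):

    rsync -az --delete ./ deploy@myhost.example.com:/srv/mysite/
    ssh deploy@myhost.example.com 'cd /srv/mysite && docker-compose pull && docker-compose up -d'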


I'm not really clear on how moving off Kubernetes saves you money in this scenario. It seems like the most likely source of the cost is AWS, which is the non-free part. I'm just learning k8s, so I feel you on the learning curve.


Kubernetes has a minimum node count. Moving to one node saves cost by a factor of 3. Not to mention all of the other resources it creates (load balancers).


To play devil's advocate, why not use AppEngine or Heroku, which do most of these things without you having to manually set up each of the apps with long YAML configs involving resource constraints and restart policies?


I gather it's simply less popular because they're proprietary, opaque and opinionated.

I suspect we're going to continue to see layers built on top of kubernetes, such as gitlab's autodevops, where for the simple things you don't need to write any of that, but you can still inspect what they create and jump off the rails for things that are a bit more complex.


Hi, GitLab PM here, thanks for the mention. We're indeed trying to make use of kubernetes inside GitLab as simple as possible. And just to add to your comment, you can try out kubernetes integration (https://docs.gitlab.com/ee/user/project/clusters/) independent of auto devops and vice-versa (https://docs.gitlab.com/ee/topics/autodevops/). Both of these are available in our core offering.


Honestly, this is why so many people just use Heroku.


> I am practically a one person company

> Spent a year

> go ahead and learn it. Its a huge boon.

for a greenfield deployment with a 1-year R&D budget for devops, sure, K8s is perhaps a great choice, but for the rest of us (even in tech)?


Would it be possible to elaborate on centralized auth with an example? I've done a small amount of playing around with k8s but I'd not heard of this specific use case.


I really think this "kubernetes is complex" meme needs to die.

I mean, sure, kubernetes _is_ complex. But it's complex in the way that using Rails is more complex than pure Ruby. It's fine when you're playing, or you have a weird subset of problems where you don't care if something runs or not, but as soon as you deal with the problems of HA, DR, live deployments, secret management, sharing servers, working in a team, etc., then if you're solving these issues with ansible and terraform, you're probably just inventing something as complex, and worse, bespoke to your problem.

At the end of the day, once you've learned to use kube, it's probably no worse than any other configuration-as-code based system, and the question is which of us should be learning it and when, and which of us should just be using higher levels of abstraction like gitlab's autodevops.

Now, indeed, if you're just hacking and you have no need for a CI and CD pipeline, or reproducibility, etc, then sure, it's probably not the time to learn a hosting framework, but once you do know it, you're free to make an informed opinion on your problem.

Personally, I sell kube a lot, but I tend to work for orgs that have several development teams and several apps, and bringing them to a platform seems like a good idea. The question is, should I also put a startup on GKE, or should I do something bespoke where they can't hire another contractor to take over as easily? Personally, I'd go GKE.


Managed k8s (e.g. GKE) and unmanaged k8s (e.g. kops/kubeadm etc) aren't really the same thing in terms of complexity.

GKE takes a lot of the load off by making it Google's problem. The possible downside (which may or may not matter depending on use case) is that you're tied to what Google's control plane provides.

So if you want a shiny admission controller, you're out of luck unless they support it.

But for basic workloads, it's likely to work well and their management solves a lot of vanilla k8s pain points.


This. It's perfectly fine to use GKE or AWS. Trying to set up your own cloud provider for a small project? Crazy.

The number of hours you have to spend learning how k8s works just to set it up properly from scratch will far exceed the R&D and operational cost of a less sophisticated small-project setup built from scratch.

Pay for managed services, or do it as simple as possible.


Hello, Community Advocate from GitLab here. Thanks for mentioning our Auto DevOps.

One of its main functionalities is to eliminate the complexity of getting going with automated software delivery by automatically setting up the pipeline and necessary integrations, freeing you up to focus on the culture part. That means everyone can skip the manual work of configuration and focus on the creative and human aspects of software creation.

Here's the doc with more info about it: https://docs.gitlab.com/ee/topics/autodevops/


You also give us a kubernetes chart that makes it a 5-minute task to install your own gitlab.

Jobs actually run in their own kubernetes pods, and since this is separate from the master, it goes a long way toward actually having something that resembles a secure SCM, CI and CD.

I don't work for gitlab, I'm just someone who steals their hard work and sells it to clients. :)


> This find | xargs mawk | mawk pipeline gets us down to a runtime of about 12 seconds, or about 270MB/sec, which is around 235 times faster than the Hadoop implementation.

https://adamdrake.com/command-line-tools-can-be-235x-faster-...


Pipelines of gunzip, find, grep, xargs, awk etc. on RAID disks... good memories. Analyzed terabytes of data with that. Hard to beat because of the zero setup time.
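
The general shape of those pipelines, for anyone who hasn't seen one (paths, field numbers and the filter are all hypothetical):

    find /data/logs -name '*.gz' -print0 \
      | xargs -0 -n1 -P "$(nproc)" gzip -dc \
      | mawk -F '\t' '$3 == "ERROR" { count[$5]++ } END { for (k in count) print k, count[k] }'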


And now your customers expect to see new data immediately reflected in their account. What then?


It depends.

If you have one customer who needs it once a week, you add this find-grep-awk script to xinetd and set up a PHP page with a couple of fields that set the arguments for the request.

If you have a million customers per hour, you set up a bunch of terabyte-RAM servers with a realtime Scala pipeline, and hire a team to tweak it day and night.

Because those are two very different problems. And the worst thing to do is to try to solve problem X with the tools for problem Y.


All those tools are stream processors, why wouldn't the customers see the new data immediately reflected in their account?


Spawn as many servers as you wish, spread your data between them, run your scripts, generate reports, show them to the customers.


Pipe it to a websocket, or curl to some update-account API. Or a mysql/psql/whatever CLI in CSV upload mode so you don't have to worry about injection.

If you want to batch on more than lines, use sponge, or write another few lines of mawk/perl/whatever.

Those are limited examples, and may not always be The Right Way (tm), but there are certainly easy, old, simple ways to take shell pipelines and make the data available quickly in some other store.
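
One hedged sketch of that idea, batching the pipeline output with sponge and handing it to an API (script name and endpoint are hypothetical):

    ./nightly-report.sh \
      | sponge \
      | curl -sS -X POST -H 'Content-Type: text/csv' \
             --data-binary @- https://api.example.com/accounts/import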


That's actually very simple, and there are many ways to do that. If the nature of the task you deal with allows this kind of workflow, it's really worth considering. These days I would use a more proper language like D as a wrapper rather than Bash itself, for greater flexibility.


IMHO the point of running Kubernetes for personal projects would be to share the same virtual/physical machines between multiple projects while still keeping things manageable.

Over the years you easily end up with a pile of these projects, which you want to keep running mostly for fun and therefore want to minimize the associated fixed costs. Using containers may help in keeping the herd manageable so that you can, for example, move everything to some other hosting provider.


Precisely. I use Docker for my personal projects whenever possible, because it keeps the host tidy and avoids conflicts. If I want to play with something new, I can just throw it onto an existing server and start playing, rather than deploying a new VM specifically for it. When I get bored, it's a 2 minute job to trash the container.


Not that the author is right, but he does address exactly these criticisms. Besides, that someone used Docker inappropriately isn't evidence that Docker is the wrong tool for personal projects (again, it may be true that Docker is the wrong tool for personal projects, but that is not evidenced by your bio scientist friend scenario).


> ... for what is essentially a couple of web servers and a database

Interestingly, while the author compares k8s to an SQL database, they actually do not deploy a DB. It's all fun and giggles until you deploy a DB.


As someone who's just starting to learn kubernetes (but has a strong background in virtualization and multi-thousand-node environments), I was inclined to agree at first, but now I see it as more of a modern "home lab" kind of thing. I don't see it as doing it to be a fake-Googler, but more to learn the scalable way of doing things.

Or, at worst, a shortcut to understanding the underlying tech. But isn't that the point of abstraction?


Same here, you really have to look at your own needs. I was thinking about using something like that, but at the end of the day I would waste time and energy and complicate my setup a lot for no benefit. We are simply not at that scale and don't have those problems, and as much as I would like to play with new technology (and I do that privately), the cost/benefit calculation for us is not working right now.


> They're Google wannabes who think at Google's scale

and the people who, when they show up at their Google interview, nail the part about scaling. Not that Google ever asked me how to scale a project, but it doesn't hurt to learn things just for the sake of learning.


Sorry, you quoted me in partial context. I said:

> They're Google wannabes who think at Google's scale but forget that it is utterly unnecessary in their case.

Meaning they _unnecessarily_ think at Google's scale when there is no need for it.


I see! My bad then :)


> the people who, when they show up at their Google interview, nail the part about scaling

Having interviewed many tens of engineers at Amazon - no, not really.


Isn't someone who played with e.g. k8s more likely to understand scaling than someone who never cared about it though?


I think the opposite. K8s hides so many things under layers of NAT that you really don't understand what does and doesn't work.

The 5-15 issue is a self-inflicted wound k8s brings to the party that is quite interesting to work around:

https://tech.xing.com/a-reason-for-unexplained-connection-ti...


k8s hides away some scalability problems and does not address many others.

Most real scalability issues exist on lower levels and require more theoretical thinking than experimenting with tools.


That's kinda like saying people who use postgres understand query performance more than people who use mysql.


> people who use postgres understand query performance more than people who use mysql

This statement strikes me as likely to be true, at least for database non-experts.

It was certainly true for me. I didn't have anything approaching a decent understanding of query performance until I discovered postgresql's EXPLAIN and, perhaps more importantly, EXPLAIN ANALYZE.
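
For anyone who hasn't tried it, it's a one-liner (database, table and column names are hypothetical):

    psql -d shopdb -c "EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;"
    # prints the chosen plan plus actual row counts and timings for each node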


I think it's more like postgres vs csv.


When I got involved with it (and to be fair, most devops things) the value was more in what I was learning for myself, as opposed to what was suitable for the business. Because it's pretty exciting tech and everyone's talking about it, and it doesn't feel like a rabbit hole until you pull yourself back out and see the light again.

So, what happened almost every time is that the business unknowingly pivoted into innovating on PaaS, because Docker/Kubernetes was only the first step. After that, you had to figure out the best approach to CI, how to set up your deploy pipelines across multiple environments, how to replicate what you were used to seeing with Heroku, etc.

And of course the cost balloons at that stage because your cheap 3 node setup ends up exposing half a dozen load balancers that each cost money, so you start to think about hosting Traefik or nginx-ingress-controller so you can manage load balancing yourself and save some money, and because it's so easy to add more services once you know how, you start wanting to self-host all the things.

Meanwhile your startup hasn't even launched to the public yet and the sunk cost fallacy has fully embedded itself in the team mentality: they've just put months of time and effort into setting up the perfect K8S/Docker architecture and workflow, which now requires actual devops experience to maintain, and you can't push back on it because it's all about scaling when things go live, and self-hosting, and how convenient it all is.

Except, you know, that's 3-6 months of runway down the drain because leadership didn't shift the focus back to an MVP and instead let the fantasy continue. And it would be hard to justify anything like Kubernetes for pushing an MVP out of the door; that's what Heroku and cheap Digital Ocean boxes are for.


> And it would be hard to justify anything like Kubernetes for pushing an MVP out of the door; that's what Heroku and cheap Digital Ocean boxes are for.

Exactly. I was following along with the article until he mentioned that startups are doing this. Using k8s at a startup (especially a startup building an MVP) is just like using microservices. It is a disservice to the business by technologists.

Building the monolith on heroku will get you to market faster.

Yagni. (Until you do need it, and then by all means pay off that technical debt.)


Totally disagree. Our startup uses Kubernetes and Google Cloud and shipped an MVP in a few months. No operations people, only devs and devops. Kubernetes is not perfectly set up yet and the logging has failed sometimes, but we ship code to production many times per day and are really happy about not having to care about infrastructure that much. Three nodes and Cloud SQL (PostgreSQL) go a long way.


How much money do you save vs using Heroku?


Did you work at my former employer? Because that is exactly what happened there. No product launched, full k8s and CI/CD craziness.


It is sort of funny to hear people decrying architecting things properly from the start (for certain values of properly) vs the usual "do it half-assed and solve technical debt later" method.


I can recommend CaptainDuckDuck for simpler use cases: one-click deployments (e.g. MySQL, PostgreSQL, MongoDB, ...), an easy-to-use CLI tool, automatic HTTPS, Docker Swarm support (clustering and scaling), a web GUI, and it's open source.

https://captainduckduck.com

https://github.com/githubsaturn/captainduckduck

It made it so much easier to deploy new changes (just `captainduckduck deploy` and done). We also use minio.io (open-source S3) and are extremely productive with those tools.

It's perfect for web agencies. It's not sophisticated enough for k8s' use case, but it's extremely easy to use (you can just add nodes and it will restart containers automatically [1]).

[1]: https://captainduckduck.com/docs/app-scaling-and-cluster.htm...


From https://captainduckduck.com/docs/get-started.html

D) Disable Firewall
Some server providers have strict firewall settings. To disable firewall on Ubuntu: ufw disable

WTF?


Kasra from CaptainDuckDuck here.

Getting Started section is aimed for beginners. You surely don't have to disable firewalls entirely. There's a section in the docs outlining the ports that are being used by Captain.


This looks interesting. I hadn't heard of CaptainDuckDuck. Have you tried Dokku or Flynn (flynn.io)? I had looked at them and settled on Flynn because it had better support for multi-host clusters.


I looked at Dokku and Flynn for personal (single host) projects and found both to be too complicated for my needs. I generally prefer deploying Docker containers over Heroku buildpacks.

I ended up just running a single Digital Ocean droplet with Traefik as a loadbalancer/entry point, and then running each of my projects with docker-compose. (And building each project with Docker)

With Traefik I can set up reverse proxying for each project just by adding a few labels to its docker containers, and Traefik manages LetsEncrypt for me.
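
Roughly what that looks like in a project's docker-compose.yml (Traefik v1-style labels; image and domain are hypothetical):

    cat > docker-compose.yml <<'EOF'
    version: "3"
    services:
      blog:
        image: registry.example.com/blog:latest
        labels:
          - traefik.enable=true
          - traefik.port=8080
          - traefik.frontend.rule=Host:blog.example.com
        networks:
          - web
    networks:
      web:
        external: true    # the network Traefik itself is attached to
    EOF
    docker-compose up -d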


Flynn seems to have stalled, unfortunately. It was a pretty straightforward setup, and I got apps running on there rather quickly. There's still no real support for letsencrypt, which is kind of a killer.

Dokku has no support for clustering, but it's amazingly simple to set up and start deploying apps. (Digital Ocean has a pre-built and up-to-date image ready to go.) I like it for the heroku-style buildpack deployment, although it also has dockerfile-based deployment. Great little system for low-risk projects.


Tried Dokku in the past, but it was too barebones for me. Flynn also looked good, but it seemed too complicated (more features than we need) and we settled on CDD.


What functionality was Dokku missing for you? It has quite a few features now, and we're adding new functionality all the time.


Mainly a web interface. I never gave dokku-man a try, but it doesn't look as feature-rich as the web interface for CDD.


Is that something you'd be willing to pay for?


Do you mean theoretically? We already use CaptainDuckDuck. I could imagine giving a bounty for it, but I don't intend to switch anyway.


I mean theoretically, but makes sense.


Agreed 100x. I find it very hard to understand why some people talk themselves into thinking they need a distributed container orchestration and management platform, when they don’t have a container orchestration problem: they just want to run an app or two with a DB under it.

We need to go back and understand the problem we’re trying to solve, and for 99% of smallish companies and projects this is not container orchestration or hardware resource abstraction.


> an app or two with a DB under it

And most often, all that app or two is doing is converting JSON to SQL and SQL to JSON. And PostgREST would be pretty much enough for that.


You don't need to run the gcloud command to connect to a Cloud SQL instance. You can run the cloud_sql_proxy as a background process on your machine and then you can use mysql or psql to connect to it instantly.
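
i.e. something like this (project, region and instance names are hypothetical; v1 proxy flag syntax):

    cloud_sql_proxy -instances=my-project:us-central1:my-db=tcp:5432 &
    psql "host=127.0.0.1 port=5432 user=postgres dbname=mydb"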


I guess your answer supports the parent's case: the happy path is a charm, but you need to learn a whole lot more to use it effectively. Paraphrasing Mesosphere's tag-line, it's a new OS, one that is targeted at data centres, not small pet-server setups.


I've had good experiences using Rancher (https://rancher.com/) to operate k8s clusters. It provides a nice management interface, which handles common use-cases really well. Grabbing per-container logs or getting a shell into a container becomes trivially easy.

Admittedly, operation via Rancher comes with its own complexities: setting up security groups and IAM roles. These are documented, of course, but as you say, the minimum effort remains non-trivial.


Can't really speak to the cloudsql issue... that's not really kubernetes. Running SQL as a container you can just kubectl port-forward.... but it sounds like a specific issue you ran into?
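
For the record, the port-forward route is just (pod name and credentials hypothetical):

    kubectl port-forward pod/postgres-0 5432:5432 &
    psql -h 127.0.0.1 -p 5432 -U postgres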

I'll grant that there can be headaches that I don't really see anymore because I was deeply immersed in it for a while. But running a linux server by itself was hard till I learned that too.

Performance is a subtler point; I'm not sure I follow what you're trying to say there. Container overhead? At least in my experience GKE seemed responsive, and deployments settled pretty quickly.

Tweaking is usually a strong point of k8s. Just reapply... so I'm probably misunderstanding what you were getting at here.

Kubernetes needs to improve its accessibility to newcomers, but the happy path isn't unheard of: for simple stateless apps it mostly just works. And when it doesn't, I can usually kubectl logs and kubectl delete pod my way to a solution, and there are docs and Stack Overflow to help.

Not perfect but I was surprised it worked as well as it did.


> Can't really speak to the cloudsql issue... that's not really kubernetes. Running SQL as a container you can just kubectl port-forward

Best solution is Cloud SQL Proxy: https://cloud.google.com/sql/docs/postgres/sql-proxy


Where I work we adopted DCOS and it has been nothing but a pleasure to use and we have been updating the cluster quite often. Always very smooth and no pain at all.

We thought about k8s a couple of times but it always looked overcomplicated, with no gain compared to our use of DCOS.


You are right in the large.

That said, once you learn all of these k8s concepts and mechanisms, and probably script nearly all of them, they all become like breathing and don't really take much time.

I frankly am now at the point where it is faster to spin up a small project on k8s than it would be with a cloud provider's PaaS platform.


> and every single time productivity dropped significantly

I'm skipping job postings that require k8s. It just tells me they're not competent and are falling for the hype. For AWS, ECS is way simpler and is a fully featured orchestration tool. I had an interview today where I tried to explain why simple is better and why they don't need k8s. No luck :)


Hi,

It can be rough to manage Kubernetes when you just want to focus on your app. I was having that same problem when building side projects, so I focused on building a platform to deploy Dockerfiles on a managed Kubernetes instance.

https://usekube.co


Don't forget that K8s changes rapidly, so whatever you learned a year ago might be completely useless now. So by all means keep the people dealing with K8s learning, or have fun with a broken production.


That is not true, based on my experience. I started to learn and use Kubernetes about a year ago (we now run our production systems on it) and we are usually on the latest version. I can't name a single thing I learned about Kubernetes that became useless because something changed.


It's less true now, but a lot of older blog posts and books (at the time, even the most recently published books) talked about managing via replica sets instead of deployments. That was a major source of confusion for me when I was getting started.


For context: I was extremely lucky to get to work with Caleb back in 2012-13. He introduced me to Go and my life has never been the same :D

Properly hosting custom apps on the web in a resilient way isn't easy. This being the case, it seems an error to interpret the thoroughness of the article as a signal that the proposed solution or situation is unreasonably complex.

I'll take detailed and exhaustive over sparse and hand-wavy any day!

Kubernetes certainly has a learning curve, just like most worthwhile things in life.


Hosting personal projects being the task at hand, super high availability isn't usually a headline requirement.

That being said, there are certainly existing solutions for doing that cheaply, or even on free tiers, with less effort: Heroku, Google App Engine, etc.


welcome to what it's like to work at Google


borg is easy compared to k8s and even a hosted service like gke.


tl;dr if you're a team of developers sitting around a table shouting at your monitor because K8s isn't working, hire someone to do the job right for you. I think you'll find it's pretty easy and utterly worth every penny invested in it.

> Edit: this does not mean k8s is bad, but rather that we are probably not the right use case for it!

Or it could mean you're not the right people for it?

Do you have a dedicated individual or team that represents infrastructure engineering? Are you a team of developers trying to play the Ops game? More often than not, the people/companies that struggle to get K8s going or to use it are a team of developers whose expertise is code, not infrastructure/automation.

K8s is hard when done from scratch (there's a book and a video course about that by THE K8s guy). It's not hard when you use Kops, Tectonic (discontinued, I believe), EKS, or one of the other managed services. Writing a YAML file and launching it isn't hard. Also, you don't have to understand it inside out; you only have to understand how to use it. Can you tell me how every component in your microwave works, or do you just get on with cooking your food?
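
To put the "YAML file" point in perspective, a minimal manifest for a stateless app is roughly this (the name and image are placeholders, not anything specific to the article):

    # deployment.yaml -- a minimal Deployment running two copies of a container
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-web
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: hello-web
      template:
        metadata:
          labels:
            app: hello-web
        spec:
          containers:
          - name: web
            image: nginx
            ports:
            - containerPort: 80

Launch it with "kubectl apply -f deployment.yaml" and the cluster takes it from there.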

And to say you're not the use case for it - that's highly unlikely. I mean, it's possible, but it's unlikely. Your workloads and software solutions would have to be highly unique and/or specialised, because hey, it's not like Google's setup is simple, right?


But for personal projects, which is the context of TFA, you're not going to hire a k8s guru. As others have already said, as long as your k8s setup is working as expected, everything is fine (the happy path). But if there ever is a problem with it, you're confronted with insane complexity, and said tools/setups on top of k8s won't fix it for you; they even work against you, as you won't have learnt the k8s-fu needed to diagnose what the problem is. And in my experience, problems occur sooner rather than later. For example, just the other week a customer of mine had their IPs exhausted on their Azure cloud. Azure and k8s will just silently fail with timeouts and all kinds of weird errors but won't tell you the root cause, and it took two experts, a fresh setup of the k8s cluster from scratch, and almost a week of downtime to fix it. It doesn't help that you only have limited diagnostics on other people's clouds either.


The context of the comment wasn't a personal project; it was an organisation, so I'm addressing that concern directly.


> Do you have a dedicated individual or team that represents infrastructure engineering?

Great idea! We could call that team 'Operations' and they would do all that stuff so developers don't have to!


source? not finding anything for "the k8s guy" on google other than this post.


"Kubernetes the hard way author": https://github.com/kelseyhightower


thanks, I'm currently reading Kubernetes Up and Running.


If you design your app properly you won't ever need to run SQL queries by hand. This is a bad practice and it is good that k8s discourages such behaviour. I agree with your other points though.


SQL is too low level?! I don't think you understand what is going on with the tools you use.


He's advocating using ORMs rather than writing raw SQL queries.


Okay, now what does Kubernetes have to do with ORMs? Why does it even have a say in what applications we're running on top of it? Am I missing something?

On another tangent, I still use raw SQL queries in most cases over an ORM. That's just me. Maybe I'm a control freak, or I just don't know how to magically use the abstraction of an ORM and still get the most optimised results.


In a complex environment, that will not always work. We use an ORM at work for all the simple CRUD stuff, but when you want something more complex and performant, we end up writing SQL anyway.

(On a DB with millions of daily transactions.)


I am not. I am sorry if that sounded ambiguous. By "by hand" I meant SSHing into a container and pasting a raw query into a SQL client. An ORM will not always help you achieve what you want, so writing queries is alright, but you need to have tests too. You can create a command for your app (a Django command, for example) that performs the query, and you also create tests that prove it is doing what you think it is doing. Then you run this command in the cluster. This way you can replicate it across different environments, etc.


I am happy to write SQL, but you need to write it in such a way that you have tests confirming your query does what you intend. Then run the query as a short-lived application on the cluster. You don't need to SSH in even for low-level stuff. I am sorry if I didn't make that clear.
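
As a rough sketch of what I mean (the image, connection details, and query are just examples; in practice credentials would come from a Secret):

    # run the tested query as a throwaway pod that is cleaned up when it exits
    kubectl run one-off-query --rm -i --restart=Never --image=postgres:12 -- \
      psql "host=my-db user=app dbname=mydb" -c "SELECT count(*) FROM orders;"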


That's a rather naive position. Over time one periodically needs to poke individual instances to see what's going on in them, even if just to check that what the aggregation sees is consistent with what the instance is actually doing, before destroying the instance one has logged into.


You have no idea what you're talking about to make a blanket statement like that.


I think you are confusing the experience of a user of the app (who shouldn't really ever touch SQL) with the experience of a developer (who should know SQL inside out and use it in their day-to-day work).


"If you design your app properly" applies if you're talking about using k8s to deploy an app.

If you're doing individual development, however, it's quite likely that you're also doing development of the database, and need to do ad-hoc data analysis before you include those queries in that app.


Never needed to use SQL for debugging, or just generally checking that what you have in the database is what you expect?


"I have a problem doing X"

"Then don't do it. It only shows you're an idiot anyway."

Not a good answer.



