
:-)


I understand where you're coming from, and ideally we all strive for well-managed Kubernetes environments. However, as DevOps practitioners we often face complexities that lead to stale or orphaned resources: fast deployment cycles, changing application needs, teams reorganizing. Even the public clouds make a lot of money from services that are left running unused, and some companies make a living helping clean that up.

K8s-cleaner serves as a helpful safety net to automate the identification and cleanup of these resources, reducing manual overhead and minimizing human error. It allows teams to focus on strategic tasks instead of day-to-day resource management.
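
To make the problem concrete, here is a rough manual version of the kind of check a cleaner automates. This is purely an illustrative kubectl sketch, not k8s-cleaner's actual configuration; it flags ConfigMaps in a namespace that no pod currently mounts:

    # Illustrative only -- not k8s-cleaner's real config format.
    # Flags ConfigMaps in $NS that no pod mounts as a volume
    # (ConfigMaps referenced via env/envFrom would still be missed).
    NS=default
    used=$(kubectl -n "$NS" get pods \
      -o jsonpath='{range .items[*]}{.spec.volumes[*].configMap.name}{" "}{end}')
    for cm in $(kubectl -n "$NS" get configmaps \
      -o jsonpath='{.items[*].metadata.name}'); do
      case " $used " in
        *" $cm "*) ;;                        # still referenced by a pod
        *) echo "possibly orphaned: $cm" ;;
      esac
    done

A real cleaner layers scheduling, dry runs, and ownership checks on top of scans like this.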


> However, as DevOps practitioners, we often face complexities that lead to stale or orphaned resources due to fast deployment cycles

So, as a DevOps practitioner myself, I had enough say within the organizations I worked at (who are now clients), and with my other clients, that anything not in a dev environment goes through our GitOps pipeline. Outside of that pipeline, there is zero write access to anything that isn't dev.

If we stop using a resource, we remove a line or two (usually just one) in a manifest file, and the GitOps pipeline takes care of the rest.
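
Concretely, the flow looks something like this (repo layout, file, and release names are made up; the pruning step assumes a GitOps controller such as Flux or Argo CD with pruning enabled):

    # Hypothetical example -- names are illustrative.
    git checkout -b retire-legacy-cache
    # Delete the one line that declares the resource, e.g. in
    # environments/staging/releases.yaml:
    #   - name: legacy-cache        <- remove this line
    git commit -am "staging: retire legacy-cache"
    git push origin retire-legacy-cache
    # Once merged, the controller reconciles the environment and,
    # with pruning enabled, deletes the now-unreferenced objects.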

Not a single thing is unaccounted for, even if indirectly.

That said, the DevOps-in-name-only clowns far outnumber actual DevOps people, and there is no doubt a large market for your product.

edited: added clarity


> I had enough say within the organizations I worked at, who are now clients

This sounds like experience that's mainly at small/medium-sized orgs. At large orgs the devops/cloud people are constantly under pressure to install random stuff from random vendors. That pressure comes from every direction, because every department head (infosec/engineering/data science) is trying to spend a huge budget to justify their own salary/headcount and maintain job security; it's harder to fire someone if you're in the middle of a migrate-to-vendor process they championed and you're locked into the vendor contract, etc. People will also seek to undermine every reasonable standard about isolation and break down the walls you design between environments, so that even QA- or QC-type vendors want their claws in prod. Best practice or not, you can't really say no to all of it all the time, or it's perceived as obstructionist.

Thus there's constant churn of junk you don't want and don't need that's "supposed to be available" everywhere, and the list is always changing. So, in the limit, there is crusty unused junk, and we barely know what's running anywhere in our clouds or clusters. Regardless of the state of the art in DevOps, most orgs are going to have clutter, because those orgs are operating in a changing world without a decisive or even consistent vision of what they want/need.


> At large orgs the devops/cloud people are constantly

Two of our clients are large orgs (15,000+ employees and 22,000+ employees). Their tech execs are happy with our work, specifically our software delivery pipeline with guard rails, where we emphasize a "Heroku-like experience".

One of their projects needed HITRUST compliance, and we made it happen in under four weeks (no, we're not geniuses; we stole the smarts from the AWS DoD-compliant architecture and deployment patterns), and the tone of the execs changed pretty significantly after that.

One of these clients laid off more than half their IT staff suddenly this year.

When I was in an individual contributor role at a mid-size org (just under 3,000 employees), I wrote up my thoughts, an "internal whitepaper" or whatever, being fully candid about the absurd struggles we were having (why does instantiating a VM take over three weeks?), and sent it to the CTO (and also the CEO, but the CTO didn't know about that), and some things changed pretty quickly.

But yeah, things suck in large orgs; that's why large orgs are outsourcing, which is in the most-downstream customers' (the American people's) best interests too -- a win-win-win all around.


A tool aimed at organizations that don't have superstar, ultra-competent devops people on the full-time payroll sounds pretty useful in general. There are a lot of companies that just aren't at the scale to justify hiring someone for that kind of role, and even then, it's hard to find really good people who know what they are doing.


> for organizations that don't have superstar, ultra-competent

Just outsource.

Outsource to those who do have DevOps people who know what they're doing -- most companies do this already in one form or another.


How do you realize that you have stopped using a resource? Can there be cases when you're hesitant to remove a resource just yet, because you want a possible rollback to be fast, and then it lingers, forgotten?


With our GitOps patterns, anything you could call an "environment" has a Git branch that reflects "intended state".

So a K8s namespace named test02, another named test-featurebranch, and another named staging-featurebranch each have their own Git branch with whatever Helm charts, OpenTofu configs, etc. they need.

With this pattern, and others, we have a principle: "if it's in the Git branch, we intend for it to be there; if it's not in the Git branch, it can't be there".

We use Helm to apply and remove things -- and we loved version 3 when it came out -- so there's not really any way for anything to linger.
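
As a rough sketch of that pattern (chart, release, and namespace names invented), each environment branch drives its own namespace:

    # Illustrative Helm 3 workflow for one environment branch.
    git checkout staging-featurebranch
    # Apply: create or update the release in its namespace.
    helm upgrade --install myapp ./charts/myapp -n staging-featurebranch
    # Remove: once the entry is dropped from the branch, the
    # pipeline uninstalls the release.
    helm uninstall myapp -n staging-featurebranch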


Or it will lead to more choice. I do feel overwhelmed when I go to an American grocery store and see an entire aisle full of bags of chips, and I ask myself: why? Who needs BBQ flavor, or even kale chips? But somebody apparently does. So even if it now takes more time to choose, choice gives everybody exactly what they want. Anyway, I guess only time can tell, and predictions are difficult to make. Just in the recent past, there were good and bad results from forks, as this article describes: http://www.makeuseof.com/tag/forking-good-great-ugly/


If you are looking at rkt and have the time, take a look at Kurma (open source too). Kurma was built from the same specification rkt was built from. Here is the getting started guide: http://kurma.io/documentation/kurmad-quick-start/ What I like about Kurma is that I can simply run Docker images from the hub; no need to download, convert, etc.

Disclaimer: I am an Apcera employee, and Kurma is an open-source project sponsored by Apcera.


Hi mtanski,

How did you replace Docker with rkt? Do you have a howto you can share?


I haven't replaced Docker with rkt at a big scale (or run Docker at a big scale), but I recently moved some Docker containers over to rkt.

First off, this and the rest of the rkt docs are a good starting point: https://coreos.com/rkt/docs/latest/rkt-vs-other-projects.htm...

Second, rkt runs Docker images without modifications, so you can swap over really easily https://coreos.com/rkt/docs/latest/running-docker-images.htm...
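
For example (per those docs; the image name is arbitrary):

    # Run an image straight from Docker Hub; rkt converts it on the fly.
    # --insecure-options=image is needed because Docker images can't be
    # signature-verified the way ACIs can.
    sudo rkt run --insecure-options=image docker://nginx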

rkt uses acbuild (a tool for the App Container spec; see https://github.com/appc/spec) to build images, and I had a very tiny Docker image just running a single Go process.

I just created a shell script that ran the required acbuild commands to get a similar image.
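
For reference, a script like that is only a handful of lines. A minimal sketch (binary and image names made up):

    #!/bin/sh -e
    # Build a single-binary ACI with acbuild.
    acbuild begin
    acbuild set-name example.com/mygoapp
    acbuild copy ./mygoapp /usr/bin/mygoapp
    acbuild set-exec /usr/bin/mygoapp
    acbuild write mygoapp-1.0.0.aci
    acbuild end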

A good place to get started is the getting started guide https://coreos.com/rkt/docs/latest/getting-started-guide.htm...

Docker runs as a daemon, and rkt doesn't (which is one of its benefits). I just start my rkt container using systemd, so I have a systemd unit file with 'ExecStart=/usr/bin/rkt run myimage:1.23.4', but you can start the containers with whatever you want.
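
A full unit file along those lines (unit and image names invented; I believe the rkt docs recommend KillMode=mixed so systemd signals the pod correctly, but check them for your version):

    # /etc/systemd/system/myapp.service -- illustrative sketch.
    [Unit]
    Description=myapp via rkt
    After=network-online.target

    [Service]
    ExecStart=/usr/bin/rkt run myimage:1.23.4
    KillMode=mixed
    Restart=always

    [Install]
    WantedBy=multi-user.target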

It's also possible to use rkt with Kubernetes, but I have not tried that yet. http://kubernetes.io/docs/getting-started-guides/rkt/
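
(If I remember the rktnetes docs correctly, wiring that up is mainly a kubelet flag, though this may have changed; verify against the page above:)

    # From memory -- confirm against the docs before relying on it.
    kubelet --container-runtime=rkt  # plus the usual kubelet flags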

