Ask HN: What tools would make your Kubernetes development experience better?
24 points by KickW on May 24, 2023 | 60 comments
My team and I are ideating open-source dev tool ideas and would love some input from the community!

We want to know: what kind of dev tools would make your Kubernetes development experience better? It can be a tool to simplify deployments, streamline cluster management, enhance scalability, or something completely innovative. All ideas are welcome. TIA!




I understand that this is contrary to your question, but I might suggest a different approach to your inquiry.

Instead of asking people for their dev tool ideas, ask them what problems they have with Kubernetes that aren't yet solved well for them. With that information you can iterate on dev tool ideas that could potentially solve those problems. In my experience people understand their problems better than the potential solutions to those problems, and right now you're asking people for their solutions.


In addition to this, I've found that many people will answer the question "what tools would you like?" but if you actually build those tools, most people who answered can't be bothered to try them.


This. A good reference on how to ask these types of product questions is The Mom Test by Rob Fitzpatrick.

https://www.momtestbook.com/


Agreed! I highly recommend The Mom Test for anyone doing product work.


That's a great point! I really appreciate it.


K9s (https://k9scli.io/), a CLI UI for k8s. Saves me a ton of typing when checking k8s pods/logs.


Originally I used it as a quick shim so I wouldn't have to keep typing out every kubectl command individually. Now k9s is a daily tool and everyone on the team uses it. I love it.


One thing I just hit: there doesn't seem to be any way for a Pod to request a TLS certificate signed by the cluster itself (one that would only be valid inside the cluster, of course) that covers the IP(s) and DNS names Kubernetes itself associates with the pod, and to have such certs rotated automatically.

Since the infrastructure already knows (and controls) this information, it would be the ideal place to actually generate a signed attestation that a pod is who it says it is.
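For what it's worth, the closest built-in primitive seems to be the CertificateSigningRequest API, but none of the default signers will issue serving certs for arbitrary pods, so you'd still have to run a custom signer controller yourself. A rough sketch of the request side (the signer name is illustrative):

  apiVersion: certificates.k8s.io/v1
  kind: CertificateSigningRequest
  metadata:
    name: my-pod-serving-cert
  spec:
    # base64-encoded PKCS#10 CSR listing the pod's IPs/DNS names as SANs
    request: <base64-encoded CSR>
    # no built-in signer covers this case; a custom signer would have to approve/sign
    signerName: example.com/pod-serving
    expirationSeconds: 86400
    usages:
      - digital signature
      - key encipherment
      - server auth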


We've been using Linkerd as our service mesh, which does this with very little effort required on our part. They bind the certs to the ServiceAccount identity of each pod, which is apparently more secure than doing it via IP.

https://linkerd.io/2.13/features/automatic-mtls/


Unfortunately this would probably not work for my purposes. The issue on my side is that I want to enable MinIO to use server-side encryption; however, MinIO doesn't support SSE unless TLS is enabled inside MinIO itself. So it's ultimately MinIO that needs these certificates; it doesn't help that the Linkerd proxy is encrypting the traffic if MinIO doesn't know about it.

There may be a way to tell MinIO to use the proxy certificates, though, since those may be somewhere in the pod FS if I understand the docs correctly. I can try to investigate that. Thanks for the link!


Couldn't you do this with cert-manager, or maybe one of the service meshes?


Reading through their docs, it seems cert-manager should support this, but unfortunately it wasn't available in our cluster. I'll see if there is any reason we couldn't add it.
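For reference, the shape of what we'd be asking cert-manager for is roughly this; a minimal sketch, assuming a cluster-internal CA ClusterIssuer (all names are illustrative):

  apiVersion: cert-manager.io/v1
  kind: Certificate
  metadata:
    name: minio-tls
    namespace: minio
  spec:
    secretName: minio-tls            # cert-manager writes the keypair here; mount it into MinIO
    dnsNames:
      - minio.minio.svc.cluster.local
    issuerRef:
      name: internal-ca              # illustrative: a CA-backed ClusterIssuer
      kind: ClusterIssuer
    duration: 2160h
    renewBefore: 360h                # renewal happens automatically

cert-manager would then keep the Secret renewed, which covers the rotation half of the problem too.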


I have yet to find a good solution that accomplishes both these goals.

  1. Have a central location for our helm charts so that we have one copy of our charts with separate values for our various environments.
  2. Have tight controls around who is allowed to push what where (allow devs to push to the dev environment, allow team leads to push to QA, etc.)
Separately, each goal is easy to accomplish, but if you want both, it seems to break the GitOps paradigm. Multiple branches or repos mean you now have helm templates all over the place, and you quickly get drift. Having one branch and repo consolidates your templates, but now you either need to allow any dev to change any value for any environment, or you have to slow down your devs and gatekeep the repo, so that only those who are allowed to make prod changes are responsible for merging in ALL changes to every environment.

Ultimately it seems like the best compromise is you end up with multiple deployments of Argo/Rancher (whatever CD tool you have), which monitor your helm charts pointing at least two separate repos, one that is for non-prod, and another that is prod.


To avoid drift, I would keep the charts in one repo, but separate the values files in different repos per environment.

IIRC, Argo CD now allows you to pull your values.yaml from a different repo than the one you're using for the chart. So you create a very restricted repo for the charts, and then one repo for every environment that only (or mostly) contains a values.yaml.

You then deploy X applications or application sets, all pointing to the same chart but pulling the values from different repos.
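Concretely, with Argo CD's multi-source Applications that looks something like this (repo URLs and paths are illustrative):

  apiVersion: argoproj.io/v1alpha1
  kind: Application
  metadata:
    name: my-app-prod
    namespace: argocd
  spec:
    project: default
    sources:
      - repoURL: https://git.example.com/platform/charts.git   # tightly controlled chart repo
        path: charts/my-app
        targetRevision: main
        helm:
          valueFiles:
            - $values/my-app/values.yaml
      - repoURL: https://git.example.com/envs/prod-values.git  # per-environment values repo
        targetRevision: main
        ref: values                                            # referenced above as $values
    destination:
      server: https://kubernetes.default.svc
      namespace: my-app

Repo permissions then give you the push controls: devs get write access to the dev values repo, leads to QA, and so on, while the chart repo stays locked down.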


YAML hell.

It's very difficult to have standard, company (or org) wide "templates" for creating resources. And I don't mean literal templates.

In my ideal world, everything would be defined within Starlark. And we could build our own abstractions. Do we want every resource to be associated with a team or cost-center label? Cool - this is now a required argument for the make_service() function.

Even with Kustomize, there's a lot of context switching needed to fully understand what resources a simple service is composed of. And depending on the team, you don't need to know all of them. But because they are files crafted by hand — or at best, initially generated by a common template and modified by hand later — it's extra stuff to pay attention to.



My advice from personal experience in the ideation phase is to first build out an ideal customer profile, then solicit advice from those people.

The tools I need as a jr dev with no Docker experience vs. a DevOps admin are going to be vastly different.

As a software engineer who has tried to build a business around my own app as a solo dev, using Docker/Helm/Kubernetes to host on DigitalOcean, I found all the tooling super complex and time-consuming to learn.

Frankly my "job to be done" was to publish an app to a remote server and if it crashed automatically restart. Hiring someone to set this up didn't help much because once they set it up, I needed to run it on my own and it was a lot to pick up in addition to the other 4-5 technologies I wanted to utilize.

I honestly don't care about serverless and I don't want to learn yet another technology; I just wanted to publish my damn app, focus on new features, and try to make some money. This was easier with a VM, but then you have to share the VM with another app that might not play nice and take the whole server down. Then the site is down, and you won't know about it, or can't restart it, until you get off work.

If you could build a tool that lets me take an app and push it to a server with HTTPS enabled, without being an expert in Kubernetes or Helm or whatever, that'd be awesome.


Thank you for taking the time to provide valuable feedback. I really appreciate your insights!


Something to go into the cluster, find things that are bad/broken/wrong, give me a button that says "click here to fix this", and allow rolling back the change. Something to put in policies, roles, etc. as guardrails for common problems, like not allowing a user to create an external load-balancer. Easier RBAC. Easier network policies. I would have said an operator for external secrets, but that exists now (https://external-secrets.io/v0.8.2/introduction/overview/).
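For the guardrails part, a policy engine like Kyverno or OPA Gatekeeper can express some of this already; a sketch of the "no external load-balancer" rule (names and message are illustrative):

  apiVersion: kyverno.io/v1
  kind: ClusterPolicy
  metadata:
    name: block-loadbalancer-services
  spec:
    validationFailureAction: Enforce
    rules:
      - name: no-loadbalancer-services
        match:
          any:
            - resources:
                kinds:
                  - Service
        validate:
          message: "Services of type LoadBalancer are not allowed."
          pattern:
            spec:
              type: "!LoadBalancer"  # '!' negates: any type except LoadBalancer passes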

Most of what people need in k8s-land is not one small tool but a complex, intelligent solution that (today) requires a well-trained human to do analysis and then figure out some action to take: the automation that should be there by default but isn't. So you could start by asking people about problematic or time-consuming situations they face in k8s, and automate those.


Something that would facilitate load testing of microservice containers in order to optimize their resource requests and limits.
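The closest existing piece I know of is the Vertical Pod Autoscaler in recommendation-only mode: run your load test, then read the suggested requests from the VPA's status. A minimal sketch (names are illustrative):

  apiVersion: autoscaling.k8s.io/v1
  kind: VerticalPodAutoscaler
  metadata:
    name: my-service
  spec:
    targetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: my-service
    updatePolicy:
      updateMode: "Off"  # compute recommendations only; never evict or resize pods

`kubectl describe vpa my-service` then shows recommended requests based on observed usage, though it doesn't drive the load test itself.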


I am a k8s newb, so maybe this is a solved problem or not a k8s problem, but I wish there were better tools for hosting ML models in stg and prd.

Prd has high traffic during the day and low traffic at night. Pods must be manually right-sized based on the model and expected traffic. Each model gets its own pod: 20 models = 20 pods = 20 CPU and RAM configurations per env.

Stg has low traffic (50 queries per day), but hosting a container per pod is expensive: stg hosting runs ~20% of the production cost (if prd costs $10k/mo, staging costs $2k/mo) to serve 1% of the traffic.

I think this can be fixed at the application layer with two very different configurations per env, but it would be nice if this were abstracted away from us so we could focus on tuning the models, not configuring proxies and hosting environments.
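For the stg side specifically, scale-to-zero via something like KEDA could cut the idle cost today; a sketch, assuming request-rate metrics are already in Prometheus (all names are illustrative):

  apiVersion: keda.sh/v1alpha1
  kind: ScaledObject
  metadata:
    name: model-a
  spec:
    scaleTargetRef:
      name: model-a          # the model's Deployment
    minReplicaCount: 0       # scale to zero between the ~50 daily queries
    maxReplicaCount: 5
    triggers:
      - type: prometheus
        metadata:
          serverAddress: http://prometheus.monitoring:9090
          query: sum(rate(http_requests_total{app="model-a"}[2m]))
          threshold: "5"

It's still per-model configuration, though, which is exactly the kind of thing I'd want abstracted away.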


- A tool to create a Kubernetes YAML from a "docker run" command :-)

- A Kubernetes YAML explainer playground where you can paste in a YAML file and it explains what the YAML does.

- Diagram generation from Kubernetes namespace(s)

I am interested in devops tooling. The space is incredibly complicated and I find other people's workflows and tooling to be confusing.

I wrote a tool that configures terraform/chef/ansible/packer/shellscript and other tools via graph/diagram files: https://devops-pipeline.com/ It's not ready for use by other people, but the idea is there.

If you could make configuring Traefik, Istio, service meshes, sidecars, and storage easier, that would be amazing. I am inclined to run Postgres outside of Kubernetes.


> create a Kubernetes YAML from a "docker run" command

https://docs.podman.io/en/latest/markdown/podman-generate.1....

> kubernetes YAML explainer playground

GPT-4 can probably do this just the way you describe. The other thing that came to mind is yaml-language-server.

> Diagram generation from Kubernetes namespace(s)

https://github.com/mkimuram/k8sviz


Have you looked at https://kompose.io?

I have used it to get 80/20 of the work done when converting from docker-compose to k8s.


- Actually getting Cloud Foundry for k8s (korifi) production-ready.

I think one of the big pitfalls for teams using Kubernetes is the level of abstraction you are working with. For teams that have expertise, working with Kubernetes directly can work well … but if you are asking about developer experience, Kubernetes is the wrong level of abstraction. You really need an application platform built on top of Kubernetes — Cloud Foundry is exactly that, but it has taken them a while to build something using native k8s resources.

Using native k8s resources is important. You can make an opinionated workflow, but if you know what you are doing, you can still take it apart and reconstruct it into something specific for the team that will be using it.


An open-source version of Lens (https://github.com/MuhammedKalkan/OpenLens) would certainly help.


I've been learning k8s these past weeks, and the best thing so far has been the GUI elements coming from Docker Desktop. It might not make much sense for experts, and at the end of the day you do have to use the terminal and get comfortable with it.

However, looking at the pods and checking their env vars has helped me a bit.


Having a tool that'd make it easy to run the app locally for development while keeping roughly the same files used in production. Docker Compose got this mostly right; compare Compose with the complexity of running a service locally in a micro Kubernetes cluster.


It would be very useful IMO to be able to easily move a service outside of Kubernetes.

The goal is to move the "fast loop" development outside of any container, onto the native system.

If I am not mistaken, all of the tools that help with this either rebuild a container or rebuild inside a container, right?

Given a Kubernetes application with service names, port forwards, etc., I would like an option to automatically convert these configurations into `ExternalName` services, external port forwards, etc. This would be transparent to all services, inside and outside Kubernetes.

I think it can be done manually today. My colleagues at my last job wrote a helper script; credit to them for the idea. I think it could be built into `kubectl`. i.e. `kubectl apply --externalize serviceA`
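The manual version is roughly: replace the in-cluster Service for the component you're hacking on with an `ExternalName` pointing at your machine, so everything else keeps resolving the same name (hostname is illustrative):

  apiVersion: v1
  kind: Service
  metadata:
    name: serviceA
  spec:
    type: ExternalName
    externalName: my-workstation.dev.example.com  # where serviceA now runs natively

Combined with `kubectl port-forward` for the dependencies the local process needs, that covers most of the flow such an `--externalize` helper would automate.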


Minikube


Minikube is just one piece, but it's not functionally equivalent to `docker compose run`, since I'd still have to build and push Docker images and edit k8s YAMLs manually (compose run will build containers and start them, no need to edit anything).


tilt.dev


Looks okay, but I was looking for something that wouldn't require learning yet another config language and would instead take the k8s YAMLs that I already have (or require minimal modifications to them).


Check out https://www.signadot.com/. Full disclosure: I'm the founder, but it could help with what you're looking for.


> Define Sandboxes using a simple YAML file, specifying customizations relative to the baseline environment. Maintain these YAML files in your git repository and standardize Sandboxes across your organization.

It looks like these yaml files are not k8s files that I already have?

Also, is it open source? (I couldn't find a link to source on mobile)


Yes, these are our (thin) YAML files via which you describe the Sandboxes in terms of deltas from the baseline env. The K8s yaml files remain as the source of truth for your standard deployments. The operator is not open source. Some components like the CLI and Resource plugins are. We do have a free tier, however.


Okay, thanks for the explanation!


Check out Skaffold and Kustomize as well.
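Skaffold in particular consumes the manifests you already have; a minimal skaffold.yaml sketch (image name and paths are illustrative):

  apiVersion: skaffold/v2beta29
  kind: Config
  build:
    artifacts:
      - image: my-app        # the image referenced in the existing manifests
  deploy:
    kubectl:
      manifests:
        - k8s/*.yaml         # your existing, unmodified k8s YAMLs

`skaffold dev` then rebuilds on source changes, swaps the fresh tag into the manifests, and redeploys, which addresses the build-push-edit loop complained about above.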


Should have [Ask HN] in the title, otherwise it looks like it's a link to an article.


Better templating. For example, https://github.com/emcfarlane/kubestar uses Starlark for Kubernetes config.


The firm I work at developed an internal library (open source soon, hopefully!) that does all our templating in Python. Example:

  import os

  """
  Imports redacted, but this imports our library
  """

  NAME = "my_app"
  PORT = 5000
  DEPLOY_ENV = os.environ["DEPLOY_ENV"]

  deploy_all(
      create_stateful_service(
          NAME, http_port=PORT, cpu=2000, memory=500
      ),
      create_ingress(
          NAME,
          service_name=NAME,
          service_port=PORT,
          hostnames=["myapp.internal.com"],
      ),
  )

In the background, this converts a bunch of objects to YAML, leveraging the fact that objects in Python are essentially dictionaries. It then takes these objects and sends them to the k8s API. It has really improved our DevEx, since you can do complex tasks like "define a deployment, deploy it, and then check server status" because it is all just Python at the end of the day.

Edit: formatting


I wrote something similar using Ruby a while back. https://github.com/matsuri-rb/matsuri

I even have a Helm plugin that lets you specify all the command-line options and values so that they can be tracked in git and applied consistently.

There are also tools for transforms on raw manifests so you can track changes (in code) from upstream manifests.


I was tinkering with a similar approach, using Python objects to generate the YAML. I'm not working on it right now, but I'd love to see other tools use this approach.


Nice, I've seen similar setups with Go before. Maybe that's the better way to do it: use a proper language for config instead of YAML.


It should have been like that from the very beginning; any and all configuration management systems that chose YAML as a base instead of a DSL (even if the DSL is just a subset of Python or Lua) have the same problems.


https://github.com/kapicorp/kapitan is also a very powerful option for managing and generating templates.


What do you use templating for that made you not want to use Helm? Granted, it's buggy as hell, but the fact that it automates so much operational process means it saves way more time than templating alone, in my experience.


Using Helm made me not want to use Helm! Joking aside, Helm is what I would use, but I still think there is a better way to do templating. This tool scratches that itch.


We use CUE directly to generate yaml resources

Have our eyes on Timoni

https://github.com/stefanprodan/timoni


CUE is really interesting in comparison with protobufs with CEL. Thanks for linking the project.


You're welcome. I also maintain https://cuetorials.com


I'd love for deployment to consist of copying one file to one location, with as little time as possible spent learning or understanding anything else about the tool.


I often see people ask for things that, it turns out, someone has already made.

Maybe an Awesome Kubernetes list is what we need.


Kubernetes that automatically toasts a bagel and makes a cup of tea during compilation. Sets slack status to away just before a coworker messages me. Swipes tinder matches based on biometric data on my behalf. Sky's the limit!


If kubernetes won't, there's always a CRD that will


That's the thing. When I think of Kubernetes, I think of the whole ecosystem, and not just the bits released by the Cloud Native Computing Foundation.


The fact that you need a container registry drastically increases the barrier to initial adoption.


I see the downvotes. I can't stress enough how important this has been when talking to people trying to test the waters with k8s. microk8s goes a long way toward making this as simple as possible, but having a super-simple, out-of-the-box, secure (!) registry solution would simplify things a lot.


The negative experience comes entirely from bugs in tools, not from a lack of features.


rm -rf /

Best way to deal with Kubernetes



