As fellow k8s enthusiasts who've spent too many hours in debugging hell, my co-founder and I built something we wish we'd had: Sentinal.
The problem: Debugging distributed systems is time-consuming. You have to manually inspect countless Kubernetes resources and piece everything together.
Our solution: an AI agent that:
- Deploys with a simple Helm chart
- Explores your cluster only when you ask questions; it emits no data when you're not using it
- Runs relevant kubectl commands for you (with your approval), under default read-only RBAC policies
- Actually understands your system's context, using only the GitHub repositories you allow it to view
- Interrogates its own assumptions with read-only commands, such as HTTP requests to your services, and points you in the right direction with specific insights
- Exposes a simple web interface that shows what the agent is thinking, what it wants to do next, and its reasoning for each proposed step
- Requires human approval for every command; we don't think agents should do whatever they 'think' is applicable
- Stores no information about your cluster; it only queries what it considers relevant
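To make "default read-only RBAC" concrete, here is a minimal sketch of the kind of ClusterRole the chart could apply. The name, API groups, and resource list are illustrative assumptions on our part, not the shipped policy; the key point is that the agent's service account only ever gets get/list/watch verbs, never create, update, or delete.

```yaml
# Hypothetical example of a read-only policy; the actual chart's
# resource list and naming may differ.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: sentinal-readonly   # illustrative name
rules:
  - apiGroups: ["", "apps", "batch"]
    resources:
      - pods
      - pods/log
      - services
      - endpoints
      - events
      - deployments
      - replicasets
      - statefulsets
      - jobs
    verbs: ["get", "list", "watch"]   # no write verbs granted
```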
It's non-intrusive: it deploys nothing to your cluster beyond its own pod. It just helps you make sense of what's already there and how things talk to each other. It sends no unwanted data to our servers, and it models the entire workflow on how a software engineer debugs issues: querying the relevant entities and validating assumptions.
We are both engineers who have worked in the software industry for nearly a decade, and we haven't found a tool that genuinely helps uncover issues that arise when systems interact (e.g. my service is running and reports a healthy status, but my database isn't being populated with orders). This isn't about showing you what you already know; it's about saving you time and frustration by trying different approaches to debug hard-to-solve errors.
If you're interested, drop a comment below. We'd love to start a discussion about debugging pain points and how we can make this more useful for all of us.
We hope to open an honest discussion about how LLMs should enrich the developer experience rather than be a nuisance. We welcome comments, good or bad, about this application of LLMs to Kubernetes.
We know that AI is a hot topic right now and is not always what we hope it will be, but we strongly believe there is a lot of value in it when applied correctly. It may not solve everything, but if it helps you deliver projects faster and maintain what you've built with fewer sleepless nights and less frustration, we consider that a win.