Hacker News | new | past | comments | ask | show | jobs | submit | aspectrr's comments

I liked the GLM coding plan before they raised their prices; now the rate limits are stricter because they're compute constrained. It's still a good deal at 1/3 the price of Claude for the same quality.

[Lily](https://github.com/aspectrr/lily) A CLI tool that hooks into any coding agent and gives it read-only access to production systems (it wraps ssh, kubectl, awscli, gcloud, and az) so agents can investigate production issues. I built it for myself and my team to save us a lot of time during initial investigations, because I didn't want to babysit agents or just hope that telling them "you're in production" would prevent incidents.

[clue.ssh](https://github.com/aspectrr/clue.ssh) A Clue-style game over SSH themed around the AI wave, where the goal is to figure out who stole the H100. Pretty fun, and coding agents can play too.

[Chasing Losses](https://github.com/aspectrr/chasing_losses) I was interested in whether LLMs chase losses when playing roulette. Still investigating, but I've found that different models bet different amounts at different frequencies even when prompted identically. I'm struggling to balance not guiding them too much against wanting to see how they react under pressure.


I'm surprised sysadmin hires are down; is AI doing a lot of that work as well?

Yeah, that comment is odd.

Sysadmins and DevOps engineers will be the last ones replaced by AI. The context window for their problems is huge.

Unless you define sysadmin and DevOps work as fiddling with YAML all day, which might be the case here.


> Sysadmins and DevOps engineers will be the last ones replaced by AI.

Most setups aren't properly documented, which makes discovery and exploration the major bottleneck. Once that part is facilitated by AI, the sysadmin/DevOps team gets downsized.


Yeah, this isn't even the worst thing I've seen an agent do. One time I (foolishly) ran Claude Code directly on my server, and it managed to bring down my entire Elasticsearch cluster. Never again. It's why I built Lily: https://github.com/aspectrr/lily

Hey HN, I've seen many different ways of letting AI run bash commands on remote hosts, but none that fix both of these issues:

a. safety (read-only)
b. not installing anything on the remote host

So this is my implementation of one that does.

It uses seven layers of verification on the client and reconstructs commands with safe quoting to block unsafe characters and other attack vectors. Check out: https://github.com/aspectrr/lily?tab=security-ov-file
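To be clear, this isn't Lily's exact code, but the safe-quoting step can be sketched in a few lines of Python using the standard library's shlex.quote:

```python
import shlex

def build_remote_command(argv: list[str]) -> str:
    """Reassemble a parsed command, quoting each token so shell
    metacharacters (;, &&, |, $(), backticks) lose their meaning."""
    return " ".join(shlex.quote(arg) for arg in argv)

# An injection attempt is neutralized into a single literal argument:
build_remote_command(["cat", "/var/log/syslog; rm -rf /"])
# → "cat '/var/log/syslog; rm -rf /'"
```

Quoting alone only neutralizes shell metacharacters; the other verification layers (read-only checks, etc.) sit on top of it.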

Looking forward to your thoughts!


Hey, I just finished this: https://github.com/aspectrr/lily. It's a simple way for agents to safely access hosts.

It's a tool I wanted so agents could help with debugging, and I've seen many other attempts that don't have the right security model. They often install a binary on the host, which is a non-starter at pretty much any serious company.



Hi HN,

My name is Collin. I've been working on automating my job and open-sourcing the results. I work as an ELK engineer and don't enjoy it, so I started building this on my own time to find out whether the work could be handled by agents, and it can. The coolest part is the sandboxes with data stubs (Kafka, S3, API) that let an agent model data pipelines in a full feedback loop without touching a real cluster. Because of this, I'm starting an Elasticsearch consultancy made up of me and a swarm of these agents building client projects.
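This isn't the project's actual code, but as a minimal sketch of the data-stub idea: a stub only needs to expose the same read/write surface a real client would (the S3Stub class and its method names here are hypothetical), so the agent's pipeline runs end to end and it can verify its own output.

```python
import json

class S3Stub:
    """In-memory stand-in for an S3 bucket: exposes a get/put surface
    shaped like a real client's, so pipeline code runs unmodified."""
    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put_object(self, key: str, body: bytes) -> None:
        self._objects[key] = body

    def get_object(self, key: str) -> bytes:
        return self._objects[key]

# Feedback loop: the agent transforms stub data, then immediately
# reads back and checks its own output, no live cluster required.
stub = S3Stub()
stub.put_object("raw/events.json",
                json.dumps([{"level": "error"}, {"level": "info"}]).encode())
events = json.loads(stub.get_object("raw/events.json"))
errors = [e for e in events if e["level"] == "error"]
stub.put_object("derived/errors.json", json.dumps(errors).encode())
```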

Let me know if you have any questions!


In the land of infrastructure, servers are sacred. Humans are barely allowed to SSH into them, and LLMs aren't even in the picture. This is for good reason: one misspelled command and production is down. That's the reality I saw working in infrastructure. But I believe the jump that Claude Code gave software engineers will happen for sysadmins, platform engineers, and DevOps people alike. I wanted to let LLMs onto these servers to do my boring debugging work, safely. So that's what I built with Fluid.

A safe, auditable way to let LLMs debug and manage Linux servers: redact secrets, IP addresses, and keys from LLM inputs; apply custom allowlists without completely hindering the LLM's performance; and keep audit logs. And once you're ready, give the LLMs sandboxes of your Linux servers, letting them fix issues all on their own, safely.
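This isn't Fluid's actual implementation; as a rough sketch, the redaction step could be pattern substitution before text reaches the model (the two patterns below are illustrative and far from complete):

```python
import re

# Illustrative patterns only; a real redactor covers many more formats.
PATTERNS = {
    "IP": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "AWS_KEY": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive values with labeled placeholders
    before the text is handed to the LLM."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label}]", text)
    return text

redact("ssh to 10.0.0.5 failed; key AKIAABCDEFGHIJKLMNOP rejected")
# → "ssh to [REDACTED_IP] failed; key [REDACTED_AWS_KEY] rejected"
```

The placeholders stay stable per category, so the LLM can still reason about "the redacted IP" without ever seeing the value.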

Give it a shot and lmk what you think!



