Sys Admins and AI
17 points by flaflapoms 24 days ago | 12 comments
Hi, I am interested in how system administrators use generative AI in their day-to-day work. Do you generate code? Do you use it for debugging? For rephrasing emails or writing descriptions for your Jira tickets or Git commits? Do you use it instead of Google? Which tools do you use?



I don't. I'm paid to know my job and if I don't I read a book, or the man page for that matter.

You write configuration to learn the application. See what makes it work, and what makes it fail.

You press random buttons and then rebuild from scratch when it breaks, to learn from your mistakes.


Surely any senior sys admin or devops type will have accumulated such a breadth of knowledge across so many topics that they cannot keep everything in their head to be retrieved at any moment - if you are one of those, congrats.

This seems unnecessarily Luddite, though; AI can definitely speed up some of this (such as writing boilerplate configs or debugging dumb issues like mis-indented YAML).


I personally keep cheat sheets in Excel, with a tab for each system. I track which sysctls have been configured, the parameters used for daemons, network stats, I/O, and I pretty much carry it with me.

And you're not wrong; I am happy to embrace AI. I'm not denying its purpose as a future tech, much like cloud hosting was for services, but if errors are what you're using AI/ML for, the fixes aren't going to stop you from making them in the first place.

I enjoy problem solving, and boilerplate lets me waste the mornings sipping coffee and lazing around until the afternoon.

AI/ML to me is more of a teacher: if there is a specific bit of syntax, a procedure, or a function which I don't fully understand from the documentation, I can have it explain it to me with analogies.


Small-scale SysAdmin here: a couple hundred VMs on ~15 physical machines plus AWS, Linux, 30+ years under my belt. I'm using the AI tools a lot; I've tried many of them, but my go-tos are Perplexity and ChatGPT. Here's a sampling of what I use them for:

I use Perplexity for ~90% of what I used to hit Google for. Perplexity will answer your question, but it also gives you links to pages for further reference, which makes it useful for seeing additional context. Questions like "How do I import indexes from an old Elastic cluster into a new one?"

I use it a lot as a documentation replacement: we collaborated on a Logstash regex using non-capturing look-behind (not something I keep at my fingertips), "how do I use ImageMagick to scale an image to 1024x768", "how do I use argparse to do X". Where before I'd end up reading through documentation, the genAI stuff can basically write me a custom "Stack Overflow-like" answer for my question, but honestly the AIs tend to hallucinate less than Stack Overflow does.
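For a concrete (and entirely hypothetical) illustration, the kind of answer I mean usually looks something like this: a small argparse wrapper that shells out to ImageMagick's "convert" to force an image to 1024x768. This is just a sketch, not anyone's real tool; it assumes "convert" is on your PATH (newer ImageMagick installs may want "magick" instead):

    #!/usr/bin/env python3
    """Hypothetical sketch of the sort of 'custom Stack Overflow answer'
    a genAI hands back: an argparse CLI that scales an image with ImageMagick."""
    import argparse
    import subprocess

    def main() -> None:
        parser = argparse.ArgumentParser(description="Scale an image with ImageMagick")
        parser.add_argument("src", help="input image")
        parser.add_argument("dst", help="output image")
        parser.add_argument("--size", default="1024x768", help="target geometry (WxH)")
        args = parser.parse_args()
        # Trailing "!" forces the exact geometry instead of preserving aspect ratio.
        subprocess.run(["convert", args.src, "-resize", args.size + "!", args.dst], check=True)

    if __name__ == "__main__":
        main()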

I have passable luck having it explain error messages to me. That's maybe 50/50.

As a primarily Linux guy working a little with Windows, it's a huge help, particularly with PowerShell automation. I barely know any PowerShell, and I use Windows little enough that it doesn't tend to "stick" and isn't worth doing a deep dive into, so usually I can collaborate with the AIs to get some scripting done.

I don't do a lot of programming on a daily basis, but when I do, I use AIs a lot. I've had some success with it spitting out code; for example, I wanted a small CLI tool like "sleep", but one that sleeps until 8:30pm and shows a countdown TUI, and ChatGPT spat out the code for it using the "rich" TUI library from a couple of sentences, and it was basically correct.
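Something along these lines would do it (a sketch of the idea only, not the code ChatGPT actually produced; details like the time parsing are my own):

    #!/usr/bin/env python3
    """Sketch: sleep until a target time (e.g. 20:30) while showing a
    live countdown using the "rich" library."""
    import argparse
    import datetime
    import time
    from rich.live import Live
    from rich.text import Text

    def next_occurrence(hh: int, mm: int) -> datetime.datetime:
        """Return the next datetime at hh:mm, today or tomorrow."""
        now = datetime.datetime.now()
        target = now.replace(hour=hh, minute=mm, second=0, microsecond=0)
        if target <= now:
            target += datetime.timedelta(days=1)
        return target

    def main() -> None:
        parser = argparse.ArgumentParser(description="Sleep until HH:MM with a countdown")
        parser.add_argument("when", help="target time, e.g. 20:30")
        args = parser.parse_args()
        hh, mm = (int(x) for x in args.when.split(":"))
        target = next_occurrence(hh, mm)
        with Live(refresh_per_second=1) as live:
            while datetime.datetime.now() < target:
                remaining = target - datetime.datetime.now()
                live.update(Text(f"Sleeping until {target:%H:%M} -- {str(remaining).split('.')[0]} remaining"))
                time.sleep(1)

    if __name__ == "__main__":
        main()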

When I am programming, I'm using the Cursor IDE a fair bit. This is mostly Python, which seems to be a sweet spot for the genAIs; Python is a first-class citizen with them.

edit: Forgot to mention, I built a git+Jira integration TUI using ChatGPT, and it in turn uses ChatGPT to help craft my commit messages, *but* I haven't really found that to be as useful as I was hoping it would be. https://github.com/linsomniac/lazrgit
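The commit-message piece boils down to roughly this pattern (a sketch of the idea only, not the actual lazrgit code; the model name and prompt are placeholders):

    """Sketch: draft a commit message from the staged diff via the OpenAI API.
    Not the actual lazrgit implementation; model and prompt are placeholders."""
    import subprocess
    from openai import OpenAI

    def draft_commit_message() -> str:
        diff = subprocess.run(
            ["git", "diff", "--cached"], capture_output=True, text=True, check=True
        ).stdout
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": "Write a concise git commit message for this diff."},
                {"role": "user", "content": diff},
            ],
        )
        return resp.choices[0].message.content.strip()

    if __name__ == "__main__":
        print(draft_commit_message())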


I've tried to use some self-hosted stuff and ChatGPT as a detached copilot.

I want to avoid reliance on the things; they can be useful to get a second 'opinion'... or lies.

Trying to get consistent information out of an AI for oddly named/poorly documented software can be an art.

I don't find a whole lot of benefit... but I totally recognize that is at least partly due to keeping it distant from my workflows

I'm generally well taken care of by conventional manuals or local references like "ansible-doc". My editor deals with formatting.

I don't see a lot of room for generative stuff with SysAd. Let's be honest, it's generally glorified janitorial stuff.

Train it on my tickets and maybe it'll catch up... maybe it'll take some creative liberties with your core infrastructure. For what little there is to gain, there's a lot to lose

Besides, we already have stuff like SIEM; the machines have been involved.

It has helped me with boilerplate code, but Ansible is barely above boilerplate as-is; I can type it. The words representing what I want are hardly any simpler.

I'd be silly to say no benefit could be had, but I'll let people sort that out before I get excited


All of the above. Yes, it's largely replacing Google. It's been a force multiplier. It still requires vigilance on my part, and not blind trust.


It will write a lot of boilerplate stuff for me: "write me a function to do (simple thing I can't remember) in Python 3" types of things. I use enterprise Bing that's integrated with Copilot the way I used to use Google. Occasionally, I'll run into a strange issue and present the symptoms to Copilot and ask it to come up with a list of potential causes for me, and occasionally it'll spark an idea. Usually these types of questions get back a list of "sanity" checks that are useful no matter what, in case you forgot one.

It's more of a companion to bounce ideas off of and do menial work than something that actually does thinking for me.


In areas where there's a decent amount of separation between the developers that made some application and the people responsible for running it, I can see a lot of use for AI. Especially for initial triage.

For larger-scale apps with lots of moving parts (i.e. microservices), simply speeding up attribution of issues before getting to actual diagnosis could be a major optimization.


On my local system, I use https://github.com/KillianLucas/open-interpreter/ when I don't want to do things by hand. I just ask it to do things, like go deal with my vim config or fix my Python, and it'll go off and do whatever. It's not for anybody who doesn't trust LLMs not to errantly generate an rm -rf, though. (There's a confirm step, but then alert fatigue kicks in, so you disable it...)
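The underlying loop is simple enough to sketch. This is not open-interpreter's actual code, just the general ask/confirm/run pattern (model name and prompt are placeholders), and the confirm gate is exactly the step alert fatigue tempts you to turn off:

    """Sketch of the ask/confirm/run pattern behind tools like open-interpreter.
    NOT open-interpreter's code; model and prompt are placeholders."""
    import subprocess
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def run_task(task: str, auto_run: bool = False) -> None:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": "Reply with a single shell command, nothing else."},
                {"role": "user", "content": task},
            ],
        )
        command = resp.choices[0].message.content.strip()
        print("Proposed:", command)
        # The confirm step -- the one alert fatigue eventually makes you disable.
        if auto_run or input("Run it? [y/N] ").lower() == "y":
            subprocess.run(command, shell=True)

    if __name__ == "__main__":
        run_task("show the 5 largest files in my home directory")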


Sometimes I write instructions using it; they become more structured. I hate writing instructions.


You ask ChatGPT, for example, something like this:

show me config for Bareos server with bareos-dir and bareos-fd roles on FreeBSD system and bareos-sd on Linux (as this will be needed to use storage backend to put/get backups from S3 buckets)

... and after it replies, you tailor the answer to your needs, for example:

- show me exact configs for everything

- we will use latest FreeBSD 14-STABLE with PkgBase instead

- we will use free and open source Bareos repo instead of paid one

Etc. Just one of thousands of examples.


Start by telling us: what do you use it for?



