Yeah, so I'm a 'hacker' (MIT definition here). I've only taken a single class in JavaScript, on Sun-based workstations, about 20 years ago now (god, I'm old). I hated it.
All my work now is in Python and SQL, and though I've watched a lot of YouTube videos and plunked around on Stack Overflow for ~15 years, I've never had formal education in either language. It takes me about as long to set up the libraries and dependencies in Python as it does to write my code. My formal job titles have never had 'programmer' in them.
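To give a flavor of the setup overhead: a lot of it is just figuring out what's actually installed versus what the code expects. Something like this little sanity check is often where a project starts (the package names here are placeholders, not a real project's list):

```
# Quick check of what's installed before touching the real code.
# Package names are placeholders -- swap in whatever the project needs.
import importlib.metadata

for pkg in ("pyserial", "numpy", "pandas"):
    try:
        print(pkg, importlib.metadata.version(pkg))
    except importlib.metadata.PackageNotFoundError:
        print(pkg, "not installed")
```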
My code, as such, is just there to get something done. Mostly it's hardware-interfacing stuff, with a little pure software too. I've had my code get passed up the chain and incorporated into codebases, but that's only happened a handful of times; the vast majority of what I write is never seen by anyone else. It hasn't had to be maintainable, since I've never had to maintain anything for more than about 5 years. I've used git on projects before, but I don't really see the need these days. The largest program I've written is ~1M lines of code; most of what I write now is about 100 lines. I almost always know what 'working' means, in that I know the output I want to see (again, mostly hardware). I almost never run tests on the code.
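For a sense of scale, a typical script of mine is along these lines. This is a made-up sketch, not code from a real project; the port, baud rate, and query command are placeholders for whatever the actual instrument wants:

```
# Made-up sketch of a typical ~100-line job: poll an instrument over serial,
# log the readings, and eyeball the printout to see if it's "working".
import csv
import time

import serial  # pyserial

PORT = "/dev/ttyUSB0"   # placeholder
BAUD = 9600             # placeholder

def main():
    with serial.Serial(PORT, BAUD, timeout=1) as conn, \
         open("readings.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "raw_reading"])
        for _ in range(100):
            conn.write(b"READ?\n")               # placeholder query command
            line = conn.readline().decode(errors="replace").strip()
            writer.writerow([time.time(), line])
            print(line)                          # "working" = these look sane
            time.sleep(0.5)

if __name__ == "__main__":
    main()
```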
I've had the same issues you've had with LLMs, where they get stuck and I have to restart the process. Usually this happens to me after about 20 back-and-forths. Mostly I'm just pasting relevant code snippets and the errors back into the LLM until things work for me. Again, I know what 'working' looks like from the start.
Typically, I'll start the session by telling the LLM the problem I have, what I want the code skeleton to look like, and what I want the output to look like. It gives me pseudocode, and I walk it through each portion of that pseudocode. Then I get to errors and debugging. Usually about half of this is just libraries and versions in Python; then I get to errors in the code itself. I can typically find which line is causing the error just from the terminal output. I'll talk with the LLM about that line, trying to understand it from the error, then move on to the next error, and repeat until I get the working output I want. I'm never expecting the right code out of the first interaction, but I am expecting (and seeing) that the time it takes to get to working code is shorter.
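Concretely, the 'skeleton' I hand over at the start is usually just stubs plus a comment showing the output I want, something like this (function names and the sample numbers are made up for illustration):

```
# The kind of skeleton I paste into the first prompt: stubs plus the output
# I expect, so the LLM and I agree on the shape before writing real code.
# Function names and the sample output are made up for illustration.

def read_instrument(port: str) -> list[float]:
    """Grab a batch of raw readings from the device."""
    ...

def convert_to_units(raw: list[float]) -> list[float]:
    """Apply whatever calibration/scaling I care about."""
    ...

def summarize(values: list[float]) -> None:
    """Print the one summary I actually want to see."""
    ...

# Desired output, roughly:
#   samples: 100
#   mean:    4.98 V
#   max:     5.02 V
```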
Before LLMs, getting through all of this usually took me about 2 weeks of work (~80 hours) per project. Now it takes about half a day (~4 hours), and it's getting faster.
I'm not in the camp of thinking that AI is going to take my job. I am in the camp of thinking that AI is going to finally let me burn down the list of things that we really need to do around here.
Thank you for the reply!