So this is something for my fellow hackers.
I run a successful tech business, and I also consult for others who want to start their own.
Now, here's a problem I'm seeing with other tech CEOs: they're starting to use AI to write their code for them, either through some sort of editor plugin or through something like Cursor.
The problem is that the generated code is usually humongous. There are huge numbers of types, layers of indirection, and functions calling functions, doing all sorts of nonsense that could have been written by hand much more simply, in far fewer lines of code.
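Just to make that concrete, here is the flavor of thing I mean (a made-up illustration, not code from any client; all the names are invented): an interface, a factory, and a "service" class to do something a single method handles fine.

    // Hypothetical example of the over-abstracted style:
    // an interface, a factory, and a "service" just to trim some strings.
    import java.util.List;
    import java.util.stream.Collectors;

    interface StringTransformer {
        String transform(String input);
    }

    class TrimTransformer implements StringTransformer {
        @Override
        public String transform(String input) {
            return input.trim();
        }
    }

    class TransformerFactory {
        static StringTransformer trimTransformer() {
            return new TrimTransformer();
        }
    }

    class StringCleaningService {
        private final StringTransformer transformer;

        StringCleaningService(StringTransformer transformer) {
            this.transformer = transformer;
        }

        List<String> cleanAll(List<String> inputs) {
            return inputs.stream()
                         .map(transformer::transform)
                         .collect(Collectors.toList());
        }
    }

    // What a person would have written by hand:
    class StringCleaning {
        static List<String> cleanAll(List<String> inputs) {
            return inputs.stream()
                         .map(String::trim)
                         .collect(Collectors.toList());
        }
    }

Both versions do the same thing; one of them is four extra names you have to hold in your head while you're hunting for a bug.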
This is creating huge amounts of AI-generated slop.
Now, when I come in to consult on some of these tech architectures, it takes me a really long time to figure out how to improve things, because there is so much indirection in the code base. The more troubling thing is that there are errors hidden in the architecture which I used to be able to spot easily, but which now require me to go through every single line of code.
In some of the worst cases, there have been bugs that took down their systems, and then they come and blame me for not finding them.
This is the same problem I had with Java shops. A lot of Java programmers immediately start using a lot of classes and objects, because they have had superior tooling and IDEs for a very long time.
My theory is that Java is actually a perfectly reasonable language, but because the tooling and autocomplete are so easy, the huge web of classes and objects almost always keeps growing; you can immediately reach for any of them just by pressing a dot.
Now take that and multiply it by 10x with all of this AI-generated code. The medium shapes the code that ends up being generated or written.
So how are you all handling this problem? Do you find this to be a big problem in the code bases that you see?
It's also hard to tell them not to use AI, because the code does work. Most of the time, I would say, it really does work.
But it's written in the worst possible manner, and maintaining it long term is going to be so much harder than if they had just handwritten the code.
If you're working on a Java project, consider prompting the AI to first write a "pseudocode solution" in a more concise/low boilerplate/"highly expressive" language — Ruby, for example — and then asking it to translate its own "pseudocode" into Java.
(Mind you, I'm not sure if you can modify the implicit system prompt used by automatic coding-assistance systems like Cursor. [Can you? Anyone know?] I'm more just thinking about how you'd do this if you were treating a ChatGPT-like chatbot as if it were StackOverflow — which is personally where I find most of the value in LLM-assisted coding.)
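Concretely, the two-step chat version of that first suggestion might look something like this (just a sketch of the phrasing I'd use against a plain chatbot; nothing magic about the exact wording):

    Prompt 1: "Sketch a solution to <your problem> as short, idiomatic Ruby.
               Prefer plain functions and data structures; no extra classes or layers."

    Prompt 2: "Now translate that Ruby 'pseudocode' into Java, keeping the same
               structure and roughly the same amount of code."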
Alternately, consider prompting the AI to "act like a senior $lang engineer forced to write in Java for this project, who will attempt to retain their existing opinionated coding style they learned from decades of experience in $lang, in the Java code they write" — where $lang is either, once again, a more expressive language; or, more interestingly, a language with a community that skews away from junior engineers (i.e. a language that is rarely anyone's first programming language) and toward high-quality, well-engineered systems code rather than slop CRUD code. For example, Rust, or Erlang/Elixir.
(Funny enough, this is the exact same mental through-line that would lead a company to wanting to hire people with knowledge of these specific languages.)