> Every VP thinks their idea for a feature will revolutionize the company.
Now imagine that every one of them is given a tool that could get them a POC quickly. I think a lot of VPs are about to figure out that their ideas are shit.
I long for the day when someone can give advice based on their own personal experience without someone else being like “well that won’t work for literally everyone”
What's the _point_ of the anecdote, though? You're taking up everybody's time to tell a story; do us the favor of having a relevant point.
"Have no fear" doesn't apply to the article, at all. You might as well write "what I learned was to not stick legos up my nostril". Also good advice. Also not applicable.
It's fine if it doesn't work for everyone, it's annoying if it isn't relevant to anyone.
It's obnoxious behavior. For example, I decided when I was young to live in my car and be homeless. I saved a bunch of money, and I've been frugal most of my life. I was also super focused on my work and climbed the ladder, making real money.
I believe most people have neither the discipline to endure having less nor the discipline to really listen to what power asks of them. There is a lot of great advice for people who want to do well in a job, but they just... don't apply it.
I was watching a trial the other day and the prosecutor asks, "And did you often see your nephews at your mother's house when you video called her?", and the defendant, a dentist, says, "Yep, watching TV, brushing their teeth. [5-second silence] Don't forget to brush your teeth. Really important." The prosecutor smiles, laughs, and says, "A little dull humor never hurt, eh?"
I'm not sure your average adult would find "don't be afraid" to be "advice" at all, let alone deeply meaningful advice that only a cynic would think was anything less than excellent.
You're talking about specifically using genetic programming to create new programs, as opposed to gradient descent in LLMs to minimize a loss function, right?
How would you construct a genetic algorithm to produce natural language like LLMs do?
Forgive me if I'm misunderstanding, but in programming we have "tokens", which are minimal meaningful bits of code.
For natural languages it's harder. "Words" are not super meaningful on their own, I don't think (at least not as much as a token), so how would you break down natural language for a genetic algorithm?
> how would you break down natural language for a genetic algorithm?
The entire point is that you do not bother trying. From an information theory and computational perspective, raw UTF-8 bytes can work just as well as "tokens".
The program that is being evolved is expected to develop whatever strategy is best suited to providing the desired input/output transformation. Back to the bitter lesson on this one.
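To make that concrete, here's a toy (1+1)-style evolutionary loop in TypeScript that works directly on raw bytes, with no tokenizer anywhere. Everything in it (the target string, the byte-distance fitness, the single-byte mutation) is an illustrative choice of mine, and in real genetic programming the genome would be a program rather than its output, but the byte-level treatment is the same:

```typescript
// Toy sketch: evolve a raw byte string toward a target, no tokenization.
const target = new TextEncoder().encode("hello, world");

// Fitness: summed absolute byte distance to the target (lower is better).
function fitness(candidate: Uint8Array): number {
  let score = 0;
  for (let i = 0; i < target.length; i++) {
    score += Math.abs(candidate[i] - target[i]);
  }
  return score;
}

// Mutation: copy the parent and randomize one byte.
function mutate(parent: Uint8Array): Uint8Array {
  const child = Uint8Array.from(parent);
  const i = Math.floor(Math.random() * child.length);
  child[i] = Math.floor(Math.random() * 256);
  return child;
}

// (1+1) selection: keep the child whenever it is no worse than the parent.
let best = new Uint8Array(target.length); // start from all-zero bytes
for (let gen = 0; gen < 1_000_000 && fitness(best) > 0; gen++) {
  const child = mutate(best);
  if (fitness(child) <= fitness(best)) best = child;
}
console.log(new TextDecoder().decode(best)); // converges to "hello, world"
```

The point isn't this toy; it's that the fitness function only ever sees bytes in and bytes out, so no tokenization scheme has to be designed up front.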
One might say blindly following hype is silly and cope too.
I’ve seen no indication that relying entirely on AI can produce quality software.
It can produce prototype-quality code, just as it has since GPT-3.5. The advantages of one technology over another are never considered. Security concerns are often missed. And, from what I’ve seen, the codebases are bloated.
For your average CRUD app, much of that doesn’t matter. It starts mattering when you have real business constraints, like server budgets or data compliance. If you don’t see that, then you don’t have enough real-world experience yet. That’s all.
Remember how crypto was going to change everything? Or the metaverse?
We live in a period of extreme technological hype backed by insane company valuations.
Don’t get too fooled by the market.
These tools are useful. They are here to stay. And they do not replace the entire field of programming nor the work that programmers do.
Either you're not relying as much on the AI as you think you are, or you're not really sure what "production quality" means.
It seems like you should know, so I'm going to bet that you're not entirely letting the AI drive.
Having the AI draft some code which you refine is a fine workflow. I didn't think it was before, but I've come around on that. I think it's also nice to have an LLM do a once-over to point out areas where I may have missed catching an error (like with JSON.parse in JavaScript or something).
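For instance (a made-up minimal example, with a hypothetical readSettings helper): JSON.parse throws a SyntaxError on malformed input, and an unguarded call is exactly the kind of thing a once-over tends to flag:

```typescript
// Hypothetical helper: JSON.parse throws on malformed input, so guard it.
function readSettings(raw: string): { retries: number } {
  try {
    return JSON.parse(raw);
  } catch {
    return { retries: 3 }; // fall back to a default instead of crashing
  }
}

console.log(readSettings('{"retries": 5}')); // { retries: 5 }
console.log(readSettings("not json"));       // { retries: 3 }
```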
It's just not my cup of tea, personally. I've found that I'm faster writing code myself and treating an LLM as an assistant or rubber duck, but to each their own.
I'm referring to wholly AI generated code with no human input besides a prompt or "vibe coding." You literally can't put enough context into a prompt to have it write the exact code you'd need in every case. Your prompt would end up just being code at that point.
That's the whole point of writing code. Precise and exact instructions for a machine. You're not going to get that by adding a statistical natural language layer in the mix.
If you're using the right models (Claude 3.7 Sonnet, Gemini 2.5 Pro) and are good at prompting it very possibly can write and deploy thousands of lines of code to production without you needing to change a single thing.
Of course, odds are there in fact is something you need to change - maybe a poor design choice or a bug or missing logic. So you of course do need to always thoroughly review it. But reading 1000 lines is faster than coming up with 1000 lines you plan to write and writing them. And also, if you see a missing thing, you can just do a follow-up prompt in the same chat context rather than actually typing a single thing into the text editor.
I know it can feel alien, and I definitely still spend a lot of time manually writing and editing code, but I'm trying to outsource more and more to the model, putting myself into a mindset of "first try to see if I can accomplish all this with prompts, then fall back to 'raw coding' if it fails after a few tries" for everything, and I find it's speeding me up a lot.
You should give it another shot. Maybe wait another year first for the editors and models to get even better than they are right now.
>I'm referring to wholly AI generated code with no human input besides a prompt or "vibe coding." You literally can't put enough context into a prompt to have it write the exact code you'd need in every case. Your prompt would end up just being code at that point.
True, but... you can do that! It may or may not be faster than writing the code you want, true, but sometimes I think it will be faster/simpler. Gemini 2.5 now (or soon?) supports a 2 million token context window. You can write a very precise spec in the prompt. Use formal language, or use a little DSL you invent on the spot, or say "it should do X and Y and account for Z and also try to cover other things if you realize there are more", etc. There's a lot you can do.
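As a made-up illustration (every name here is invented on the spot), such a mini-spec in a prompt might look like:

```text
POST /invoices
  validate: amount > 0; currency in [USD, EUR]
  on success: 201 with { id }
  on validation failure: 422 with per-field messages
persistence: postgres table "invoices", include the migration
tests: happy path, plus one test per validation rule
```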
There absolutely will still be many scenarios where it's faster overall to just write the code or where it really is harder to express what you want to say in English vs. in code, but those scenarios may be less common for you than you currently think or expect.
> deploy thousands of lines of code to production without you needing to change a single thing.
I’m not saying this is impossible. I’m saying it leads to poor quality products. Deploying thousands of lines of code isn’t necessarily a good thing. Often it’s not.
> You can write a very precise spec in the prompt. Use formal language, or use a little DSL you invent on the spot, or say "it should do X and Y and account for Z and also try to cover other things if you realize there are more", etc. There's a lot you can do.
At this point, why use an LLM at all? Why introduce a black box? We can perfectly and tractably convert formal languages into machine code.
Things are never simpler when black boxes are involved…
These tools, again, are undoubtedly useful and sometimes (albeit inconsistently) magic.
But they’re not a silver bullet for making software.
I tried vibe coding literally yesterday, as I do every week or so. I used avante.nvim and CodeCompanion. I tried with Gemma 3 and Claude.
It’s slow, boring, and I (someone with ADHD) lose all focus when the LLM starts running.
The output is always prototype quality. It looks okay and mostly works correctly (granted, I usually just make a todo list or a job board), but it’s obviously overcomplicated and bloated.
If you don’t care about quality or long-term maintenance (like with a prototype or POC), then it’s fine.
The code I have AI generate matches the production quality code I've shipped throughout my career. SOLID code, no security flaws, unit-tested, documented, commented, fast, secure, composable vs inheritable, no magic strings, etc etc etc.
>You literally can't put enough context into a prompt to have it write the exact code you'd need in every case.
You’d want the A and B to be intentional, not automatically generated. Every VP thinks their idea for a feature will revolutionize the company.