It's not going to "trigger" mass layoffs; it'll be used as a convenient scapegoat for mass layoffs that were always going to happen anyway to make room for more stock buybacks. Business as usual. Same shit, different hat.
Sometimes it feels like the advent of LLMs is hyperboosting the undoing of decades of slow societal gains in technical literacy that hadn't even come close to truly taking root yet. Though LLMs aren't the reason; they're just the latest symptom.
For a while it felt like people were getting more comfortable with and knowledgeable about tech, but in recent years, the exact opposite has been the case.
I think the real reason is that computers and technology shifted from being a tool (which would work symbiotically with the user’s tech literacy) to an advertising and scam delivery device (where tech literacy is seen as a problem as you’d be more wise to scams and less likely to “engage”).
This is a tool that is basically vibecoded alpha software published on GitHub and uses API keys. It’s technical people taking risks on their own machines or VMs/servers using experimental software because the idea is interesting to them.
I remember when Android was new it was full of apps that were spam and malware. Then it went through a long period of maturity with a focus on security.
It very probably is, but if it's a personal project you're not planning on releasing anywhere, it doesn't matter much.
You should still be very cognizant that LLMs will currently fairly reliably implement massive security risks once a project grows beyond a certain size, though.
They can also identify and fix vulnerabilities when prompted. AI is being used heavily by security researchers for this purpose.
It’s really just a case of knowing how to use the tools. Said another way, the risk is being unaware of what the risks are. And awareness can help one get out of the bad habits that create real world issues.
If an open weights model is released that’s as capable at coding as Opus 4.5, then there’s very little reason not to offload the actual writing of code to open weight subagents running locally and stick strictly to planning with Opus 5. Could get you masses more usage out of your plan (or cut down on API costs).
“Human artisan era of code” is hilarious if you’ve worked in any corporate codebase whatsoever. I’m still not entirely sure what some of the snippets I’ve seen actually are, but I can say with complete certainty that none of it was art.
The truth about vibe coding is that, fundamentally, it’s not much more than a fast-forward button: if you were going to write good code by hand, you know how to guide an LLM to write good code for you. If, given infinite time, you would never have been able to achieve what you’re trying to get the LLM to do anyway, then the result is going to be a complete dumpster load.
It’s still garbage in, garbage out, as it’s always been; there’s just a lot more of it now.
> I got into it for the thrill of defining a problem in terms of data structures and instructions a computer could understand, entering those instructions into the computer, and then watching victoriously while those instructions were executed.
You can still do that with Claude Code. In fact, Claude Code works best the more granular your instructions get.