Google CEO Pushes 'Vibe Coding' – But Real Developers Know It's Not Magic (interviewquery.com)
14 points by birdculture 5 days ago | hide | past | favorite | 21 comments




Abolish this stupid, non-descriptive term.

It's very problematic for companies, mainly because of the tooling. Large companies are lumping Lovable, Replit, Bolt, and v0 together with Claude Code, Codex, etc., all under the "Vibe Coding" banner.

I try to file the former under the banner of "Prompt-to-app Tools" and the latter under "Autonomous AI Engineering".


Better for sure.

The ascendancy of non-descriptive jargon for everything is irritating as hell. If a term is supposed to mean "AI-generated code", it needs to contain at least the most important word from that description. Sad that this has to be explained now.


I'm a real developer and it absolutely is magic! If someone had shown me, five years ago, a demo of talking to a computer that works for an hour to find and fix my bugs all on its own, I'd definitely have called it magic, and I still call it magic.

Does it make mistakes? Yes, sometimes, but you can verify with tests or with Lean.
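The "verify with tests" step can be as simple as pinning the generated code's behavior with hand-written assertions before trusting it. A minimal sketch, where `slugify` stands in for a hypothetical function an agent might produce:

```python
# Treat the LLM-generated function as untrusted input and pin its
# behavior with assertions you write yourself.
import re

def slugify(title: str) -> str:
    """Hypothetical LLM-generated candidate: lowercase, hyphen-separated slug."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# Hand-written checks the candidate must pass before it is accepted.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces   everywhere ") == "spaces-everywhere"
    assert slugify("already-a-slug") == "already-a-slug"
```

The point is not the slug logic; it is that the tests come from you, not from the model, so a hallucinated edge case fails loudly instead of shipping.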


I experience(d) the same level of magic when I throw keywords into Google and get relevant results back. LLMs are just an extension of that kind of magic.

Their efficacy opens up a lot more possibilities, but given that they're not AGI (without getting into a definitions debate), a lot of the magic is gone. Nothing fundamentally changed. I still use them a lot and they're great, but it's not a new paradigm (which I would then call magic).

I think the key point here, too, is that LLMs demo like magic. You see the happy path and you think we have AGI. Show the me of 10 years ago the happy path and I'd be floored, until I talked to the me of now and got the whole story.


Agreed. Imagine if a developer was in a coma for 3 years, wakes up, and sees a state-of-the-art coding agent for the first time.

They'll be impressed for a few months and then start to notice where it fails.

It's not perfect. When you hire a new software dev, you'll quickly notice where he/she fails too.

That seems like a very shortsighted thing to do. Does he not understand the limitations of LLM generated code?

He understands how to hype and increase the value of his shares. Everything else is insignificant.

It's all about wrecking the economic futures of millions for a handful of billionaires to become trillionaires. Anyone who aids them is complicit in digging their own grave and graves for their friends and colleagues. Actions have moral and ethical consequences.

I am not a professional software developer but rather a multi-domain system architect, and I have to say it is absolutely magical!

The public discourse about LLM-assisted coding is often driven by front-end developers, or rather non-professionals trying to build web apps, but the value it brings to prototyping system concepts across hardware/software domains can hardly be overstated.

Instead of trying to find suitable simulation environments and trying to couple them, I can simply whip up a GUI-based tool to play around with whatever signal chain, optimization problem, or controller I want to investigate. Usually I would have to find/hire people to do this, but using LLMs I can iterate on ideas at a crazy cadence.
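A sketch of the flavor of throwaway tool described above, under illustrative assumptions (a first-order low-pass stage in a signal chain; the function names are made up, and a GUI slider would just wrap the same parameter):

```python
# Quick-and-dirty exploration of a first-order low-pass filter:
# discrete integration of y' = (u - y) / tau driven by a unit step.
def lowpass_step_response(tau: float, dt: float, steps: int) -> list[float]:
    y, out = 0.0, []
    alpha = dt / (tau + dt)          # standard first-order discretization
    for _ in range(steps):
        y += alpha * (1.0 - y)       # input u = 1.0 (unit step)
        out.append(y)
    return out

# After many time constants the output should have settled near 1.0;
# tweaking tau interactively is the whole point of such a tool.
resp = lowpass_step_response(tau=0.1, dt=0.01, steps=100)
```

This is deliberately disposable code: the value is in being able to vary `tau` and see the settling behavior in minutes, not in engineering quality.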

Later, implementation does of course require proper engineering.

That said, the way different models are hyped is often confusing. As mentioned, there is an overt focus on front-end design etc. For the work I am doing, I found Claude 4.5 (both models) to be absolutely unchallenged. Gemini 3 Pro is also getting there, but its long-term agentic capability still needs to catch up. GPT-5.1/Codex is excellent for brainstorming at the UX level, but I found it too unresponsive and opaque as a code assistant. It doesn't even matter if it can solve bugs other LLMs cannot find, because you should not put yourself in a situation where you don't understand the system you are building.


Agreed. I know networks, requests, protocols, auth flows, etc., but it's been years since I actually coded stuff.

This is magical to me.

I love Cursor; I use it to deploy Docker packages and fix npm issues etc. too :p

I use some guardrails, like SonarQube as a static code analyzer and of course some default linters. Checks and balances.


[flagged]


> Bad developers complain about LLMs,

I'm finding that the bad developers just do vibe coding because it is a shortcut. You look like an expert, but you are not.

I coded for many years before I got formal computer science training. At that point I understood how bad my coding actually was. It worked, but it was never scalable, never maintainable, and it used poor design patterns.

Vibe coders can't see that and assume the AI will.

Real developers can see the flaws, but if you start vibe coding without those skills, you are lost.


> I'm finding the bad developers just do vibe coding because it is a shortcut. You look like an expert but you are not.

Ultimately, it's about business value not amazing hand written code. Let the business profit judge whether it's good or bad to use generated code.

Furthermore, people always talk about LLM-generated code but don't talk about LLM-assisted testing, bug finding, automated documentation, etc. It can help increase code quality and reliability too, not just velocity. It's up to good software leaders in the company to set guidelines. Companies that can't do this couldn't write good code before LLMs anyway.


> Let the business profit judge whether it's good or bad to use generated code

You're getting paid to make this call. If you think the business is going to judge for you, you've got a massive misunderstanding of what you're paid for, or you work in a role where what you do doesn't really matter. Which leads me to my second point:

Many jobs are fake, and their business output does not matter. You don't get useful signals there, and LLMs will be a paradigm shift for people with these jobs, because you can automate those jobs (because what they produce doesn't matter).


> Ultimately, it's about business value not amazing hand written code.

You are missing my point entirely. Even if vibe coding could create perfect code, your junior prompter with little to no coding knowledge is not going to know the difference.

Creating something and not knowing how it works is worse than something created that doesn't work.


[flagged]


In smaller companies, sure.

But one of the big problems with our industry today (not unique to tech—it's a problem across all sectors) is massive consolidation. For a company like Google or Amazon, do you really think a gradual but significant increase in bugs is going to cost them profits? What's the mechanism? "Voting with your wallet" only works when there's a viable alternative, and these tech behemoths have been busily destroying all competition for decades with the blessing of the Chicago School antitrust "enforcers".

"The market will optimize for the best companies" only works when there's a free market. And that's not free market as opposed to a command economy: that's an idealized free market, where there's perfect competition, perfect information, and perfect liquidity. In other words, it only really works as a thought experiment.


Problem is that the "good" developers have built up a lot of experience prior to LLMs and therefore can distinguish good from bad code and know the limitations. But if everyone is supposed to use and embrace AI, how will new developers build up this intuition?

This intuition comes with experience, LLMs or not. Because it's not really intuition; it's being able to understand the problem, and the solution as code, better than the LLMs do. The problem is: will businesses justify 5+ years of training so that inexperienced developers can reach that point? Few companies will be able to afford it.

And if someone only learns coding from LLMs do you truly believe that they understand the code they are working with? I highly doubt that.

And that experience comes from actually stepping through a debugger and from doing research, not from whatever hallucination some LLM cooked up. They can't even properly construct CLI arguments and just make up flags that aren't even in the manpage.

Companies are short-sighted, then. I'd rather build up a good engineer and spend time on it than just sit them down in front of an LLM.



