
Yeah, this article is full of misinformation. The water argument is only one example.

I started using Thariq’s approach yesterday and it works very well. One thing I noticed is that I’m no longer wary of reading long and complex spec documents. Opus does a great job with web design and uses a combination of clean, modern styling and interactive elements that reduces my cognitive load and improves my ability to understand the details of what it is planning to build.

I care about solving problems for and delivering value to my users. The software is simply a means to that end. It needs to work well, but that does not mean every line of code requires an artisanal touch and high attention to detail.

I think there's some ambiguity in the discussion around what people mean when they say "good code".

Good code for a business is robust code: code that's functionally correct, efficient where it needs to be, and does not cost too much.

I believe most developers who care about good code are trying to articulate this: they care about a strong system that delivers well, which comes from good architecture.

LLMs actually deliver pretty well on the more trivial code cleanliness stuff, or can be made to pretty trivially with linters, so I don't think devs working with them should be worried about that aspect.

What is changing fast is that last point I mentioned, "does not cost too much": if you can get 70% of the requirements for 10% of the perceived up-front cost, that calculus has changed. But you are not going to be getting the same level of system architecture for that time/cost ratio. That can bite you later, as it does often enough with human coders too.


I think the other aspect to this, which you allude to at the end, is that all of these arguments start with the assumption that all human software engineers produce high-quality code that meets the requirements, but obviously that's very much not the case in the real world. After all, 80-90% of drivers rate themselves as above average.

If one compares a single competent software engineer directing a number of agents against a random group of engineers (not necessarily working at FAANG or a YC startup), then those quality arguments are going to be significantly less compelling.


But the trick is that if/when you can define "good code" in a deterministic manner, then the LLM can also deliver "good code".

But if it's just based on feels, then of course it can't do it, because it's not a mind-reading machine.
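
To make that concrete: "deterministic" can be as mundane as a gate script, where lint rules, type checks, and tests either pass or fail with no vibes involved. A rough sketch in Python (assuming ruff, mypy, and pytest are available; the tool choices and layout are illustrative, not anyone's actual pipeline):

    # Hypothetical quality gate: every check here is deterministic, so an
    # LLM's output can be judged against it mechanically. The tools and
    # paths (ruff, mypy, src, pytest) are illustrative assumptions.
    import subprocess
    import sys

    CHECKS = [
        ["ruff", "check", "."],           # lint/style rules
        ["mypy", "src"],                  # static type correctness
        ["pytest", "--maxfail=1", "-q"],  # functional correctness via tests
    ]

    def main() -> int:
        for cmd in CHECKS:
            # A nonzero exit code from any tool fails the whole gate.
            if subprocess.run(cmd).returncode != 0:
                print(f"failed deterministic check: {' '.join(cmd)}")
                return 1
        print("all 'good code' checks passed")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Point the LLM at whichever check fails and loop until the gate is green; that's the sense in which pinning down the definition lets it deliver "good code".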


Once you’ve done the work to deterministically define your system, you’re not vibe coding anymore. You’re officially an engineer who cares about the consistency and robustness of your product, not just its superficial outcomes.

So it's Vibe Engineering then? :)

If it is "vapor", then how are the various megacorps able to use the preview release? Do you think they are all in on some giant conspiracy?

Or they are all riding the hype train and pumping AI just as much as always. Lest we forget, Sam did the same thing with GPT-5.

Sure, unnecessary widespread investor fraud right before an IPO. Really smart.

Why fraud? Just hold the release while you hype it up. Give early access to people who have an incentive to see you succeed and an incentive to keep that early access.

Sounds like America, where you have a president who continuously runs pump-and-dump schemes.

Ah yes, the good old No True Scotsman fallacy, where we smugly proclaim that anyone who thinks AI is good now must be working on unimportant and easy things.

Ah yes, the (please copy & paste this message into an LLM to get it to generate a novel response in these parentheses)

My comment from 3 years ago is still something I agree with: https://news.ycombinator.com/item?id=34925145

Actually, it's some of the same people. I won't name names, but there are a lot of AI skeptics on this site who loudly and prominently comment on every AI story. And if you look at their posting histories you'll see the exact type of goalpost-shifting the parent commenter is talking about.

You see it elsewhere as well. There's now a cottage industry of people (with visible members like Ed Zitron) who have made a career out of creating and selling anti-AI content. At first they were complaining that AI lies constantly. As AI got better, they shifted to other talking points.


There are 8 billion people on the planet. You can find a seemingly large group of people who believe anything. That doesn't mean the group exists in a way that's worth talking about.

> There's now a cottage industry of people (with visible members like Ed Zitron) who have made a career out of creating and selling anti-AI content

I can't believe that Ed Zitron, who I just looked up, has made a career out of creating and selling anti-AI content. He's 40. He cannot have been doing that for very long.

> At first they were complaining that AI lies constantly. As AI got better, they shifted to other talking points.

Calling the truth "complaining" seems more revealing of you than of them. If the AI was lying constantly, they weren't "complaining"; they were telling the truth. Once the AI stopped lying so much, they stopped saying it, because it would no longer have been true. But there are still other issues to talk about. That's... right? Isn't it?


I would say we are way past unacceptable.

It's because software engineering, which deals with bits, evolves dramatically faster than other engineering disciplines, which deal with the physical world.

>> Defining exactly what the product is supposed to do is the hard part, writing code is the easy part.

There is a massive difference between a spec, which defines what the product should do, and code, which defines exactly how it should do it. Moving from the former to the latter is not "the easy part". Anyone who genuinely believes that either works on easy and straightforward problems, or is some sort of programming god. Because translating specs to code can still be difficult and exhausting.


> Defining exactly what the product is supposed to do is the hard part, writing code is the easy part.

> There is a massive difference between a spec, which defines what the product should do, and code, which defines exactly how it should do it.

He states: the difficult part is figuring out the details, so the LLM doesn't save much time. You state: if the LLM is able to correctly assume the details, that saves you a lot of time.

Case 1: Part of the spec describes some basic feature based on a popular framework and industry standards; everything is trivial. You are right, he is wrong.

Case 2: Part of the spec describes some niche feature, and/or uses an unpopular framework, and/or requires deviation from industry standards, and/or has cutting-edge performance/latency requirements, and/or uses a bunch of proprietary, non-googlable data. You are wrong, he is right.

The more senior engineers are, the less time they spend on case 1; those are easy. It is case 2 that is much more time consuming.


>> I even got a warning on my OpenAI account.

I was using GPT-5.5 through Cursor recently, and it found what it thought to be a security-related issue. I read the code, didn't see what it was seeing, and said "Run the chain of operations against my local server and provide proof of the exploit."

It thought for a few seconds, then I got a message in the chat window UI saying OpenAI had flagged the request as unsafe and suggesting I use a "safer prompt."

Definitely soured me on the model. Whatever guardrails they are putting in place are too ham-fisted and stupid.

