
> Could AI be a con?

The conclusion doesn't really mesh with the body of the argument, because you can measure whether AI is useful or not. How can something be a con when it's genuinely useful to me every day? When it writes half the code at our company? Methinks the author's base rate of P(con) is too high.




> When it writes half the code at our company?

Half the code "by volume", maybe. Surely it is not doing any of the intellectual work, planning, architectural decisions, etc., right?


Sure, but I think that's only a matter of time, to be honest.

No one anticipated junior SWEs being automated away so soon; I personally didn't think I'd live to see something like this.


Unfortunately human programmers tend to be assessed on volume too.


The author may be thinking of "con" in too abstract terms. (Apollo didn't relieve Penn & Teller of their money, after all.)

Imho, the misdirection is away from the prevalent industry/societal problem of technical debt.

A fellow HN'er has observed that AIDEs are great for getting on top of things, but not to the bottom of things (paraphrasing Knuth on emAIl).

Sorry for assuming that what you find useful is the help getting through the pile of make-work foisted upon you by the powers that pay the bills :)

I otherwise find AIs great for swiftly resolving FOMO, but not so much for tending to the ikigAI.


> When it writes half the code at our company

Do you have no code quality standards at your company? No code reviews?

I sometimes use AI to generate boilerplate code that I use as a base, but I end up refactoring it so heavily that rarely does anything actually AI-written get committed.

I can't imagine AI code passing code review. The quality is still pretty rancid. It has problems following modern coding standards because it is trained on old data, and it tends to over-complicate things.

I feel AI is mostly useful as a super slow search engine. Now that Google has been completely enshittified, yeah, it has its uses. But if we still had the good Google, would there be so much need for ChatGPT? When you could find relevant information in seconds and quickly copy-paste code from Stack Overflow instead of waiting for ChatGPT to generate the same thing?


Our internal models have improved dramatically. I think people are also better at using them now. Nearly every eng I know uses them to write non-boilerplate code as well.


What system are you using to generate the code? We felt this way a year ago, we don’t feel this way now at my company.


I check in on Copilot every quarter or so to see if it’s gotten smart enough to write one of my simpler PRs for me, and as of 2 weeks ago it’s still not. Makes writing test cases a bit faster, but that was never really a bottleneck.


Copilot is pretty awful compared to the other options.

Cursor, Aide, and Amazon Q are on a completely different level than it is.


Copilot has an LSP I can add to whatever editor I need without breaking my entire existing developer workflow and tooling.

Cursor and Aide are both their own editors. Amazon Q might be able to integrate but a cursory search didn't immediately bring up a solution.

Even if the difference between Copilot and Cursor et al. were the difference between GPT-3.5 and Sonnet 3.5, it wouldn't be worth the switching cost. Even the places where Copilot feels useful and improves productivity haven't been worth it in my recent experience: the improved coding speed just lets me more quickly build shitty code that works around architectural issues. And this was on a greenfield project, may I add, where people have praised the abilities of AI greatly. The only place I've actually found Copilot useful, and where I miss it when I turn it off, is autocompleting the current line reasonably well.

Even with boilerplate I have found it making glaring mistakes. One example: generating the SQL for a many-to-many table where the PK of one of the referenced tables was a composite key. At first I thought, well, that was useful, it just generated the whole thing. Then I read the code: it had completely made up fields, even though the table it was referencing was 5 lines above in the same file. You can't even make an argument about context length, since the context was literally on the same screen.
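To make the failure concrete, here's a minimal sketch of what the correct output should look like; the table and column names are invented for illustration, and I'm checking the DDL with Python's sqlite3. The point is that the junction table has to repeat both columns of the composite key, rather than inventing fields:

    # Hypothetical schema sketching the composite-key case described above.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        -- Parent table with a composite primary key.
        CREATE TABLE order_items (
            order_id INTEGER NOT NULL,
            line_no  INTEGER NOT NULL,
            PRIMARY KEY (order_id, line_no)
        );

        CREATE TABLE tags (
            tag_id INTEGER PRIMARY KEY
        );

        -- Junction table for the many-to-many relation: its foreign key
        -- must carry *both* columns of the composite key, which is exactly
        -- what the generated code got wrong by making up fields instead.
        CREATE TABLE order_item_tags (
            order_id INTEGER NOT NULL,
            line_no  INTEGER NOT NULL,
            tag_id   INTEGER NOT NULL REFERENCES tags(tag_id),
            PRIMARY KEY (order_id, line_no, tag_id),
            FOREIGN KEY (order_id, line_no)
                REFERENCES order_items (order_id, line_no)
        );
    """)
    print("schema ok")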

I've seen much, much worse examples in real code, where the generated code is close enough that unless you properly review it you will miss the mistakes. So now not only do I need to think about the architecture, I need to write half the code and code-review the other half assuming a monkey high on LSD wrote it.

The real threat isn't that AI will steal coding jobs... The threat is that all the good developers will get so frustrated that farming goats might actually be a viable option.



