
No, that's the only thing to understand about it! You can't trust anything it says about anything. What it does is emit plausible-sounding information. But we've seen -- I think across every single realm of anything -- that this information includes complete nonsense, whether due to it being trained on people just shitposting bullshit, or due to combining tokens (words, sentence fragments, whatever) in new ways without the benefit of any actual understanding.

It can be useful, but it's at best as useful as a smart dog that's figured out how to operate a voice synthesizer, and is also on amphetamines or hallucinogens. I'm glad to see this term "LLM hallucination", as that's how it's felt to me.

I find ChatGPT really useful for things that I can double-check instantaneously (or nearly so), such as doc comments for the code I just wrote, or small unit tests that match the comment I just wrote.

But in my experience, it's worse than useless for writing real code, or any complex endeavor that isn't instantly verifiable/rejectable at a glance. Because vetting plausible-looking code -- including dependency specification -- is almost always more taxing than just writing the code or package.json entries yourself.




> No, that's the only thing to understand about it! You can't trust anything it says about anything. What it does is emit plausible-sounding information. But we've seen -- I think across every single realm of anything -- that this information includes complete nonsense

And that is, in my opinion, the best argument for why we won't be "replaced" by AI at work. This is consistent across all the tasks -- NLP, vision, RL -- they all stumble very quickly without assistance. How do you go from 0 to 1?

AI will make for a nice assistant, really nice actually, but not a human replacement. Of course in the far future there will eventually come a day when AI might work without hand-holding on some tasks, but we have no idea when. 15 years of self-driving-car (SDC) development attest to it: the closer you get to the goal, the more hidden hurdles you discover.


You’re missing the reality of how automation “replaces” people. It doesn’t do one person’s job 100%; it does the easiest 90% of everyone’s job. You won’t be replaced by an AI; you and 8 other people will be replaced by an AI plus one guy who has good prompt-fu and editing/fact-checking skills.


Yeah but not programmer jobs. If you could pay your engineers 2x to be 3x more productive, you would. But you can't.

But if you could pay some automaton 0.1x to make them 2x more productive, you obviously would. 2x is a stretch today, but the price of an applied-statistics-model add-on is still way, way less than 0.1x. So you surely would augment them that way, as (I think?) most companies are now doing.
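To make that cost arithmetic concrete (a rough sketch with hypothetical, normalized numbers -- the 0.1x and 2x figures are just the ones floated in this thread, not measurements):

    # Cost per unit of output, baseline vs. AI-augmented (made-up numbers).
    engineer_cost = 1.0   # one engineer's cost, normalized
    tool_cost     = 0.1   # "0.1x" for the AI add-on
    productivity  = 2.0   # "2x more productive" with it

    baseline  = engineer_cost / 1.0                          # cost per unit, unaugmented
    augmented = (engineer_cost + tool_cost) / productivity   # cost per unit with the add-on
    print(f"baseline: {baseline:.2f}/unit, augmented: {augmented:.2f}/unit")
    # baseline: 1.00/unit, augmented: 0.55/unit

Even if the real productivity gain is far smaller than 2x, the add-on pays for itself as long as the gain exceeds its price.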

So I think you're generally right but "AI" isn't remotely close to being able to do 90% of a fresh frontend bootcamp grad's job. It's literally more like 1%.


> you and 8 other people will be replaced by an AI plus one guy who has good prompt-fu and editing/fact checking skills.

So the maximum processing speed of the AI will be limited to the reading speed of the prompt guy, brilliant.

If every AI task requires a human in the loop, all you get is a small boost. Only when it can handle everything alone can it scale 100x or 1000x.


> Only when it can handle everything alone can it scale 100x or 1000x.

There'll always be a human somewhere at some level of the loop, at least until we make ourselves entirely obsolete.

If AI can handle 99% of the workload, that's a 100x improvement. If it can handle 99.9% of the workload, that's a 1000x improvement.
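To spell out the arithmetic behind those numbers (a rough sketch, assuming human review time is the only bottleneck and is proportional to whatever fraction the AI can't handle):

    # Hypothetical model: one human only touches the fraction the AI can't do,
    # so throughput per human scales as 1 / (1 - automated_fraction).
    def throughput_multiplier(automated_fraction: float) -> float:
        return 1.0 / (1.0 - automated_fraction)

    for f in (0.90, 0.99, 0.999):
        print(f"{f:.1%} automated -> {throughput_multiplier(f):.0f}x per human")
    # 90.0% automated -> 10x per human
    # 99.0% automated -> 100x per human
    # 99.9% automated -> 1000x per human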


Absolutely. I mean someday, whenever that is, who knows. But this time around -- with the whole LLM architecture behind it -- it definitely won't replace competent programmers, except at the extremes.

The test I apply in my own mind is this: If it could be done 10x better, but at the same cost, would it be worth it?

There's so much software architecture and implementation of software-intensive systems to be done that to me the answer is clearly yes for those jobs. But the same holds true for a soldier, or a surgeon.

OTOH, will applied statistics models replace writers? Like of daily news? Sure. But only if they were already writing undifferentiated crap. Because when you're already producing shit, nobody cares about the quality, so yep.

But should humans even be doing those jobs? I'm reminded of a (very surprising!) argument I had with a girlfriend in 2004. I was working on robots and I was explaining to her how robots could be trained to pick up litter and dog shit in industrial parks and university campuses, etc, so no human would ever again have to do those jobs.

She was incensed: What about the old guys who just need an easy job?! What are they gonna do?!

I have to admit I was stumped for an answer. Because in the incompletely-visualized world I was looking forward to, not picking up dog shit every day was an unqualified win... but what? Did everybody have UBI or housing or something? I hadn't really thought it through.

But anyway, the programming jobs this generation of applied-statistics LLM automatons are going to take away are the picking-up-dog-shit jobs. Which Copilot already does for me daily, and I really appreciate it and am more productive for it.

For $20/month or whatever, it's a great deal. But at even $500 it would be laughable.



