Hacker News

> I don't understand what people with a higher than average IQ are getting from llms

People with higher than average IQ are smart enough to ask detailed and specific questions like “Fix the race condition in this [giant block of code]” rather than asking dumb, overly general things like “build me an app!!”.
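For concreteness, here is a minimal sketch of the kind of bug that prompt refers to (a hypothetical example of mine, not code from the thread): a Python counter where an unsynchronized read-modify-write loses updates.

```python
import threading
import time

counter = 0

def unsafe_increment():
    """Read-modify-write without a lock: a textbook race condition."""
    global counter
    tmp = counter        # every thread reads before any thread writes back...
    time.sleep(0.05)     # ...because this pause forces the interleaving
    counter = tmp + 1    # writes clobber each other; updates are lost

threads = [threading.Thread(target=unsafe_increment) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # expected 10, but the race leaves it lower (typically 1)
```

Pasting something like this, buried in a much larger module, is exactly the specific ask the parent describes.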

Also, ChatGPT models with internet access are completely happy to help you compare prices. Go try Bing Chat, or ChatGPT plugins.




So it's limited to people smart enough to know there is a race condition, but not smart enough to fix it, and trusting enough to accept the answer (how could this group audit the fix?).

If you know enough to ask the right series of questions you already know the answer.


I’m smart enough to fix a race condition, but why would I spend an hour debugging when ChatGPT can find the issue in 10 seconds? Sure, it might not be perfect, but having some initial code to fix the issue is a huge time saver.

Making ChatGPT write a whole app is problematic because small issues are hidden by the large quantity of generated code. Having ChatGPT fix an issue in your own code is comparatively easy to spot check for correctness.
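To illustrate why such a fix is easy to spot check, here is a hedged sketch (hypothetical code, not from the thread) where a single `threading.Lock` serializes the read-modify-write that would otherwise race:

```python
import threading
import time

counter = 0
lock = threading.Lock()

def safe_increment():
    """The same read-modify-write, but serialized by a lock."""
    global counter
    with lock:               # only one thread at a time runs this block
        tmp = counter
        time.sleep(0.01)     # the window that would otherwise lose updates
        counter = tmp + 1

threads = [threading.Thread(target=safe_increment) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # all 10 updates survive
```

A proposed fix of this shape is a few-line diff: you can verify by inspection that the critical section is covered, which is the parent's point about spot-checking a targeted fix versus auditing a whole generated app.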


"but why would I spend an hour debugging"

So you won't make the same mistake again.


I guess you're not really seeing the actual benefits of LLMs.

Let's say I don't "learn it". I would have to make the _same_ mistake 120 times before it would be worth the time to "learn" it myself.

I'd figure someone somewhat smart will look at this error and, after the 2nd or 3rd time asking GPT, will see a pattern.

It's amusing how reductionist people become about LLMs.


If you have to go through the effort of fixing it once, you will likely write code that won't produce that error, and you will warn others because of the pain you felt.


8 months later, when I write a block of code like that, I'll likely make the same mistake. People tend to learn the things they repeat more frequently...

Unless you're telling us you one-shot learn almost everything, in strange defiance of the capabilities of the humans around you.


I agree. That's why going through the muscle memory of fixing it will allow you to fix it 8 months or 8 years later.


I never said it was my code responsible for the race condition. ;)


Knowing that there is an error can be as trivial as reading a failed test or a bug report, something even someone grossly insufficient to the task could do.

Fixing the problem could be a 5-minute or a 5-hour tour. For the subset of tasks that fall towards the hard end of that range, validating and/or adapting a proposed solution is liable to be faster than starting from scratch. I don't know how this could possibly be in dispute, with capability rapidly increasing.



