Hacker News

I find them very helpful, personally.



Understandable, they have been trained to convince you of their helpfulness.


If they convinced me of their helpfulness, and their output is actually helpful in solving my problems... well, if it walks like a duck and quacks like a duck, and all that.


if it walks like a duck and it quacks like a duck, then it lacks strong typing.
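For what it's worth, the joke works because duck typing really does defer the check to runtime: no declared interface, just "call the method and hope." A minimal Python sketch (the classes here are made up for illustration):

```python
class Duck:
    def quack(self):
        return "quack"

class Impostor:
    # Walks like a duck, quacks like a duck...
    def quack(self):
        return "quack"

def make_it_quack(thing):
    # Duck typing: no isinstance check, no declared type.
    # If `thing` has a quack() method, this works; if not,
    # you only find out with an AttributeError at runtime.
    return thing.quack()

assert make_it_quack(Duck()) == "quack"
assert make_it_quack(Impostor()) == "quack"
```

A statically typed language would force `thing` to declare the interface up front, which is the "lacks strong typing" punchline.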


"Appears helpful" and "is helpful" are two very different properties, as it turns out.


Sometimes, but that's an edge case that doesn't seem to impact the productivity boosts from LLMs.


It doesn't until it does. Productivity isn't the only or even the most important metric, at least in software dev.


Can you be more specific with like examples or something?


This is true, but part of that convincing is actually providing a response that is at least somewhat helpful and moves you forward.

I have to use coding as an example, because that's 95% of my use cases. I type in a general statement of the problem I'm having and within seconds, I get back a response that speaks my language and provides me with some information to ingest.

Now, I don't know for sure whether every sentence I read in the response is correct, but let's say that 75% of what I read aligns with what I currently know to be true. If I were to ask a real expert, I'd likely understand or already know 75% of what they told me as well, with the other 25% taken on trust until I understood it.

But either with AI or a real expert, for coding at least, that 25% will be easily testable. I go and implement and see if it passes my test. If it does, great. If not, at least I have tried something and gotten farther down the road in my problem solving.
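That "implement and test" step can be sketched concretely. Suppose the AI suggested a small helper whose edge-case behavior falls in the untrusted 25% (the function and test here are hypothetical, just to show the workflow):

```python
# Hypothetical AI-suggested helper: remove duplicates from a list
# while preserving order. The part I'm not sure about is whether
# it keeps the *first* occurrence of each value.
def dedupe(items):
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

# A quick test settles the untrusted 25%: either it passes,
# or I've learned exactly where the suggestion was wrong.
assert dedupe([3, 1, 3, 2, 1]) == [3, 1, 2]
assert dedupe([]) == []
```

Either way, the test moves the problem forward: a pass builds trust, a failure pinpoints the misunderstanding.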

Since AI generally does that for me, I'm convinced of its helpfulness, because it moves me along.





