
> People outside of tech don't necessarily think this way. So hearing that "it produces incorrect answers" is kind of a deal breaker, no?

I think it's a deal-breaker for many people inside tech, too :-)

It looks like it may be useful for grunt work in a field where you are an expert and can check and correct anything it produces.

Where people expect it to be useful, though, is in providing them with information they do not already know, and the fact that you cannot trust anything it says makes it unusable for that case.

Yep. Copilot, for example, is great for churning out grunt code and filling in the blanks, but it's really crappy at anything that requires reasoning through a problem.
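
To be concrete about what "grunt code" means here: a purely mechanical, pattern-shaped function like the Python sketch below (names and fields invented for illustration) is the kind of thing these tools complete reliably, since the body follows from the first line or two. Ask them to reason through, say, an off-by-one in a tricky loop, and you're on your own.

    # Repetitive field-by-field serialization: given the dataclass
    # and the function signature, an autocomplete model usually
    # fills in the body correctly from the pattern alone.
    from dataclasses import dataclass

    @dataclass
    class User:
        id: int
        name: str
        email: str

    def user_to_dict(u: User) -> dict:
        return {"id": u.id, "name": u.name, "email": u.email}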

It is our responsibility as tech professionals to recognize this and explain it to laymen; otherwise we're in trouble.

I've said it before, and I'll say it again: it's really, really bad that these systems are made to speak in the first person, that they're often given "names", that they use human voices, and that they project authority. This is irresponsible engineering from a social and ethical POV, and our "profession", such as it is, should be taken to task for it.
