
Oh, lord, spare me the corporate apologetics.

Releasing a chatbot that confidently states wrong information is bad enough on its own — we know people are easily taken in by such things. (I mean, c'mon, we had people falling for ELIZA in the '60s!)

But to then immediately position these tools as replacements for search engines, or as study tutors, or as substitutes for mental health professionals? These aren't "products that shipped with defects"; they are products that were knowingly shipped despite clear evidence that they were harmful in fairly obvious ways, and that's morally reprehensible.





Ad hominem attacks instantly signal "not worth engaging with".

That's a funny irony: I didn't use an ad hominem in any way, but your incorrect accusation of one leads me to the same conclusion about you.


