Hacker News

I think you're trying to project and are seeing more than what there really is.

For 99.9% of the people who take issue with such things, it's not about "oh, it's hiding very well as a human and could take over, so I distrust AI more because of it".

It's about the uncanny valley problem: our brains are pattern matchers, and they have difficulty classifying an AI that uses these new features as either the "human pattern" or the "AI pattern", so we don't like it and feel weirded out by it. Give it a few years and suddenly it will have become part of the AI pattern and all will be well.

I'm not saying distrust toward AI is growing or not growing, but that it doesn't factor into the public reaction to such AI features at all.



I hope I'm not projecting a distrust of simulacra. I distrust other humans, not our tools. But I'm not oblivious to my bubble, or to people outside it who have no idea how these tools work or how others might use them to their direct or indirect disadvantage. People create stories to explain their losses, and sometimes they blame the tools.

I understand the uncanny valley as the product of a pattern-matching exercise that humans developed well before we encountered AI: part of our "friend or foe", "like me or unlike me" mechanism. If a user considers the tool a "potential foe" (for instance, a customer service resolution bot), every shallow attempt by the tool to approximate "friend" could further alienate the user. I don't think the "weird feeling" just goes away at that point.

I'm also pretty sure that we're going to be in the uncanny valley long enough for that "weird feeling" to be used for effect outside of fiction.



