
It's a specific case of the false dilemma, sure.

But, in life, when you meet enough AI evangelists, what was formally a logical fallacy becomes informally a useful, even necessary heuristic.





Perhaps; but I would argue that talking to many AI evangelists is a form of selection bias, which makes the false-dichotomy conclusion reasonable given the inputs but still inaccurate given reality.

True, it's a form of false dichotomy, but I think this specific instance is particularly interesting in that it allows the holder to dehumanise their opponent to an extent and justify a lack of discussion. It's also an incredibly common conclusion in politics after people gain a somewhat superficial understanding of both sides. I wonder if it might play a key role in social polarization.

For me, the strongest arguments are the ones that can present the opponent's side as effectively as the opponent would, and then show why it's weak. And that feels entirely incompatible with a dumb-or-evil argument.


>I think this specific instance is particularly interesting in that it allows the holder to dehumanise their opponent to an extent, and justify lack of discussion.

That's a wild take and a wild leap. For my own part, I see the failure or refusal to comprehend someone else's preferences, values, or boundaries as itself a profoundly human quality, even if it's a quality I don't love, rather than one which would cause me to see someone as less human.

I will admit that, when there's enough nonsense money being thrown after a vaunted object, sensible discussion can feel pointless. Prudence goes deaf amid the din of hype.

And yes, steelmanning can be highly persuasive, but not when the two parties' premises differ too radically. It's really more productive as a tool for improving your model of someone else.



