This is impacting online discourse too. It used to be that when someone was wildly wrong, it was relatively easy to identify why: ideology, a common urban myth, outdated research, whatever.

Now? I’ve seen people argue positions that are demonstrably, wildly wrong in unusually creative and often subtle ways, and there’s no way to figure out where they went off the rails. Since the LLM is responsive, they can use it to come up with plausible-sounding nonsense to answer any criticism, collapsing the debate into a black hole of bullshit.
