ChatGPT also has areas where it can recognize the truth. I agree that some people can hedge their confidence in some areas, but that's not a universal trait everyone exhibits all the time. I think this shows that we're only sometimes generally intelligent.
The real difference is in scale. Automations can be coordinated to produce self-affirming bullshit at a scale that drives real discussion out of view. You already see this on Twitter with troll farms and primitive bots. Now it will be a tidal wave.
ChatGPT will provide correct answers to a lot of questions, especially ones where you'd expect to find mostly correct answers in the first few Google results.