That's because it hasn't been beaten yet! As soon as a chatbot beats a Turing test, a lot of AI deniers will come out and say that the Turing test doesn't measure 'real' intelligence, or isn't a valid test for <reasons>.
I know this is true, because there are already a lot of people who think the Turing test isn't valid. They believe it could be beaten by a stupid chatbot, or by deception on the part of the AI. Just search past HN discussions of the Turing test; it comes up a lot.
There is no universally accepted benchmark for AI, let alone a benchmark for AI progress, which is what Go is.
No one claimed that Go would require a human-level AI to beat. But I am claiming that beating it represents progress towards that goal, whereas passing the Turing test won't happen until the very end. Games like Go are little milestones along the journey.
Viewed that way, I'll accept that beating Go represents progress. That's not the same as saying it is evidence that singularity-style strong AI is almost upon us, as suggested in the post I was replying to. In the long term it might turn out to represent very minimal progress towards that goal.
Chatbots are trivially easy to beat. Just try teaching one a simple game and asking it to play with you. Basically, ask any question that requires it to form a mental model of something and then mutate or interrogate that model's state.
Many chatbot Turing test competitions have heavily rigged rules that restrict the kinds of questions you're allowed to ask, in order to give the bots a chance.