Exactly this. Whether the research is actually useful and correct is what matters. And if it is accurate, shouldn't that elicit applause rather than schadenfreude? It's feeling a bit like a click-bait rage-fantasy fueled by Pangram, capitalizing on the idea that AI promotes plagiarism / replaces jobs and now the creators of AI are all too human... and somehow this AI-detection product is above it all.
LOL. So basically the correct sequence of events is:
1. The scientist does the work, putting their own biases and shortcomings into it
2. The reviewer runs AI, generating something that looks plausibly like a review of the work but represents the view of a sociopath with no integrity, morals, logic, or consequences for making shit up instead of finding out.
3. The scientist works to determine how much of the review was AI, then acts as the true reviewer for their own work.
Don't kid yourself, all those steps have AI heavily involved in them.
And that's not necessarily a bad thing. If I set up RAG correctly, then tell the AI to generate K samples, then spend time to pick out the best one, that's still significant human input, and likely very good output too. It's just invisible what the human did.
And as models get better, the necessary K will become smaller....
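For the curious, here's a minimal sketch of that "RAG + generate K, pick the best" loop. The helpers `retrieve_context`, `generate_draft`, and `score_draft` are hypothetical stand-ins for your retrieval store, your model call, and whatever "best" means to you (a rubric, a human skim, another model):

```python
# Minimal sketch of the "RAG + generate K samples + pick the best" workflow.
# retrieve_context, generate_draft, and score_draft are hypothetical placeholders:
# swap in your actual retrieval store, model API call, and selection criterion.
from typing import Callable

def best_of_k(
    question: str,
    k: int,
    retrieve_context: Callable[[str], str],
    generate_draft: Callable[[str, str], str],
    score_draft: Callable[[str], float],
) -> str:
    """Generate k candidates grounded in retrieved context; keep the highest-scoring one."""
    context = retrieve_context(question)   # RAG step: ground the model in your sources
    drafts = [generate_draft(question, context) for _ in range(k)]
    return max(drafts, key=score_draft)    # "best" is whatever your scorer encodes
```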
That’s on you. You get to decide what “best” means when picking among the K, so you only get bs if you want bs.
I occasionally get people telling me AI is unreliable, and I tell them the same thing: the tech is nearly infinitely flexible (computing over the space of ideas!), so unreliability says a lot more about how they're using it than about the tool.