The journal in which it appeared is a predatory journal with a good PR department. I hope this is the last nail in their coffin. If you see something published in a "Frontiers in" journal, it's most likely rubbish.
Only God knows what kind of peer review the article went through (if any). In the most optimistic case, it was indeed reviewed by peers of the people who submitted an obviously AI-generated article; they are probably already thinking of submitting their own.
Not only that.
The JAK "pathway" is complete gibberish. The third figure is a pool of random spheres in "differentiation" steps.
The whole Frontiers publishing group is widely seen in academia as worthless. However, since most promotion committees just count the number of publications no matter where they appear, Frontiers continues to publish anything and make bank.
This has not been my experience with any of the promotion committees I've been on.
But Frontiers has been off my list since Frontiers in Public Health published a number of AIDS denialism papers. And yeah, every figure in this paper is nonsensical garbage.
The text was OK; I don't know anything about spermatogenesis, but I know a fair bit about the JAK/STAT pathway. The abstract was pure LLM, however.
It has been retracted.
I saw the picture get posted on social media earlier today, and I took one look at it and thought "this looks like something from the Cursed AI Facebook group I'm in" before I read the post accompanying it and saw them saying it was published in a peer-reviewed journal.
I wouldn't be surprised. I've seen people say that instead of reading articles now, they just open up ChatGPT and say "Summarize the important parts of the following article for me".
My experience is that ChatGPT is laughably wrong when it comes to generating any kind of factual information. At best, it's good for feeding ridiculous prompts into it and seeing what's the most cursed thing you can get it to spit out, so you can have a laugh.
My experience is querying an LLM usually does ok if the only knowledge it needs is directly included in the prompt (including for summarization). It’s not perfect but its reliability is within the realm of human error and so isn’t significantly worse than me skimming the text myself.
But it’s really unreliable if it’s leveraging “knowledge” from its training. Asking what something in an article means, for example, is pretty risky. Open-ended questions that cause knowledge dumps are very risky—the longer the answer, the riskier. Even text translation is risky, even though it’s ostensibly a simple transformation, because it has to draw the word mappings from training. And god help you if you ask it a leading question without realizing it. It’ll often just parrot your assumption back to you.
To your point, though, using an LLM as an oracle/teacher would be a lot less dangerous if it were always laughably wrong but it’s usually not. It’s usually 90% right, and that’s what makes it so insidious. Picking out the 10% that isn’t is really hard, especially if you don’t know enough to second-guess it. The wrong stuff usually looks plausible and it’s surrounded by enough valid info that what little you do know seems to corroborate the rest.
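To make the "knowledge directly included in the prompt" distinction concrete: the safer pattern is to paste the full text into the prompt and ask the model to restate it, rather than asking it to recall anything from training. A minimal sketch using the OpenAI Python SDK (the model name and the system instructions are my own assumptions, not anything prescribed):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(article_text: str) -> str:
    # The full article goes into the prompt, so the model is (mostly)
    # restating text it can actually see rather than recalling training data.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-completion model works here
        messages=[
            {"role": "system",
             "content": "Summarize the article using only the text provided. "
                        "Do not add facts that are not in the text."},
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content
```

Even then it can drop or distort details, but as the parent says, that failure mode is closer to human skimming error than to the open-ended confabulation you get when it has to pull "facts" from training.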
Not just peer review; Frontiers claims "our editorial office conducts a pre-screening for validation of research integrity and quality standards" before that point. There's also supposed to be a "final validation" phase where they go over it again. It shouldn't have made it to peer review, and it shouldn't have made it out of it.
Who is "they"? The "researchers" wouldn't want to be known as frauds, they were probably hoping this would slip by quietly so they could list it as a publication and no one would ever read it. The reviewers are probably being harassed with inboxes full of AI rat phalluses right now. The journal wouldn't want to be made famous for publishing it. Frontiers has enough reputation issues without the most viewed paper of the year being this, and they don't get paid by readership, so they'll likely try to clean things up, missing out on the article processing fees from other Midjourney users.
While there is a lot to be said about that, that's a different affair. This is just Frontiers doing exactly what you'd expect it to do. It's a hot-garbage predatory journal with a low impact factor (low trust, parasites).
I wonder if this paper will end up getting a lot more citations as the "rat dck paper" in articles about misuse of AI than it would have as an honest publication on the same topic.
Haha this is great. Somewhere out there some job requires peer-reviewed papers and this journal provides that presumably for a fee. A classic example of meeting the incentives.
To me, this is the real question. One of the purposes of the peer-review is to validate and verify results, which was clearly not done to a great extent here. Perhaps the reviewers were also using some type of AI?
One of the things that worry me the most about AI is that it enables some of the sloppiest and laziest operators to become even more sloppy and lazy.
That thing isn't even "good enough". I mean, it literally has the label "dck" on it. Not even "dick", "dck". It's hilariously bad to anyone bothering to look for, like, 5 seconds, and yet it was supposedly peer-reviewed by multiple serious scientists, and the absolutely hilariously awful diagrams were just fine. I don't think it rises to the level of "mediocre"; it's more like "anything goes as long as it looks pretty from afar".
Not that hard when they have a processor in them that knows the date, or an Internet connection. I'm not suggesting anyone has done that yet. But it's only a matter of time, surely?
Great, I can't wait for appliances that stuxnet themselves shortly after their warranty period.
New home automation hack: have your pi-hole pull double duty by running a local, pirate NTP server that forever makes the appliances think they're only a year old.
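Not endorsing this, but as a sketch of how little it would take: NTP is a tiny UDP protocol, and a pi-hole-class box could answer queries with a clock pinned into the past. Everything below (the port binding, the offset, stratum 2) is my own illustrative choice, not a tested setup:

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800   # seconds from 1900-01-01 to 1970-01-01
LIE_BY = 2 * 365 * 24 * 3600    # claim it's two years earlier (pick your offset)

def to_ntp(unix_seconds: float) -> bytes:
    """Encode a Unix time as a 64-bit NTP timestamp (seconds.fraction)."""
    secs = int(unix_seconds) + NTP_EPOCH_OFFSET
    frac = int((unix_seconds % 1) * 2**32)
    return struct.pack("!II", secs, frac)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 123))     # port 123 needs root/CAP_NET_BIND_SERVICE

while True:
    data, addr = sock.recvfrom(512)
    if len(data) < 48:          # not a plausible NTP request
        continue
    ts = to_ntp(time.time() - LIE_BY)
    reply = (
        struct.pack("!BBBb", 0x24, 2, 4, -20)  # LI=0, VN=4, Mode=4 (server); stratum 2
        + struct.pack("!II", 0, 0)             # root delay, root dispersion
        + b"LOCL"                              # reference ID: local clock
        + ts                                   # reference timestamp
        + data[40:48]                          # originate = client's transmit timestamp
        + ts                                   # receive timestamp
        + ts                                   # transmit timestamp
    )
    sock.sendto(reply, addr)
```

Of course, this only works as long as the appliance trusts plain NTP from the local network; anything that checks a TLS-delivered time or pins a remote time server would see through it.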
I imagine it's quite difficult to get a washing machine to die as close as possible to the end of the warranty, but not before it. Just one week after must take some real skill.
Let's not forget that OpenAI has probably scraped that POS paper and added it to ChatGPT. Someone should ask it what "iollotte sserotgomar" is.
Also, a note on the mission statement:
"Frontiers follows the guidelines and best practice recommendations published by the Committee on Publication Ethics (COPE). Authors should refer to the author guidelines for full details.
Frontiers in Cell and Developmental Biology is member of the Committee on Publication Ethics."
Maybe COPE better get its ethics guidelines updated for AI.
I wonder what happens when the majority of the Internet is bullshit generated by LLMs (which can't be long, surely), which is then used to train the latest LLMs. Will we pass an LLM bullshit horizon where the entire Internet becomes "iollotte sserotgomar" and its brethren?
I guess that's the "singularity" we've been told about: when the amount of bullshit is so massive that it completely overwhelms any attempt at making sense of anything online or finding anything.
It is actually Ars Technica that should be ashamed of this article. There is nothing at all newsworthy about a predatory journal publishing garbage, there are soooooo many of these. The fact that the journal says it's "peer-reviewed" is meaningless - they can say that and mean whatever they want it to mean. There are loads and loads of BS predatory journals that are "peer reviewed" but will print anything on their web journals. They charge authors page charges and make money off scammy folks who want to list "publications" on their own websites for credibility points.
There is no way real scientists are "aghast" - they are just rolling their eyes.
Source: my partner is a scientist and was the managing editor for a legit journal for seven years. She gets hilarious predatory journal spam ALL THE TIME.
Page charges are normal for academic journals, so unfortunately it's a scam that's not likely to go away any time soon.