Hacker News
Scientists aghast at AI rat with genitals in peer-reviewed article (arstechnica.com)
75 points by kristianp 9 months ago | 66 comments



The journal in which it appeared is a predatory journal with a good PR department. I hope this is the last nail in their coffin. If you see something published in a "Frontiers In" journal, it's most likely rubbish.

Only god knows what kind of peer review the article went through (if any). In the most optimistic case, it was indeed reviewed by peers of the people who submitted an obviously AI-generated article; they are probably already thinking of submitting their own.


Frontiers In, more like Frontier Sin?


Not only that. The JAK "pathway" is complete gibberish. The third figure is a pool of random spheres in "differentiation" steps. The whole Frontiers publishing group is widely seen in academia as worthless, however, since most promotion committees just count the number of publications no matter where they may be, they continue to publish anything and make bank.


This has not been my experience with any of the promotion committees I've been on.

But Frontiers has been off my list since Frontiers in Public Health published a number of AIDS denialism papers. And yeah, every figure in this paper is nonsensical garbage.


Is the text garbage as well? I don't have any knowledge of the subject area.


It's not my field, but it doesn't feel like garbage.

Interestingly, the authors do disclose they used Midjourney.


The text was ok, I do not know anything about spermatogenesis but a fair bit about the JAK/STAT pathway. The abstract was pure LLM, however. It has been retracted.


I saw the picture get posted on social media earlier today, and I took one look at it and thought "this looks like something from the Cursed AI Facebook group I'm in" before I read the post accompanying it and saw them saying it was published in a peer-reviewed journal.

How could this possibly pass peer review?


> How could this possibly pass peer review?

they've probably outsourced that to generative AI too


I wouldn't be surprised. I've seen people say that instead of reading articles now, they just open up ChatGPT and say "Summarize the important parts of the following article for me".

My experience is that ChatGPT is laughably wrong when it comes to generating any kind of factual information. At best, it's good for feeding ridiculous prompts into it and seeing what the most cursed thing you can get it to spit out is, so you can have a laugh.


My experience is querying an LLM usually does ok if the only knowledge it needs is directly included in the prompt (including for summarization). It’s not perfect but its reliability is within the realm of human error and so isn’t significantly worse than me skimming the text myself.

But it’s really unreliable if it’s leveraging “knowledge” from its training. Asking what something in an article means, for example, is pretty risky. Open-ended questions that cause knowledge dumps are very risky—the longer the answer, the riskier. Even text translation is risky, even though it’s ostensibly a simple transformation, because it has to draw the word mappings from training. And god help you if you ask it a leading question without realizing it. It’ll often just parrot your assumption back to you.

To your point, though, using an LLM as an oracle/teacher would be a lot less dangerous if it were always laughably wrong but it’s usually not. It’s usually 90% right, and that’s what makes it so insidious. Picking out the 10% that isn’t is really hard, especially if you don’t know enough to second-guess it. The wrong stuff usually looks plausible and it’s surrounded by enough valid info that what little you do know seems to corroborate the rest.


The reviewers' names are listed on the publisher's (Frontiers) site [0, top right]

[0] https://www.frontiersin.org/articles/10.3389/fcell.2023.1339....


Would be interested to hear their side of it.


>How could this possibly pass peer review?

Not just peer review. Frontiers claims "our editorial office conducts a pre-screening for validation of research integrity and quality standards" before that point. There's also supposed to be a "final validation" phase where they go over it again. It shouldn't have made it to peer review, and it shouldn't have made it out of it.


Crassness breeds clicks, and that's all they want.


Who is "they"? The "researchers" wouldn't want to be known as frauds, they were probably hoping this would slip by quietly so they could list it as a publication and no one would ever read it. The reviewers are probably being harassed with inboxes full of AI rat phalluses right now. The journal wouldn't want to be made famous for publishing it. Frontiers has enough reputation issues without the most viewed paper of the year being this, and they don't get paid by readership, so they'll likely try to clean things up, missing out on the article processing fees from other Midjourney users.


Scientific publishing is a scam. So it is hardly surprising that one of the biggest scientific publishing houses was run by a crook (Robert Maxwell). https://www.theguardian.com/science/2017/jun/27/profitable-b...


While there is a lot to be said about that, that's a different affair. This is just Frontiers doing exactly what you'd expect it to do. It's a hot-garbage predatory journal with a low impact factor (low trust, parasites).


Precisely. And for some reason I'm downvoted for saying it too. Lol

These sorts of predatory journals are in the business of publishing anything for page charges; they don't do real peer review!


I like the comment beneath the article:

"I'm mentally tormented by the fact that the rat is casting his gaze upwards, to heights not even revealed in the image."


Probably up to the pecker's tip. Geometrically correct if not anatomically so.


Perhaps the author(s) were reprising Alan Sokal's seminal work?: https://en.wikipedia.org/wiki/Sokal_affair


If so, I salute them. Exposing sloppiness and malpractice is a noble pursuit. I had my own bit of fun and 15 minutes of fame with software 'awards': https://successfulsoftware.net/2007/08/16/the-software-award...


Whether hoax or laziness, you have to be pretty ballsy to submit that article with your name on it.


Assuming those are real people's names.



To the AI's credit, it did successfully label the rat as "rat". Shame about everything else, but small victories.



That second image reminds me of https://en.wikipedia.org/wiki/Turbo_encabulator


I wonder who paid the US$ 3,295 submission fee for this garbage.


Simple solution: fire the editor.

Bit harder: into the sun.

Seriously, did anyone even look at it before it was added?


I wonder if this paper will end up getting a lot more attention as the "rat dck paper" in articles about misuse of AI than it would have as an honest publication on the same topic.


Haha this is great. Somewhere out there some job requires peer-reviewed papers and this journal provides that presumably for a fee. A classic example of meeting the incentives.


The AI figured out how to cross a rat with a tanuki...


Will the people that did the peer reviews be sanctioned in any way? Seems so blatant that it doesn't look like it was actually reviewed at all.


To me, this is the real question. One of the purposes of the peer-review is to validate and verify results, which was clearly not done to a great extent here. Perhaps the reviewers were also using some type of AI?


The reviewers are cited as part of the publication:

Binsila B. Krishnan, National Institute of Animal Nutrition and Physiology (ICAR), India

Jingbo Dai, Northwestern Medicine, United States


From a quick Google, they appear to be real people.


Frontiers has an explicit rule against that, although I doubt any reviewer would admit to it.


Perhaps the reviewer was an AI. Which gives a new spin to 'peer review'.


makes sense, because the peer of AI can only be AI


perhaps it is all true, and we are all living in end times?


“Where do you buy this animal? Asking for a friend”


One of the things that worry me the most about AI is that it enables some of the sloppiest and laziest operators to become even more sloppy and lazy.


The future is probably mediocre. Things that are "good enough" replace real craftsmanship.


Except where it is needed the most. I doubt you’ll get a fusion reactor running with sloppy craftsmanship.

I install heavy machinery for a living. The devices I install simply won’t work unless I do it near perfect.


The Simpsons predicted that


That thing isn't even "good enough". I mean, it literally has the label "dck" on it. Not even "dick", just "dck". It's hilariously bad for anyone bothering to look for even 5 seconds, and yet it was supposedly peer-reviewed by multiple serious scientists, and the absolutely hilariously awful diagrams were judged just fine. I don't think it rises to the level of "mediocre"; it's more like "anything goes as long as it looks pretty from afar".


I disagree. AI makes us all better. AI makes the fastest possible draft faster. AI makes the best possible quality output better.

The problem is that some people go for volume over quality.

How do we name and shame those people?


Hence the washing machine that dies 1 week after the warranty expires. Yay for capitalism!


I suppose manufacturing washing machines that reliably give up the ghost within weeks of warranty expiry, but not before, is its own kind of impressive.


Not that hard when they have a processor in them that knows the date, or an Internet connection. I'm not suggesting anyone has done that yet. But it's only a matter of time, surely?


Great, I can't wait for appliances that stuxnet themselves shortly after their warranty period.

New home automation hack: have your pi-hole pull double duty by running a local, pirate NTP server that forever makes the appliances think they're only a year old.
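For the curious, that pirate-NTP gag is easy enough to sketch. A minimal mockup, assuming a hypothetical appliance that trusts plain (unauthenticated) SNTP and a made-up frozen date; a real NTP server would also echo the client's transmit timestamp into the originate field, which this toy skips:

```python
import socket
import struct

NTP_EPOCH_OFFSET = 2208988800   # seconds between the NTP epoch (1900) and the Unix epoch (1970)
FROZEN_UNIX_TIME = 1577836800   # hypothetical: always claim it's 2020-01-01 00:00:00 UTC

def build_ntp_response(frozen_unix_time: int) -> bytes:
    """Build a minimal 48-byte mode-4 (server) NTP packet claiming a fixed time."""
    ntp_seconds = frozen_unix_time + NTP_EPOCH_OFFSET
    # First byte 0x24: LI=0, version=4, mode=4 (server). Stratum 2, poll 0,
    # precision -20. Fractional parts of all timestamps are left at zero.
    return struct.pack(
        "!BBbb11I",
        0x24, 2, 0, -20,
        0, 0, 0,                 # root delay, root dispersion, reference ID
        ntp_seconds, 0,          # reference timestamp (seconds, fraction)
        0, 0,                    # originate timestamp (should echo the client)
        ntp_seconds, 0,          # receive timestamp
        ntp_seconds, 0,          # transmit timestamp
    )

def serve_forever(host: str = "0.0.0.0", port: int = 123) -> None:
    """Answer every NTP query on UDP port 123 with the same frozen timestamp."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind((host, port))
        while True:
            _, addr = sock.recvfrom(48)
            sock.sendto(build_ntp_response(FROZEN_UNIX_TIME), addr)
```

Point the pi-hole's DNS at this box for `pool.ntp.org` and friends, and every query gets the same answer; the appliances stay forever young (assuming they don't use authenticated NTP or a baked-in RTC, of course).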


GE washing machines did exactly that.


have you got a link for that?


I imagine it's quite difficult to get a washing machine to die as close as possible to the end of the warranty, but not before it. Just one week after must take some real skill.


Precision enshittification. You'll soon be able to do a masters degree in it.


> "Precision enshittification. You'll soon be able to do a masters degree in it."

Only until someone trains an "AI" model to do it automatically... ;~)


Let's not forget that OpenAI has probably scraped that POS paper and added it to ChatGPT. Someone should ask it what "iollotte sserotgomar" is.

Also, a note on the mission statement:

"Frontiers follows the guidelines and best practice recommendations published by the Committee on Publication Ethics (COPE). Authors should refer to the author guidelines for full details.

Frontiers in Cell and Developmental Biology is member of the Committee on Publication Ethics."

Maybe COPE better get its ethics guidelines updated for AI.


I wonder what happens when the majority of the Internet is bullshit generated by LLMs (which can't be long now, surely), which is then used to train the latest LLMs. Will we pass an LLM bullshit horizon where the entire Internet becomes "iollotte sserotgomar" and its brethren?


I guess we always feared AI would train on itself and become super smart; turns out the opposite happens.


IIRC, the quality takes a dive when an AI is uncritically trained on its own output.


I guess that's the "singularity" we've been told about - when the amount of bullshit is so massive that it completely overwhelms any attempts at making sense of anything online or find anything.


Yes, singularity is a good word to use. I wrote up my thoughts here: https://successfulsoftware.net/2024/02/18/the-ai-bullshit-si...

HN thread: https://news.ycombinator.com/item?id=39422528


This is deliberately adversarial, not sloppy


It is actually Ars Technica that should be ashamed of this article. There is nothing at all newsworthy about a predatory journal publishing garbage, there are soooooo many of these. The fact that the journal says it's "peer-reviewed" is meaningless - they can say that and mean whatever they want it to mean. There are loads and loads of BS predatory journals that are "peer reviewed" but will print anything on their web journals. They charge authors page charges and make money off scammy folks who want to list "publications" on their own websites for credibility points.

There is no way real scientists are "aghast" - they are just rolling their eyes.

Source: my partner is a scientist and was the managing editor for a legit journal for seven years. She gets hilarious predatory journal spam ALL THE TIME.

Page charges are normal for academic journals, so unfortunately it's a scam that's not likely to go away any time soon.



