You could also remove the incentive behind fake research in the first place - judging researchers purely by how many papers they push out - but that would undermine Springer Nature's entire business model, and we can't have that, can we?
I think analysis of gel and blot images is very important for the life sciences. But in general, the "peers" in the peer review process have to be more careful now.
AI-generated text is not the problem. Researchers can use it to cheat, but also to express their own ideas in better English.
Scientific language is 'dry' and limited, so the same phrase structure can be (and will be) used in many different papers by different authors, just with minor changes that suit the theme.
What's more, scientific articles repeat a lot of their own content, simply because that is how things work in science.
Most researchers in a field describe the same method in exactly the same way, just applied to different samples, because the method is the same, and merely providing a link can be inconvenient if your readers can't access that article.
Articles in the same field will also share a lot of their bibliography. An AI would have a really hard time understanding why including the same citation in many articles is not plagiarism or a problem. It would need to be able to split the text into two parts: one where a high percentage of similarity is acceptable, and another where the opposite is the correct outcome. Both parts would be scattered among different, non-consecutive paragraphs, so this is not a trivial task.
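To make that two-bucket idea concrete, here is a minimal sketch in Python. Everything in it is illustrative: the section names, the thresholds, and the use of SequenceMatcher as a stand-in for whatever similarity measure a real detector would actually use.

```python
from difflib import SequenceMatcher

# Sections where heavy reuse is normal vs. sections where it is suspicious.
# These names and cutoffs are made up for illustration.
REUSE_OK = {"methods", "references"}
LENIENT, STRICT = 0.95, 0.40

def similarity(a: str, b: str) -> float:
    """Crude text similarity in [0, 1] - a stand-in for a real measure."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_sections(paper: dict, prior: dict) -> list:
    """Flag sections whose overlap with a prior paper exceeds the
    threshold appropriate for that kind of section."""
    flags = []
    for name, text in paper.items():
        limit = LENIENT if name in REUSE_OK else STRICT
        if name in prior and similarity(text, prior[name]) > limit:
            flags.append(name)
    return flags

paper = {"methods": "Samples of mouse liver were lysed and run on SDS-PAGE gels under standard conditions.",
         "discussion": "Our results suggest a previously unreported regulatory role for this pathway."}
prior = {"methods": "Samples of rat kidney tissue were lysed and run on SDS-PAGE gels under standard conditions.",
         "discussion": "Our results suggest a previously unreported regulatory role for this pathway."}

print(flag_sections(paper, prior))  # ['discussion']: the near-identical methods are tolerated
```

The hard part the sketch glosses over is exactly the point above: real papers don't arrive with their "acceptable reuse" and "must be original" parts neatly labeled, and those parts are interleaved across non-consecutive paragraphs.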
Blocking an article because a phrase has been used previously would be like forbidding musicians from using the word "love" in their lyrics from now on.
Papers have even sneaked through when the AI-generated text wasn't readable at all. Anyway, this is a very hard problem, and it's getting harder all the time: so many startups are fighting detection.
Reasonable worries about fraudulent work have led to some good ideas -- perhaps the ones discussed in the Springer Nature article -- but also to some that I think are silly.
I reviewed something recently for a journal that had me do the usual things: select the outcome from a pulldown menu, write notes to the author and editor, and so on. That's all well and good.
But they also asked me to state whether I judged the work to be "correct". This journal deals with arcane numerical simulations of physics problems. Between them, the authors had several person-decades of experience at the cutting edge of this field. Without access to their computational resources (and technicians and programmers to support the work), and without a year to try to mimic the work, there's just no way I can get close to saying the work is "correct".
And what's the point? If I thought the work was incorrect, of course I'd point that out.
I think the question is just the journal's response to public scrutiny of science. A way to give the impression of vigilance, without requiring more work from the publisher.
PS. If the journal starts insisting that authors provide enough material for reviewers to check the work in detail, they won't get any reviewers. We are not paid and get no recognition for this work. We don't have the time or inclination to wade through hundreds of pages of notes about how models were set up, or to watch perhaps a hundred hours of video of the authors discussing alternative methods. And we don't have the funding to buy specialized software, pay server costs, and pay technicians to get things working.
PPS. sorry, I'm on a bit of a rant. The rant I wrote back to the editor was shorter and less rambling :-)