It's only a matter of time until someone comes up with a GPT that takes whatever off-axis theories a research paper writer wishes to promulgate, and searches the entire corpus of academic literature for references that can be strung together in such a way as to support any argument one likes.
A quack's dream come true: substantiating an argument by backsolving from its feeble or malevolent conclusion to a set of well-known premises, but with citations. It converts untenable speculation into something that passes many superficial tests of legitimacy, which is more than enough to boost it into broader and less critical visibility.
"Thick with citations, therefore truthy" is a big blind spot in the casual heuristics used to gauge the quality of a given piece of research writing, especially at the undergrad level, where this tool (let's call it CheatGPT) would be stupendously popular.
Have you been watching American politics for the last 20 years? From the Tea Party to Q/MAGA, they will cite any quack economist, quack scientist, or quack doctor who provides a theory that reinforces their narrative, data be damned.
There is in fact a treasure trove of high-level research that is too controversial for any expert to go near.
Wild things will happen if screaming "hoax" from ignorance can no longer shut down constructive efforts.
It will simply combine what is written about germ theory or heavier-than-air flying machines and produce sensible responses.
The patent databases are full of treasures, if you have, oh, 1,000 years to study them? Maybe 10,000?
It should also be possible to take a seemingly unworkable idea that makes no sense and gather just what is needed to bring it into reality.
For stuff you can build or otherwise test properly, it makes no difference what people think is possible.
People think very little is possible; we always have! "Everything that can be discovered has been discovered" has been the mantra for thousands of years, even as the things people actually accomplish grow more and more astonishing.
I understand what you describe, and its possible consequences. But I would put forward two arguments:
1. Those who want to believe in bullshit conspiracies will do so regardless of the number of citations in a research article terribly summarised by a clickbait web page of which they only read the headline. Anyone who can read more than 10 words has been using and abusing Google Scholar for years to support their nonsense; the ability to find a reference does not equate to the ability to critically appraise its contents.
2. Science is a small world. In each specific field you get to know the big names and institutes, and those are a better gauge of the quality of a paper. Peer review, for all its pitfalls, including its dependence on volunteers, does a good job of stemming a lot of bullshit. I'd proffer it's not the articles but rather the for-profit publishing houses, setting up multiple journals through which they funnel pay-to-publish articles, that are contributing to the dilution of trust in published science. On that note, scientists in each field know which journals to trust and which to double-check.
> scientists in each field know which journals to trust and which to double-check
I've not seen much evidence of this in my own reading. Maybe in some fields, but certainly not all. During the COVID years I read a lot of epidemiological and public health papers. They all had dozens of references and were published in well-known journals like Nature, the BMJ, the Lancet, etc. Yet when checked, many of the referenced papers simply wouldn't validate. For example, they existed but didn't actually support the claim being made. Sometimes they weren't even related, or actually contradicted the claim. Sometimes the claim would appear in the abstract, but the body of the paper would admit it wasn't actually true. And that was only one of the many kinds of problems peer-reviewed, published papers routinely had.
It became painfully apparent that nobody is actually reading papers in the health world adversarially, despite what we're told about peer review. The "a statement having a citation = it's true" assumption is very much held by many [academic] scientists.
It's a subcomponent of the very strong belief in academia that everyone within it is totally honest all the time. This is how you end up with the Lancet publishing the Surgisphere papers (built on an apparently fictional dataset) without anyone in the field noticing anything was wrong; instead, the problem was noticed by a journalist. It needs some sort of systematic fix, because otherwise more and more people will just react to scientific claims by ignoring them.
This is already possible with search engines: there is enough information on the internet to substantiate just about any claim, regardless of how much evidence there is to the contrary (see flat earth: plenty of plausible-sounding claims backed by real, albeit cherry-picked, evidence).
Yes, of course, this is already possible with AI writing assistants as well, if you're willing to plug some of the phrases they come up with into a search engine to figure out where they may have come from. But you still have to do the work of stringing the arguments together into a cohesive structure and figuring out how to find research that may be well outside the domains you're familiar with.
But I'm talking about writing a thesis statement, "eating cat boogers makes you live 10 years longer for Science Reasons", and having it string together a completely passable, formally structured argument, along with any necessary data, to convince enough people to give your cat-booger startup the revenue to secure the next round, because that seems to be where all these games are headed. The winner is the one who can outrun the truth by hashing together a lighter-weight version of it; though it won't stand up to a collision with the real thing, you'll be very far from the explosion by the time it happens.
AI criticism is essentially people claiming that having access to something they don't like will end the world. As you say, we already have a good example of this, and while it is mostly bad and getting worse, it's not world-ending.