Hacker News
Scientific sleuths spot dishonest ChatGPT use in papers (nature.com)
99 points by pseudolus on Sept 8, 2023 | 88 comments


The big news here is that there are journals in the Springer and Elsevier network that do not read the papers they publish whatsoever.

For this service of copy-pasting text into a digital PDF they charge a crazy premium. This is amazing. In a sane world this would finally destroy their reputations so we could replace them with a reasonable solution for evaluating researchers.


> The big news here is that there are journals in the Springer and Elsevier network that do not read the papers they publish whatsoever.

This is only news to people outside of academia. Anyone who has ever submitted a paper has gotten comments that can be attributed to the reviewer either being braindead or the reviewer not actually having read the paper. Since it's safe to assume said reviewer actually tied their shoes and put on their underwear the morning they reviewed the paper, it's also safe to assume it's the latter and not the former.


Exactly. Journals have sucked for a long time and practically all of academia just went along because publishing lots of stuff is the only way to have a career. The big news is that the rest of the public is starting to catch on to how their tax dollars (that were supposed to go to research) end up in publishers' pockets even though the publishers provide hardly any real value.


> Exactly. Journals have sucked for a long time

It's funny people are raising their pitchforks about the journals. I guess it's not widely known/understood that the journals aren't reviewing anything ever. So when I say reviewers, I'm talking about PhDs in academia who do this work as "community service". So if peer review is fundamental to the scientific method, then your tax dollars are being wasted, but the bulk of the waste is on worthless, lazy, BS spouting academics, not journal admins.

> and practically all of academia just went along because there were no other options.

Arxiv has been around for a long time now. Do you really think "the academy" couldn't have come to some consensus on how to allocate review duties for papers that are uploaded/stored/submitted there? The fact that they haven't strongly indicates that lazy-ass peer review, as orchestrated by the journals, is to the benefit of the academy.

Journal (and conference) reviewers are completely anonymous and there's absolutely no appeal process (the rebuttal phase is BS). Does that sound like a democratic/fair/auditable/meritocratic setup to you? Sounds like review by inscrutable tribunal to me. So if the process were open/community driven (i.e. democratic) then people wouldn't be able to hide from the shitty job they do.


> your tax dollars are being wasted but the bulk of waste is on worthless, lazy, BS spouting academics

Correct me if I'm wrong, but isn't peer review usually volunteer work? With the job of the (usually paid) editors to select reviewers with a track record of basic competence.

> Do you really think "the academy" couldn't have come to some consensus on how to allocate review duties for papers that are uploaded/stored/submitted there?

Actually, there are several proposals! One such system is https://openreview.net/about – it's not the first I've seen, but it's the most recent. (I first learned about these things from a fictional proposal in a Pokémon fanwork: the idea is far from niche.)

> Journal (and conference) reviewers are completely anonymous

To reduce the incentives for nepotistic anti-scientific behaviour. The system's flawed, sure, but it works how it does for a reason.


> with a track record of basic competence.

You need to ask yourself how that track record is constituted, i.e. what metrics decide basic competence. Hint: it's not anything that resembles an objective measure of ability or diligence or literally anything. Another hint: it's *extremely* common for PIs to pawn review duties off on grad students - I review papers for my advisor all the time and have been since my first year.

> To reduce the incentives for nepotistic anti-scientific behaviour.

This is the thin veil that everyone hides behind to cover up the greater harm of inscrutability^1; everyone knows what everyone else is working on in any field. At this point in my career (I'm towards the end of my PhD) I could tell you which group wrote the paper (if it's in my area) just by reading the abstract.

And if you're thinking to yourself, well, what about new entrants into the field? There's no such thing - you do not learn XYZ hyper-niche thing without being part of a long lineage of forebears doing XYZ thing. And if it does somehow happen, like a student is doing something completely different from their advisor and so they're not trying to publish in familiar venues, well guess what, they face looooong odds getting their work accepted (ask me how I know :)

^1 Like come to think of it, recently I've been reading that the conservative judges on SCOTUS think proceedings should be closed. And you could make some sort of similar nepotism case about that, or some other conflict of interest type thing (in favor of closing the proceedings) but we all recognize that fundamental to a free and fair process is that it's open and auditable.


> it's *extremely* common for PIs to pawn review duties off on grad students

Unless the reviews are submitted in the grad students' names, that sounds like academic misconduct. Can you “anonymously” report this without too much backlash? (I assume the answer is no: you clearly know it's wrong, and haven't done anything about it yet.)

> This is the thin veil that everyone hides behind

There are many places in society where we do this. The thin veil of politeness, lies that everyone knows are lies – like asking "How are you?" without actually caring how the other person is, and answering "Quite well, thank you." when you're neither okay nor thankful.

It makes no sense, but people would behave worse without the illusion. I know not whether the benefits of transparency would outweigh that.


> Can you “anonymously” report this without too much backlash? (I assume the answer is no: you clearly know it's wrong, and haven't done anything about it yet.)

This is so extremely common that I don't believe that there is anything to report. I reviewed dozens of papers under my PI's name throughout my entire PhD. Now that I have completed my degree, I never want to review another paper again.


Some institutions are more corrupt than others. Just because it's normalised, that doesn't mean it's normal. (Discussed https://academia.stackexchange.com/q/5662/84223 https://academia.stackexchange.com/a/65821/84223 https://academia.stackexchange.com/q/142359/84223 )

Consider another unethical practice: sexual harassment, in many industries, was once so extremely common that there was nothing to report. Everyone in a position to do something about it knew it happened, yet it continued. People reported it anyway. Tanked some of the reporters' careers, but the practice is way less accepted now.

You've completed your PhD, so you're at less risk of backlash. Are you yet in a position where you can blow the whistle on what happened to you?


People that don't know how the sausage is made say the most outlandish things don't they lol.


Thank you to you and mathisfun123 for saving me from doing a PhD.

If it's that dishonest, I don't want anything to do with it.

Thank you both for being honest.


i had a kid (~last week) ask me to edit his PhD proposal (I guess in the EU you have to already have a project in mind?) and I said I would happily edit it for him if he first let me try to convince him not to do a PhD.

my hypothesis on the value of a PhD: if you are the kind of person that can finish an honest PhD i.e., real work that's not necessarily novel (I don't give a flying fuck about "novelty") but actually requires you to stretch hard to achieve, then you don't need one and it will cost you years of productive/rewarding/lucrative work in industry (and 100% academia is not for you).

i don't know what the "on the other hand" is though - I really have no idea how anyone comes away from this process thinking "hmm yes I want more of this pile of bs" so I have no idea what kind of psychopaths go into academia (for which the PhD is indeed a prerequisite).


Long ago, I used to think that in a world where I had more money than I could possibly need, getting a PhD and becoming something like a history professor wouldn't be a bad life. Let's just say that I no longer believe that would have been the case.


> I assume the answer is no: you clearly know it's wrong, and haven't done anything about it yet.

You expect a first-year junior grad student (e.g., me before i was inured to it) to blow the whistle on their senior, well-respected, famous advisor? Is this a joke? You think change like this happens bottom-up from the indentured servant^1 class? lol.

> It makes no sense, but people would behave worse without the illusion.

wut. we've taken a large step off the sane road here. democratic/fair/equitable processes are open. full stop. if you want to debate me on this you're gonna have to whip out a whole lot of philosophy, history, data, etc.

^1 do you know how many phd students in the US are foreign nationals on student visas? if you had to pick between majority and minority which one would you pick? hint: you're wrong it's the other one.


I would expect to be able to raise such concerns myself, and not face much blowback for doing so; however, attitudes towards academic conduct vary around the world. There are even places where you can't report data fabrication without risking your entire academic career.

> democratic/fair/equitable processes are open. full stop.

Secret ballots. Double-blind medical trials. The sealing of juvenile court records.

It's not as clear cut as you make it out to be: what might make sense in one academic field (e.g. applied particle physics) might be total nonsense in another (e.g. partial differential equations).


> Secret ballots. Double-blind medical trials. The sealing of juvenile court records.

in the first and the third you're conflating the processes of the governing body with the experiences of the governed - the governed need to be protected from the governing body, not the other way around - that is the whole point of demanding that the governing body operate in the open. like bro politicians (as politicians) have no right to privacy, but we as individual citizens, governed by them do. how is it possible that i need to explain this to you?

the second isn't a democratic process of any kind.

> It's not as clear cut as you make it out to be: what might make sense in one academic field (e.g. applied particle physics) might be total nonsense in another (e.g. partial differential equations).

oh but it is and you'd have to be as obtuse as my reviewers to argue differently. like bro there isn't a scholar/thinker/activist on this subject anywhere in the world that agrees with you (politicians definitely agree with you though).


Anonymous reviewers do not necessarily mean anonymous reviews.

> like bro there isn't a scholar/thinker/activist on this subject anywhere in the world that agrees with you

Not a 'bro'; also https://clauswilke.com/blog/2014/10/12/in-defense-of-anonymo... https://doi.org/10.1080/01411920802044438 https://doi.org/10.1038/249601a0 . I pretty much agree exactly with Claus O. Wilke, and haven't read beyond the abstract of the other two, but these were all on the first page of the first web search query I tried.

It's not well-documented, but even OpenReview permits anonymous reviews: see https://docs.openreview.net/reports/conferences/openreview-n...

> After the notification deadline, accepted papers were made public and open for non-anonymous public commenting, along with their anonymous reviews, meta-reviews, and author responses.

Your (incorrect) perception of an overwhelming consensus probably isn't why you hold the position you do. I'd like to hear your actual justification, if you would be happy to share it.


You are blithely jumping back and forth between government and the peer review process/academic process. So let me put it to you like this: please show me a democratic country with an anonymous judiciary. EDIT: actually the US has the FISA courts which are effectively secret courts and those are widely understood to be a great harm and totalitarian (rather than democratic).

> Your (incorrect) perception of an overwhelming consensus probably

Did you know that the word consensus does not include the word unanimous? Just like there is a consensus on climate change despite some exceptional crank scientists, there is a consensus on democratic processes being open - again, i will emphasize that i never claimed there was a consensus about whether peer review should be open but about whether government should be (and my own claims about peer review are informed by that consensus).

> Not a 'bro';

One of my favorite things is when someone is either so pretentious or so clueless as to reject a (gender neutral) colloquialism.


> Did you know that the word consensus does not include the word unanimous?

Yes. Did you remember that you wrote this?

> oh but it is and you'd have to be as obtuse as my reviewers to argue differently. like bro there isn't a scholar/thinker/activist on this subject anywhere in the world that agrees with you (politicians definitely agree with you though).

If you would like to share the reasons for your opinion about academic peer review, feel free.

> One of my favorite things is when someone is either so pretentious or so clueless as to reject a (gender neutral) colloquialism.

Huh, so it is. https://languagelog.ldc.upenn.edu/nll/?p=48576 Mark me down for 'clueless'. I'd seen that with “girlie” and “guy” (perhaps coincidentally, both of which match historical uses of the words), but not “bro”. It's not listed in Wiktionary (https://en.wiktionary.org/wiki/bro#English), but it looks WT:ATTEST-able: do you have a definition that could go there?


Beyond this, some reviewers use their position to take advantage of the process to suppress, slow, or preempt work that overlaps or disagrees with their own. The journal editors are supposed to police this kind of thing, but they are similarly providing a community service, and the quality is often lacking.


> slow, or preempt work that overlaps or disagrees with their own.

No doubt - for the past two papers I've submitted, I can basically deanonymize my reviewers based on the other papers/projects they insist my work is inferior to.


I didn't talk about the peer review process (which is indeed not done by the journal and on top of that usually unpaid). I was talking about the journal taking all that money purely for editing and publishing - a service that has become next to worthless in the age of the internet.


> comments that can be attributed to the reviewer either being braindead or the reviewer not actually having read the paper. Since it's safe to assume said reviewer actually tied their shoes and put on their underwear the morning they reviewed the paper, it's also safe to assume it's the latter and not the former.

Neural necrosis is rarely uniform across the entire organ. Adjust assumptions accordingly.


> Since it's safe to assume said reviewer actually tied their shoes and put on their underwear the morning they reviewed the paper,

I don't know, that's pretty questionable, especially since the plague.


I will not expand on the circumstances, but at one point during my career I did review a paper while naked.


They don't even copy and paste the text! At least in my experience, once done with peer review I spend a few hours wrestling with the "print ready" LaTeX template provided by the publisher to get my paper and tables to render correctly. Often they'll have an out-of-date LaTeX compiler or something else in the pipeline, so there's actually quite a lot of unpaid labor involved in getting a paper into a PDF that a journal will host.

The main services they supply are basically document hosting and search functionality. Both at extortionate rates.


The main service they supply is a form of pseudo-price discovery, like h-indexes and more vaguely quantified reputation, needed because it's not a market economy.
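
The h-index itself, for reference, is just a mechanical calculation: the largest h such that h of your papers each have at least h citations. A rough sketch in Python (the citation counts below are made up purely for illustration):

    # h-index: largest h such that at least h papers have >= h citations each.
    # The citation counts are invented, purely for illustration.
    def h_index(citations):
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([25, 8, 5, 4, 3, 1, 0]))  # 4: four papers with >= 4 citations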


I admit this is not a part of my comment I expected to have refuted


It's actually quite common in regular journalism/copywriting as well. If you've got a few big names on your resume, there's an extremely high chance that whomever you send your work to for publishing simply isn't going to read it.

Of course, that isn't always the case, and it certainly isn't the case when writing for some of the larger publications, but it happens a lot. Once you establish a name for yourself, you can get away with a lot. For example, submitting your first draft even though it could probably be improved if you let it sit.

But the person you're sending it to doesn't know that. They need the story, they know your resume, and they don't want to sit there and go through your submission examining every single word.


The reasonable solution for evaluating researchers has always been peer review, not publication. The gold standard in science (going back generations) has always been peer replication, not publication. A white paper that has been cited 0 times and never replicated is as good as scientific fake news as far as I'm concerned.

We all saw this live with LK-99. Publication isn't the end, it's the beginning. Replication is literally all that matters.


Tbf, whilst the obviously AI-written parenthetical note in Resource Policy is extremely embarrassing, it looks like the vast majority of examples were garbage-tier versions of the open-access journals and conference papers HN insists can replace the big publishers...


It's way past due that the university system reevaluates their relationship with journals and the pressures to pump out papers.

Tying professors' and researchers' careers to the volume of papers they churn out is silly. The result is the antithesis of quality.

Public money drives most research, but then the results are thrown into an antiquated, pay-walled, for-profit journal system.

The entire peer review system is rife with issues, from low-level corruption (like the 'blind' reviewer asking for references in exchange for publication) to no one even reading the paper.

When some major technology like ChatGPT comes along it's usually a good time to get back to basics while strengthening the foundation through modernization.


I think maybe the problem with academics is that the fundamental paradigm we use to think about scientific contributions is broken. That is, the genius myth, the idea of the lone contributor with the eureka moment that hasn't occurred to anyone else, and so forth and so on. So to stand out, you have to start grasping at all these secondary indicators of skill or contribution or whatever, which leads to everyone doing it, which leads to this runaway process. If you have a bunch of people who don't differ that much, but are incentivized to look like they do, what do you think is going to happen?

It's like Goodhart's law applied to an entire swath of indicators for an entire sector of society.

It's interesting, but I think Goodhart's law sort of implies that the metric being gamed is being gamed because there isn't an alternative? Sometimes that might be because what's trying to be measured is difficult to measure, but one possibility is that it's difficult to measure because it doesn't really exist as it's assumed to be.

I'm not trying to say academics are idiots or something like that, if anything the opposite. People in general are just a lot smarter than (or at least don't differ as much as) our current dominant schema for scientific contributions recognizes.


Also there can be so much elitism, "old boys networks" and general arbitrariness under veneers of "scientific objectivity".


Be careful. As a generalist (non-scientific) writer, I've had a few pieces of text flagged as AI-written (sentences and paragraphs). It was only when I showed the editor that my text had not changed since I wrote the flagged parts, going back to 2014 and 2016, that they said, "hmm, a false positive".

This normally happened when I context-switched from dialog-driven storytelling to long blocks of narrative storytelling. So, people should take each problem as a unique problem and not a systemic problem rampant through the system. Also, AI check tools are different, with different thresholds.

To me it feels like AI is part boogeyman too, "What are we going to do with this?! It's clearly our Y2K-borg-Napster-assimilation-with-VHS-and-CDs moment."

I will close by saying, if you are not augmenting your life with AI, you're only hurting yourself. I can guarantee you that your competition is doing this (both as individuals and for their teams, using Midjourney, Grammarly, ChatGPT, etc). I should also qualify that I don't take the hardline ethical stance on AI things (clearly). Some people do. I don't know what to do with hardliners; it's damn hard to have a conversation these days.


That is something I have been wondering about LLMs. Do they keep track of large chunks of the text they were trained on? E.g., does ChatGPT know that "Four score and seven years ago" is from a speech given by Abraham Lincoln to commemorate the Battle of Gettysburg? Or, as some articles say, does ChatGPT just write "plausible" sentences?

In other words, the reverse ChatGPT that tries to identify papers written by AI: is it really sure that the text is plagiarized, or is it simply "plausibly" written by AI? That is, if something like this goes to court and the judge asks serious questions... will these AI detectors be laughed at?


Based on the long history of taking complete nonsense very seriously in forensics, I'd say the odds are pretty good that it'll be taken way too seriously, at least for other AI systems, but perhaps not specifically plagiarism detectors if a large chunk of the jury has personal experience fighting with overzealous anticheat crapware at school.


I think a lot of us would agree that using AI to help edit a paper isn't inherently wrong. It's more about how it's done. Having an article go through peer review and get published with things like "As an AI..." shows that people don't use it to enhance their work but to produce quantity over quality, and that the articles aren't really reviewed.


> It’s not the only case of a ChatGPT-assisted manuscript slipping into a peer-reviewed journal undeclared. Since April, Cabanac has flagged more than a dozen journal articles that contain the telltale ChatGPT phrases ‘Regenerate response’ or ‘As an AI language model, I …’

Not sure what is more astonishing here: the fact that copy-paste makes it into a peer-reviewed journal, or that the use of Ctrl+F is being called "detective work".
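
For what it's worth, the "detective work" boils down to something like the toy sketch below; the phrase list and the plain substring check are my own guesses at what such a screen might look like, not Cabanac's actual tooling:

    # Toy version of the Ctrl+F "detective work": flag any manuscript text
    # containing telltale chatbot boilerplate. The phrase list is illustrative.
    TELLTALE_PHRASES = [
        "regenerate response",
        "as an ai language model",
        "i am unable to generate",
    ]

    def flag_manuscript(text):
        lowered = text.lower()
        return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

    sample = ("Please note that as an AI language model, I am unable to "
              "generate specific tables or conduct tests.")
    print(flag_manuscript(sample))
    # ['as an ai language model', 'i am unable to generate']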


Ctrl+F has been an important part of detective work since shortly after Ctrl+F was invented by Larry Tesler.


If the technique works...

Excel has caught lots of financial crimes. No reason to hate on old text search.


It's both of em, you can't top that off.


Why is "Regenerate response" in the text? Clicking that button runs it again; it doesn't add to the text.


They tried to copy/paste by dragging, and in browsers, everything is selectable by default.


As if papers aren’t filled with filler BS anyway. This is kinda the perfect use case for LLMs. It’s not like the authors are using them for the actual results. It just helps when you need to turn:

“We can probably go deeper”

Into

“It is postulated that further inquiry may elucidate abstruse intricacies on the topic”


It's not like the authors are using them for the actual results indeed. After all, that would be dishonest!

From the article:

Cabanac noticed that some of the equations in the paper didn’t make sense, but the giveaway was above a table: ‘Please note that as an AI language model, I am unable to generate specific tables or conduct tests …’


I think there's a distinction to be made between case A, where someone uses AI to speed up and enhance the drafting process, and case B, where someone uses AI to publish complete BS.


It is also a godsend for non-native speakers. To sound eloquent in scientific papers in English isn't super hard but does require a level of English higher than what is required for most good science. Being able to translate jumbled, clunky, overburdened author-written sentences into a couple of clear and concise sentences serves both the writer and the reader.


I write a lot of documents for my consulting work - technical due diligence, analyses, that sort of stuff. The kind of stuff where some hallucination I forgot to double check could cause bad business decisions. Until now it didn't occur to me that I should maybe proactively declare that I did _not_ use LLMs whatsoever in any stage of working on these. Does anybody here do that kind of thing?

Declaring usage is one thing, but I think declaring non-usage might be the more valuable thing in the end.


I am a programmer/blogger. I refuse to use these "AI" things. (I haven't even accessed them once.)

I put "100% AI-free organic content" in the footer of my blog. [1] So people see that on every page.

I'd suggest something similar.

[1]: https://gavinhoward.com/


I use LLMs some for background information, parenthetical detail, etc. It's all stuff I know, double-check, and edit as I see fit. It does sometimes save some time but you raise an interesting point. If using LLMs becomes sort of a scarlet letter, I guess I'd just forgo the tool? I'm not going to lie and the benefits aren't really that great.


I don't want to come off wrong here: I totally don't mind if people use GenAI to produce text responsibly and carefully. I do find it fair to communicate it, however: as a reviewer, I'll look at something entirely written by a human and at something (even partially) generated by a stochastic model with different eyes. If I trust someone, for example, I know they wouldn't knowingly lie to me. If they used an LLM, they might do just that without even noticing.

Personally, I love writing - gets me in the flow. I'm weird that way. But I realise lots of people struggle in the early stages of drafting something. If they have a tool that helps them, that's cool. It's just easier to spot mistakes in something someone (or something) else has written than to avoid typing them manually, is all.


I think if you're going to require disclosure though, most people are either not going to use it or lie, because readers will expect the worst even if it was only used to smooth out some wording or stick in a boilerplate explanation of what a particular technology is for.


I wouldn't require disclosure; I think that's simply not realistic, for the reasons you describe, and also because people may just forget.

I was wondering if I should declare that I did _not_ use any text analysis or generation tools in analyses I produce for clients. I'd wager people declaring that are less likely to lie about it than those using such tools are to not mention it. And someone reading that might find it helpful knowing that they can go over it with less (obviously still a healthy amount of) scrutiny. Declaring it proactively vs requiring disclosure seems like the reverse of the effect opt-out vs opt-in has on user behaviour.

My answer was, by the way, yes. This particular document was in German, but roughly translated I wrote in the first section (where I clarified how I understood the assignment and what goals I set myself):

"Aside from automated spell checking, the analysis behind this document as well as the act of drafting and writing it did not involve - at any stage - tools for automated text analysis or text generation."

I found my peace with this now, but still curious if (and how!) others do this.


I think it's fine, assuming it's true, and doesn't hurt although the day may come when you realize you're at a disadvantage if everyone else is using LLMs.

Somewhat comparable is that in some contexts journalists and industry analysts, for example, make a point of disclosing any client relationships or investments with companies they're writing about. It's also common to disclose if you were given a product or trip you wrote about.


That's a nice analogy!

If I feel at a disadvantage (as in, other people produce better work with LLMs), I'll just adopt them too. But I'm really about quality over quantity, and for my kind of work that's not looking like it's becoming that much of a niche any time soon. For other types of work - entirely different story.


> But if researchers delete the boilerplate ChatGPT phrases, the more sophisticated chatbot’s fluent text is “almost impossible” to spot, says Hodgkinson.

Well here is an idea - check the actual paper contents, rather than text patterns. If you still can't tell if it's made up by a bot, it's either not a paper worth publishing or you're not much of a peer reviewer.


Hot take: The best thing about LLMs is that they’re going to expose all the areas of written work that were bullshit to begin with. Entire occupations and scientific disciplines will fall, one by one, initially claiming gen AI fraud, and eventually being forced to admit that there’s not any actual difference.


Pessimistic take: after the two stages you mentioned, in the third stage they will carry on BSing as usual (only more efficiently with LLMs now) and there will be no one to point out that the emperor is naked.


That's exactly the direction it's going in. As long as there is money in this, the charade will go on forever. It's not like no one knows that the majority of the "science" produced currently is done with the purpose of advancing in the education hierarchy, not with actually expanding human knowledge.


Spending your effort trying to advance human knowledge is, after all, a sure way to lose out and get fired. If you don't spend all your time chasing the measured goals, you won't match up to the people who do.


That isn't going to happen. People have been submitting computer-generated BS to journals for years. Nobody ever loses their job because nobody in academia cares, the media almost never report on it, and people continue to be fed a diet of pseudo-scientific research. There's no mechanism that will bring this to an end beyond the academic scientific institutions becoming completely discredited.

https://www.nature.com/articles/d41586-021-01436-7

"Nonsensical research papers generated by a computer program are still popping up in the scientific literature many years after the problem was first seen, a study has revealed"

They say "hundreds" but the actual scale of the problem is more like tens of thousands of identified papers and probably 10 or 100x more unidentified cases.

https://dbrech.irit.fr/pls/apex/f?p=9999:1::::::


What we need is an oracle machine LLM that spits out barely intelligible high-philosophy dialectic so we can have a go at proving that people should just say what they mean without inventing pseudo-languages that only make sense to them.


You can't win that way. Humans like Alan Sokal deliberately wrote nonsense in the 1990s that was mistaken for deep postmodernism, but it didn't really change anything.


Automated Humanities!


I suspect we have some of that already here on HN.


so strongly agree. Grant writing is another good example of this. So much nonsense writing that just wastes everyone's time.


SokalGPT


Of course there's always the risk that ChatGPT would actually write a paper worth reading.

Reminds me of that oddly prescient xkcd comic (https://xkcd.com/810/), pointing out the same issue for spam in online forums.


This is one of those "it depends" areas.

For example, if there is a specific advantage to using fluent English text, and the submitters are not fluent in English, then using ChatGPT seems like a fairly natural way to handle that. I'll bet that Google Translate is already being used for it.

Not sure that AI is quite up to actually falsifying the research (although I'll bet it will allow fakers to produce much more realistic fake data).


Yup. I'm not a native English speaker, thus I use ChatGPT (and before this, Quillbot or Grammarly) to rephrase my sentences to be clearer and more concise and have that 'professional' tone, apart from making them grammatically correct.


Even native English speakers could use it.

For example, I am constantly reading prose by extremely intelligent and well-educated people that is almost painful to read.

One of the things that ChatGPT is great for is generating eminently readable prose. In fact, I'm recommending to my team that we consider using it to generate some of our help dialogs.

In many cases, I wish more folks would use it.


Whatever definition of “falsifying research” you have, AI is up to it. The best illustration is the lawyer earlier this year who assumed ChatGPT was truthful and submitted nonexistent references, complete with fabricated text.


Fair point, but we'll see. A lot of folks will get away with it, but some folks will hose it up (like the lawyer, who may end up getting disbarred, I think).


I hope the incoming overabundance of GPT-generated papers, at least as a side effect, gives the scientific community a wake-up call.

If publishing becomes non-prestigious enough because anyone with GPT can do it, then at least those who do it for reasons other than "publish or perish" will be the ones remaining - given that good enough filtering exists.


The fix for that would be to invest serious amounts of money into replication - a problem known at least since the early 2010s [1].

The problem is, replication is expensive, thankless, and most importantly rewardless busywork, so it's mostly limited to cases where a successful replication itself promises value, as in the LK-99 "superconductor" craze.

[1] https://en.wikipedia.org/wiki/Replication_crisis


The article confuses several things by talking about authors using ChatGPT to help draft their papers.

I can use my friend to help draft a paper in several ways:

I can plagiarize from my friend's work, which is unethical.

Or, I can quote my friend with a citation. This is ethical.

Or, I can ask my friend to look over my paper and - without rewriting any part of it - make suggestions on how to improve it. This does not require a citation and is ethical.

Nature seems clueless about the different ways my friend (or ChatGPT) can help me draft a paper.


I can only speak for the fields with which I am familiar but in those fields, peer review is probably a negative indicator. Peer review says that the ideas of the paper are not interesting, meaningful, or true enough to stand on their own. That they require the imprimatur of legitimacy in order to be taken seriously.


I have been highly critical of using LLMs in places where they shouldn’t be used. With that being said, isn’t writing assistance the one thing we can all agree that they should be used for?

I do think that declaring AI assistance is a good thing, but even if an author does not do so, I don't see a huge issue here. Sure, you can have your LLM generate bogus data, but if you are willing to do this you probably were unethical before LLMs anyway.


I've seen reviews made with ChatGPT as well; they were dramatically bad, no way they were even proofread.


Scientific sleuths spot dishonest Spellcheck use in papers


Spellcheck does not assert information as factual unless the author of the text being checked made that assertion already.

The same does not hold true for GPT.


Apples and oranges: the authors in question obviously missed ChatGPT-generated nonsense ("Regenerate"); who's to say they didn't miss anything else? A spell checker helps you _not_ miss inconsequential mistakes. ChatGPT makes it easier to miss subtle yet consequential mistakes.


The examples here seem to go pretty far beyond spellchecking though.

I guess I understand the reasons for the disclosure requirement, but if you're really just using LLMs to help you more quickly write some background text that you could otherwise do yourself: 1) it's hard to see the harm beyond the tools that are in every word processor, and 2) I probably just wouldn't use the LLM at that point.


As a person using ChatGPT to write university assignments and essays myself, I wonder what the problem with that is. When I get an assignment to "write 3000 words about a subject" which could actually be summarized in 500 words, the whole introduction and the sections that are just there as fill-in can really be sped up with ChatGPT. Yes, you have to read it, check for plagiarism, and add references, but I cannot see it as bad if the meat is written by me.


Counterpoint: in the context of education, the goal of the assignment isn’t to produce the end product of a 3000-word essay, but to have you go through the associated process of formulating, organizing, and presenting your thoughts. LLMs give you the product, but not that process. Outside of school the product matters more, though, so learning to use LLMs as a tool seems like a worthwhile part of education as well, as long as you learn enough about writing to prompt and judge the quality of their outputs.


Yeah. A university paper assignment and a 1000 word article for the company blog seem like two totally different things even barring explicit policies. (Though I'd be surprised if a lot of universities didn't have policies by now.)


I understand that, and I go through the process. Even though we know that "the goal of the assignment isn’t to produce the end product of a 3000-word essay", it doesn't change the fact that I have to produce a document of 3000 words that could be written and summarized in 1000. IMO, with such tools, we should maybe question whether such "write an essay with X words" assignments still make sense. I'm sure the instructors, master's and doctoral students who review 3000-word texts would welcome such a change.



