> some people want to outright ban anything that "seems" it was written by a LLM
... The false positives won't be pretty. Especially for ESL students whose grasp of the language might make them sound a little too artificial.
> it's hard to know how much was written by GPT and how much was edited by the student
The smartest students won't even be suspected of using an AI model. They'll use the AI to do directed work, proofread and re-write as needed. Not really different than going through three revisions with different mentors for their college entrance essay...
As a recent graduate, I read plenty of my peers’ papers that were obviously written in an hour at 10pm, talking in circles with no substance. That’s the level of output these LLMs produce, and frankly I think all of them should be getting C’s.
> The smartest students won't even be suspected of using an AI model. They'll use the AI to do directed work, proofread and re-write as needed. Not really different than going through three revisions with different mentors for their college entrance essay
If that’s what they’re using it for, I’d argue they’re meeting the learning objectives just as much as someone who did those things in a study group with their friends. Of course, how acceptable that is depends on the context of the work.