I've had arguments about this and don't understand the worries. It depends on the specific use and the context, but most uses of LLMs are perfectly reasonable and legitimate, and quite comparable to other tools like spell checkers and CAS calculators. In exams you might not want to allow them, but in almost every other context they make sense, and everyone will be using them soon anyway.
There is one caveat: if the LLM counts as an author, then it needs proper attribution, in academia at least as a co-author. But so far I wouldn't say it is possible to create anything of quality solely with an LLM; in practice these models are used for partial contributions, much like style- and spell-checking software.
To put it another way, I see no reason why a writing instructor shouldn't base their assessments on the quality of the writing. Who wrote it, and how much of the authorship can be attributed to the LLM, is a separate question that doesn't really concern the writing process. Writers will soon use LLMs in all domains, just as they transitioned from the typewriter to the word processor. The models will be integrated into every major word processor anyway.
The old way was never perfect. Writing short essays and grading them by hand was never a facsimile of any real world task, either in academia or elsewhere. There was only a hunch or tradition -- and likely a weak one -- that the exercise had some useful correlation to real tasks, or that students could adapt from their training exercises to real-world exercises themselves.
And the teachers -- especially at the college level -- were never trained to teach. They learned it by trial and error, attrition, and the age-old process of mimesis. So there is no formal mechanism for training teachers, or for developing new teaching methods.
Now the teachers are utterly unprepared for this kind of technological revolution. They were barely prepared at all for straightforward plagiarism, and now this. They can reasonably anticipate that they will receive no training or support to learn how to adapt, while also being told it's their fault for not figuring it out. They're completely on their own.
All they will hear from the tech world is: You now have the wrong methods, adapt or die. Here in the tech world, we adapt to new technologies all the time, because the new stuff is really pretty easy to learn, like the old stuff was. In the case of teaching, "adapt" means change careers.
I also think writers will use them: not for the main creative work, but because LLMs make a lot of tedious exploratory work much easier, even if in the end you discard all the GPT output.