ChatGPT sends shockwaves across college campuses (thehill.com)
36 points by cwwc on March 19, 2023 | 55 comments


I focus 100% of my anti-cheating attention on encouraging students to not want to cheat.

It's multifaceted, but part of it is scaring them straight--you're going to go up against an interviewer, and not only do you have to convince them you know what you're talking about, you have to do it better than every other candidate who's interviewing.

And--hey--if you want to be $x0,000 in debt with no job prospects, there are faster and easier ways to do that. Why bother coming to class? Just buy an expensive sports car. At least you'll have a nice car to show for it until it gets repossessed.

Another part of it is lesson and project optimization. Overwhelmed students are more likely to cheat. So... is it possible to teach the same topic just as effectively while demanding less time and mental effort? And yes, it often is. Less can be more.

Yet another part is maintaining student engagement by being there for them. I'm lucky enough to teach at a university where class sizes rarely exceed 30, which means I can answer questions in their medium of choice very frequently throughout the day. I try to let them know I'm on their team, and we'll slay this dragon together.

Can ChatGPT solve their programming projects for them? Hell yes. And if not this year, very likely next year. And I know I won't be able to tell the difference between a ChatGPT solution and the solution of a capable student.

The only sensible option is to get them to not want to use it.

Edit: get them not to want to use it to cheat, in particular. It's a pretty powerful tool for figuring things out.


Great response. You sound like a wonderful teacher. Thank you


>Can ChatGPT solve their programming projects for them? Hell yes. And if not this year, very likely next year. And I know I won't be able to tell the difference between a ChatGPT solution and the solution of a capable student.

How about having the students store the solutions in a git repo and require 3-5 commits as they work through the problem?


That could probably be fixed with a simple prompt, no?


I don't follow.


I assume they're saying to prompt the AI to give the solution in 3-5 steps approaching the final solution.


It is pretty trivial to create a Git repo with five commits prestaged; if we can do it, so can GPT
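
For illustration, backdating five commits is a few lines of scripting. A minimal sketch in Python; the repo name, dates, and commit messages are all made up, and it assumes git is installed and configured:

  import datetime
  import os
  import pathlib
  import subprocess

  repo = pathlib.Path("fake-homework")
  repo.mkdir()
  subprocess.run(["git", "init"], cwd=repo, check=True)

  start = datetime.datetime(2023, 3, 13, 14, 0)
  for i in range(5):
      # Each commit needs an actual change to record.
      (repo / "solution.py").write_text(f"# draft {i + 1}\n")
      subprocess.run(["git", "add", "-A"], cwd=repo, check=True)
      # Git reads both timestamps from the environment, so the
      # history ends up looking like several days of honest effort.
      stamp = (start + datetime.timedelta(days=i)).isoformat()
      env = dict(os.environ,
                 GIT_AUTHOR_DATE=stamp, GIT_COMMITTER_DATE=stamp)
      subprocess.run(["git", "commit", "-m", f"progress, day {i + 1}"],
                     cwd=repo, check=True, env=env)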


I’ve learned to embrace the technology as a prof. Even though in its current state it can be spotty, it’s a given it’s going to continuously improve.

AI “detectors” are not going to work in the long run. Students can easily use tech to rewrite.

Graded work will need to change. I believe a shift towards oral defense of a research topic is going to be needed, or in-person testing, or hands-on assignments where permitted.

Is the graded essay dead? Possibly.


> I believe a shift towards oral defense of a research topic is going to be needed

It is done at the professional research level (the thesis defense in postgraduate studies), but doing it at that scale is much easier than making it the default at all levels. It would require far more resources than education systems currently have available.


Hey hey, that's why I am working on using the tech to help teachers with https://tadabot.email

If you're willing, I'd love to chat with an educator that has used ChatGPT -> matt@recurai.com


Is it not feasible to simply raise the bar on what is expected in a graded essay? Or is the human/chatgpt combo no better than just chatgpt?


The oral defense of a topic is a romantic academic ideal that is overdue for a comeback.


That would cause big problems with students on the spectrum or suffering from ADHD or social anxiety, which in our field is... a lot of them. I am a lot less anxious now in my 40s than I was in my early 20s, but a public talk is enough to make me lose sleep for a week, despite having done it dozens of times. In my 20s it would have been an impossible task.


What doesn't kill you makes you stronger. I don't like public speaking and I have ADHD. I still wish I had to do this more because it makes me stronger. The hard truth is that employers don't care about your anxiety.


As a university student, I’m going to play the other side.

I’ve found it incredibly useful as a bespoke tutor. The professor explains something complicated, or something you don’t know? Ask it mid-lecture for a simpler explanation. I’ve used it to analyze papers, explain concepts, or grade my own work. It’s a fantastic learning tool.


This seems like the right answer and a healthy perspective. Clearly the controversy is around graded work though…


Yes, but let’s not forget that the goal should be learning, not to produce gradeable papers.


I suppose this is just more of a case of ease of use.

Personally I think the only realistic, surefire testing method would be a student sitting down with a professor and just discussing the material. Q&A, "show me what you know", or even just discussion would work... heck, the student might learn more... maybe that's even a path to a better place?

That's assuming we actually want to know / care if the student is learning anything.


I think you’re underestimating ChatGPT’s utility for students. It’s not just a “do the work for me” tool. It’s a “hey, how can I improve this paragraph?”, “why is this true?”, “who first created X?”, “if you were a mean TA, what would you say about this?” tool.

It wouldn’t surprise me if C students suddenly looked like A students. They’re getting proper and timely feedback for the first time.


I wonder how much "make my paragraph better" there is when "do it for me" is just as easy...


But is this cheating or real tutoring/learning help? To me it looks more like the latter, so with ChatGPT everybody actually can win. But yeah, it depends on how the exam is organized.


Someone who is that involved and desiring to learn will get an A even without help, and someone who is not will not care enough beyond "do my homework for me".


> “why is this true?”

It literally tells you in the manual, including for GPT-4, not to use it like this, because it will confidently and very convincingly give you false information.

I think expectations and reality aren't lining up with this tool; it's absurd.

The manual basically tells you not to really use it for anything outside of experimentation.

This thing isn't "Data" from Star Trek.


> The manual basically tells you not to really use it for anything outside of experimentation.

It is probably a smart move to say GPT-4 is not suitable for anything. People are already letting GPT-4 write things that have legal consequences, and I think this is the best defense when faced with a lawsuit:

We told you not to use GPT-4 to write your job offer letters and job ads.


It’s very good, however. In the context of academic writing, it’s easy to verify when it makes things up. It is also less prone to fabrication on technical topics compared to “write a biography for [generic name]”.


How would you verify if you don't know the subject matter in the first place?


Problem is, this can’t scale. Private tuition, yeah, but there are huge classes in college environments that would require gobs of proctors. Students wishing to learn will learn. Those wishing for scores will get scores. The trick will be detecting the devalued grades.


Maybe employers will have to stop freeriding on the degree signal and instead put actual effort into hiring. That would not be a great tragedy in my opinion.


> stop freeriding on the degree signal and instead put actual effort into hiring.

90% of why employers do that is that we functionally outlawed intelligence testing of job applicants, so they instead outsource it to an extremely expensive alternative.


> 90% of why employers do that is that we functionally outlawed intelligence testing of job applicants

No, we didn’t. In fact, the ruling that gets misrepresented that way has exactly as much effect on degree requirements lacking evidence of job-performance relevance (insofar as degree attainment is also racially disparate) as it does on IQ tests.

But, “IQ tests are functionally illegal and degree requirements with no evidentiary basis are fine” is management/HR cargo cult knowledge, which is the real issue.


How is it functionally outlawed? Most places do coding/system design/behavioral interviews to evaluate "merit"/intelligence/ability.


"Functionally" is doing all the work there, and it isn't accurate. Do a study showing it's related to job performance and you can go hog wild.


I dunno, I took an IQ test for a job application in 2016.


Things gotta change sometimes. Can't pretend work and testing are all genuine, and can't pretend we have a clue when they aren't.

Maybe it doesn't scale with the current structure, but maybe that needs to change. Granted, I don't pretend that will be easy.


It can scale if it’s a language model that does the testing

That is, if testing people for these skills remains useful


WTF are they paying $50k/year in tuition for? Maybe put some of that towards proctors.


ChatGPT becomes the test-taking interface. Go to a proctored environment and get quizzed by TA-GPT in a Socratic test about the subject. Humans review the scores, and that's your test result.
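
Hand-waving the proctoring, the quizzing loop itself is straightforward to sketch against the chat API. A minimal sketch using the pre-1.0 openai Python client; "TA-GPT" here is just a system prompt, and the model choice, prompt wording, and round count are all assumptions:

  import openai  # pip install "openai<1.0"

  openai.api_key = "sk-..."  # placeholder

  messages = [{
      "role": "system",
      "content": "You are TA-GPT. Quiz the student Socratically on "
                 "this week's topic. Ask one question at a time, "
                 "probe weak answers, and never reveal the answer "
                 "yourself.",
  }]

  for _ in range(5):  # five question-and-answer rounds
      reply = openai.ChatCompletion.create(
          model="gpt-3.5-turbo", messages=messages)
      question = reply.choices[0].message.content
      messages.append({"role": "assistant", "content": question})
      answer = input(question + "\n> ")
      messages.append({"role": "user", "content": answer})

  # A human (or a separate rubric prompt) then reviews the
  # transcript and assigns the score.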


> At the start of the 2022-23 academic year, few professors had heard of it.

Few if any, given ChatGPT was announced at the end of November...


ChatGPT is like the pocket calculator. It was "cheating" when it first showed up, but is regardless here to stay. You can either go to great lengths trying to ensure that students don't use it at home or in the classroom, or you can just hand one to them and go "now do your best".

These students are going to be entering a workplace where language models and all other iterations of AI will be prevalent and expected to be used in day to day work. There is zero reason students shouldn't also get familiar with the technology instead of pretending it doesn't exist.


Not using a calculator was a life hack that gave me an advantage over others. Might be the same effect.


My 7th grade algebra teacher many decades ago allowed calculators in the classroom, including during exams. His reason, "A calculator will only help you get the wrong answer faster."

ChatGPT still spits out stuff that doesn't work, generates code that requires libraries that don't exist, and more. Used well, it can be a force multiplier, like the aforementioned calculator. Used without understanding, it will only help you make a mess faster. "I shaved 50 yaks instead of 1."


Learning how to prompt engineer for real deliverables is a trade that students will actually need.

Many of us with no real incentive to stay relevant won't do much more than read a headline or have a funny conversation with ChatGPT.

But these students have to play the cat-and-mouse game from day one: saving time where possible, actually learning at their own discretion, and not getting exposed or expelled.


AI will quickly obviate the need for prompt engineering, since it can just mimic the best prompt engineer in the world and interrogate the user about what they really want.


Which is what Midjourney already does, for example.


You can use it to do the assignments for you or you can use it to help you understand what you're solving. In my computer science exams I had to write correct code with paper and pencil. ChatGPT won't be able to help you in the exam, will it?


Imagine paying $50-100k for a piece of paper that you generated with ChatGPT. Talk about the ultimate NFT.


While it might be good for a non-technical essay, for technical matters it has a bad habit of spewing nonsense, both in answers and in citations. Professor Alex Wellerstein (of Nukemap fame) gives two anecdotes highlighting its issues.

1. https://old.reddit.com/r/AskHistorians/comments/11u21ie/the_...

  An anecdote, but I recently was asked to review the essay of a student who I had not taught. I became highly suspicious it had been generated by ChatGPT, because it had the "feel" of its output. The clincher was that it had an entire page of references... all of which were fake. They all looked plausible, and even had URLs. But not one of them was accurate, all of the URLs were dead, and all investigation made it clear the references had never existed. I was somewhat amazed, both at the gall of a chatbot inventing fake references, and at the student who clearly did not click on even one of the generated links, yet had still asked for an essay re-grade!!
2. https://old.reddit.com/r/AskHistorians/comments/11u21ie/the_...

  One experiment I ran with it recently was to ask it about the RIPPLE, which is a nuclear weapon design that was tested in the 1960s. The details of the RIPPLE are not public, but the fact of its existence, who invented it, and its testing are, as well as some very broad pieces of information about it. Anyway, I repeatedly asked ChatGPT how the RIPPLE worked, and why it was called the RIPPLE, and every time it gave me a totally new and contradictory answer, freely making it up each time. After giving me maybe 6 different answers in a row it then noticed it was giving me contradictions, and from that point onward claimed that the most recent answer was correct. I was impressed at how inconsistent it was, that you could just ask it the same thing over and over again and it would just make new things up each time. The only consistency it gave me was wrong: it repeatedly emphasized that the design was entirely hypothetical and never tested, which is false (it was tested at least four times).

  In a separate exchange, I asked it to ask me a question, and when (for whatever reason) I told it I was interested in nuclear weapons, it began to lecture me on how this was a topic that should be left to experts. I then told it I was an expert, and it then started lecturing me on how an expert on this topic ought to behave and think. It almost seemed defensive. I thought it was pretty rich — an impressive mansplaining simulator, indeed.
3. The full discussion outlined in (2): https://old.reddit.com/r/nuclearweapons/comments/117hssn/cha...
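
The dead-URL tell in the first anecdote is mechanically checkable; a minimal sketch, assuming the bibliography's URLs have already been pulled out into a list (the example URL is made up):

  import urllib.error
  import urllib.request

  urls = [
      "https://example.com/plausible-looking-citation",  # made up
  ]

  for url in urls:
      try:
          # A HEAD request is enough to see whether the page exists
          # at all (some servers reject HEAD; fall back to GET).
          req = urllib.request.Request(url, method="HEAD")
          with urllib.request.urlopen(req, timeout=10) as resp:
              print(f"OK    {resp.status}  {url}")
      except (urllib.error.URLError, ValueError) as err:
          print(f"DEAD  {url}  ({err})")

Not one of the fabricated citations in the anecdote would have survived even this.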


I like the characterization of ChatGPT in this comment by righthandofdog in the second link:

> Mansplaining as a service is the best description of GPT.

> The reason it CAN be right about more general info is because people trained it away from lies. No one has trained out the lies on more rarified knowledge, so it makes shit up.

> Trusting its answers to be correct is flatly stupid when it is literally designed to make shit up that sounds good instead of saying "I don't know".


And the reality is “I don’t know” is a valid response! But it isn’t satisfying; I remember five-year-old me would get very upset with my mom if she said something like that, unless it was followed up with “but let’s go look it up together.” Unfortunately for OpenAI, that isn’t the kind of response their customers probably want to see; they would likely get annoyed and go back to Google or DDG or something else.


I asked GPT to summarize a Wikipedia page on the Willow pipeline project in Alaska and it summarized something completely different, so this is believable.


> that you could just ask it the same thing over and over again and it would just make new things up each time

Why is that described as bad or surprising? It doesn't know, and so these various answers seem about equally likely to it.


I always think this is a bit like some kind of interrogation where the subject is trying to give the desired answer. If you are compelled to give an answer - any answer - then that’s what you’re gonna do.


I suspect it’s the fact that it’s mostly all over the place; it doesn’t narrow toward the correct answer. Here’s his full test: https://old.reddit.com/r/nuclearweapons/comments/117hssn/cha...


GPT-4 should be much better, though.


We’ll probably start to see a more complete picture of how it performs in six months or so. My hunch is that it will still struggle with citation generation.



