Actually, anytime somebody finds a way to get students to engage and think differently instead of just memorising facts, it is a win, IMO. I think it is lamentable that students are asked to rely more and more on recalling facts at a time when that is becoming less and less important compared to critical thinking. This is a lazy way to do something about structural problems in the education system. I can't even call it "solving", because it is not really solving any problems, just producing poorly educated students.
One of my programming interview questions is a flawed piece of code implementing an algorithm. The algorithm is there in its entirety, so there is no need to actually recall its details. The task is to point out and fix the functional flaws, then critique and fix the non-functional problems.
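To give a flavour of the format, here is a toy example in the same spirit (hypothetical, not my actual question; in the real thing the flaws are of course not flagged in comments):

    def binary_search(items, target):
        """Return the index of target in a sorted list, or -1 if absent."""
        lo, hi = 0, len(items)      # half-open interval [lo, hi)
        while lo < hi:
            mid = (lo + hi) // 2
            if items[mid] == target:
                return mid
            elif items[mid] < target:
                lo = mid            # functional flaw: should be mid + 1, otherwise the loop can stall
            else:
                hi = mid
        return -1                   # non-functional talking points: sentinel return vs. None/exception,
                                    # the undocumented "items must be sorted" precondition, naming, tests...

The functional part is noticing that the loop can stall (searching for 4 in [1, 3] never terminates); the non-functional part is everything worth critiquing around it.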
I find this gives me a much better appreciation of the candidate than a standard hit-or-miss test, where the candidate is asked to implement an algorithm and may or may not have the luck to know, or come up with, an answer.
It is also practically impossible to prepare for or cheat on, even if you somehow learn from recruiters what to expect (they do ask people, and they do prep candidates...)
A long time ago I had people cheat by using outside materials in remote interviews (which I was doing way before Covid). Now I invite them to use external materials or even ask questions ("This task does not require anything outside the standard library. I don't care if you remember it; if you can describe what the function does, I will gladly tell you its name and describe exactly how it works.")
> Actually, anytime somebody finds a way to get students to engage and think differently instead of just memorising facts, it is a win, IMO. I think it is lamentable that students are asked to rely more and more on recalling facts at a time when that is becoming less and less important compared to critical thinking.
That's a false dichotomy (which I used to believe, too). You can't "think critically" in a vacuum. You need memorized facts to do it (even if you plan to do more research to find out more facts).
I think it's important to spend time focused on memorization, but education can't stop there, either.
The problem with memorizing facts is that you forget them readily unless you're using them all the time. It is much more efficient to learn processes, because these are generic enough that following them quickly becomes an unconscious habit. Declarative memory is also less reliable than procedural memory.
No, it's not a false dichotomy. Yes, you need to build knowledge via a body of memorized facts, but "the education system" currently hyperindexes on rote memorization, which only works for some types of learners, not all.
This assignment requires applied critical thinking and the use of multiple tools (yes, including memorized knowledge) to evaluate a paper. It will also help the student retain more knowledge, perhaps more than rote memorization, where you just "discard" the knowledge after your exam (by not interacting with it or keeping up with it).
> "the education system" currently hyperindexes on rote memorization
Which education system, and in what subjects? I recall quite the opposite (rando public schools in the US). In hindsight, I would have benefitted a lot from simply memorizing more things.
My freshman calc grade would've been a lot higher if I had bothered to memorize trig identities in HS. Sure, I could derive them, but I had to notice "this is an identity" first - the lack of memorization actually hindered my intuition.
> No, it's not a false dichotomy. Yes, you need to build knowledge via a body of memorized facts, but "the education system" currently hyperindexes on rote memorization, which only works for some types of learners, not all.
Honestly, that's not how I recall it. Maybe it's changed, but I internalized "memorization is a waste of time," and I'm pretty sure I picked that up at a young age from teachers (though I could have misunderstood/overgeneralized).
Think about it this way -- up until recently lots of people were getting by doing menial jobs. Now more and more people are getting higher education and are doing jobs that require critical thinking.
So people with less-than-average intelligence, who in the past would have been fine doing simple mechanical jobs, are now forced into office jobs just to get by.
Not only that, but you can no longer rely on learning your job once and doing it all your life. What matters more is being flexible and able to do things you have not done before -- and that puts more pressure on your ability to think critically.
Yes, you need to know something about everything and it is valuable to know a lot about something. But nowadays this "a lot about something" is very specialised and you will learn it doing your job.
Schools are just lazy, IMO. Teachers and students are forced into a standardised process which makes administration easier and makes it easier to navigate racism and discrimination issues, but is very poor at allowing students any free thinking or real understanding.
As usual, the brightest people are going to be fine and the worst are going to fail anyway. The issue is with the masses in the middle who need this bit of help.
My initial motivation was simply the low amount of information I was getting about the candidate. The point of an interview is to get to know each other -- to figure out whether I would like to work with the person.
But an interview is a very small amount of time to figure this out, and there is very little information in watching the candidate stare at the screen until they come up with an answer. On the other hand, I can't just have a chit-chat with the person -- there are people who are good at talking but can't do shit. I resolved that I will not hire a person who can't actually demonstrate they can do things, but it is on me to make that work.
So I pretty much dropped questions that only probe knowledge and decided to focus exclusively on questions/tasks that require more mutual engagement. Like development tasks that require the candidate to work with me to produce a solution. Or open ended technical questions that prompt discussion, etc.
> One of my programming interview questions is a flawed piece of code implementing an algorithm. The algorithm is there in its entirety, so there is no need to actually recall its details. The task is to point out and fix the functional flaws, then critique and fix the non-functional problems.
That's similar to what I plan to do if I ever find myself having to interview job candidates.
My plan is to have a selection of LeetCode problems that I have solved before and ask the candidate to pick two or three that seem interesting. I will NOT ask them to actually solve them or code solutions.
We will just talk them over to come to an agreement on what the problem is asking for, talk about what edge cases we might have to worry about, and then toss out suggestions on how to approach it. A whiteboard would be available. I'd try to let the candidate take the lead in the discussion, just contributing myself enough to keep things moving if the candidate seems stuck on something.
Once we've got a solution (regardless of how much of it came from the candidate and how much came from me moving things along), we'd go to the discussion section for that problem and take a look at the solutions people have posted, and I'd ask the candidate to check a few of those for correctness and to suggest improvements.
A lot of the solutions people post in the discussion are wrong, ranging from simple coding goofs (like not initializing a variable or using the wrong variable), through missing an edge case, all the way to the algorithm just being flat-out wrong.
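To give a (made-up) flavour of the kind of goof I mean, something as small as this shows up surprisingly often:

    def max_value(nums):
        best = 0                    # goof: should start from nums[0] (or negative infinity)
        for n in nums:
            if n > best:
                best = n
        return best

    print(max_value([-5, -2, -9]))  # returns 0 instead of -2: the all-negative edge case is missed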
I had questions in my repertoire that came from specific situations I had encountered at work. Things like "Your application fails with the message 'There is no space left on the device', but when you check the filesystem you find there is a lot of space available. What could have caused the problem?" Or a static initialiser block in Java that was executed multiple times when the application started, causing a problem.
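(For the curious: a common cause of the first one is running out of inodes rather than bytes. A rough way to see the distinction -- the mount point here is made up:)

    import os

    st = os.statvfs("/var/data")              # hypothetical mount point
    free_bytes = st.f_bavail * st.f_frsize    # space available to unprivileged users
    free_inodes = st.f_favail                 # inodes available to unprivileged users

    print(f"free space: {free_bytes / 1e9:.1f} GB, free inodes: {free_inodes}")
    if free_inodes == 0:
        # "No space left on device" despite free gigabytes usually means this
        print("out of inodes -- often millions of tiny files piling up somewhere")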
I found I was just comparing somebody else's experience to my own, which is wrong. I should be trying to figure out whether the person is able to do the job.
So I stopped asking those questions; instead, I select topics that every developer should know and construct specific situations where a person with knowledge of the given topic should immediately recognise what is happening.
I'm trying something similar with an introductory Algorithms class.
After we go through Breadth First Search, there's a practical assignment where students are asked to modify the algorithm to return _all_ shortest paths. Then I ask ChatGPT for its solution, and students try to spot its mistakes.
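For anyone who wants to try the exercise themselves, here is a sketch of one workable approach among several (I'm not claiming it's the "official" solution): record every predecessor that reaches a node at its shortest distance, then enumerate paths by walking backwards.

    from collections import deque

    def all_shortest_paths(graph, source, target):
        """All shortest source->target paths in an unweighted {node: [neighbours]} graph."""
        dist = {source: 0}
        parents = {source: []}                  # every predecessor on some shortest path
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in graph.get(u, []):
                if v not in dist:               # first time v is reached
                    dist[v] = dist[u] + 1
                    parents[v] = [u]
                    queue.append(v)
                elif dist[v] == dist[u] + 1:    # another equally short route into v
                    parents[v].append(u)

        if target not in dist:
            return []

        paths = []
        def backtrack(node, suffix):            # walk the predecessor lists back to the source
            if node == source:
                paths.append([source] + suffix)
                return
            for p in parents[node]:
                backtrack(p, [node] + suffix)

        backtrack(target, [])
        return paths

    g = {"s": ["a", "b"], "a": ["t"], "b": ["t"], "t": []}
    print(all_shortest_paths(g, "s", "t"))      # [['s', 'a', 't'], ['s', 'b', 't']]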
Later, after going through the proof of correctness of Dijkstra's algorithm, I ask ChatGPT for a proof of correctness of its all-shortest-paths algorithm, and again students try to spot what's wrong in the proof. I want students to learn to tell the difference between a bullshit proof and a real proof; in the past I've given them bullshit proofs from real students in exams, but ChatGPT makes the point more nicely.
Finally, students are asked to figure out prompts that will make ChatGPT give a correct algorithm and proof. I haven't managed this myself! I'm looking forward to seeing what students manage.
> Finally, students are asked to figure out prompts that will make ChatGPT give a correct algorithm and proof. I haven't managed this myself! I'm looking forward to seeing what students manage.
Isn't there a probabilistic nature to ChatGPT's replies? So even if a student finds a prompt that yields a correct proof, that doesn't mean it'll work every time. Or am I wrong here?
You're right, ChatGPT is probabilistic. None of this is graded by the way -- it's all just for fun and bragging rights.
I've asked students to share their full dialog, both prompts and replies, so the whole class gets to see; and I'll invite one or two to talk through their attempts. This is all just a trick to make students engage with "how do you spot bugs in a proof?", hopefully more than they would from just reading CLRS! Often, students engage well when they're hearing the material from other students.
Aside from using a temperature of 0, which always yields the same completion, adding detail and worked examples to the prompt (a.k.a. in-context learning, few-shot prompting) can force very reliable results -- say, 8 times out of 10 -- meaning a sample-and-vote approach gives consistent answers even when the temperature is non-zero.
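Concretely, by sample-and-vote I mean something like the following, where `complete` is just a placeholder for whatever model call you actually use (the name and signature are made up):

    from collections import Counter

    def complete(prompt: str, temperature: float) -> str:
        """Placeholder for a call to your LLM API of choice."""
        raise NotImplementedError

    def sample_and_vote(prompt: str, n: int = 10, temperature: float = 0.7) -> str:
        """Sample the model n times and return the most common answer.
        If ~8/10 samples agree, the majority answer is stable even at non-zero temperature."""
        answers = [complete(prompt, temperature).strip() for _ in range(n)]
        return Counter(answers).most_common(1)[0][0]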
Edit: I was not in any way rude nor saying anything incorrect.
If you want to see how to do what I’m talking about, here’s an almost finished article describing the above:
I tried... I pointed out a problem and asked ChatGPT to fix it, unsuccessfully. I asked it for a proof of correctness, then pointed out a problem in its proof and asked ChatGPT to fix it, again unsuccessfully. (It's all in the notes I linked to.) Perhaps I'm just crummy at prompt engineering; or perhaps this is one of those questions where the only way to engineer a successful prompt is to know the answer yourself beforehand.
I've also had this issue multiple times where ChatGPT provides a flawed answer, is able to identify the flaw when asked but "corrects" it in such a way that the original answer is not changed. I've tried this for code it wrote, for comments on my code and for summaries of texts that I provided.
I can’t tell if people just don’t understand how ChatGPT works or if there is another reason they are eager to dehumanize themselves and the rest of us along with them.
I am aware no learning is going on live during the discussion with ChatGPT, nor are the mechanisms that lead to the similar outcome even remotely similar.
I also don't think humans are less human just because machines started making mistakes similar to human ones.
But I do see this similarity as a reminder that machines are becoming more human in an accelerating way.
> in the past I've given them bullshit proofs from real students in exams
I'd be so honored to be one of these proofs. "Your wrongness is an elegant balance of instructive error and subtle misunderstanding. Can I save it for posterity? (c-)"
Beyond the adapting-to-AI angle, the idea is compelling in its own right, I'd say.
Anyone who's gone on to mark/grade or write assessments would probably attest to the perspective it provides.
Whatever your opinion of assessment in academia/schooling (I, for one, am generally negative), there is something to seeing the difference between someone who actually understands and someone who is spamming to get the best grade they can. There's also, maybe less controversially, great value in actually teaching a topic and then assessing your students to see how much they've understood. It all really clarifies a topic and prompts you to revisit and reevaluate it in edifying ways.
Taking all of that and using it as a form of learning, and even assessment, makes a lot of sense to me: the aim of the education process shifts towards, you could say, making some sort of new teacher rather than an assessment "hacker".
My personal bias here is that I always instinctively approached learning this way, not feeling that I've learnt something properly unless I felt I could teach it to someone else.
> not feeling that I've learnt something properly unless I felt I could teach it to someone else.
That's always been my gold-standard as well.
In presentations, I should be able to ask an intelligent question. If I can't do that, either I wasn't engaged enough or the material wasn't accessible.
In my own work, I should be able to teach someone else, or at least write a tutorial. I find that writing down an explanation forces me to research the boundaries of correctness, not just find a happy path down the middle.
In teaching, I should be able to know my student and what they're already familiar with, and draw accurate and useful analogies between that and the new material. I should be able to anticipate how analogies might be misinterpreted or overfit, and couch them in caveats as needed.
Ideally, ultimately, my student should be able to meet the same standard. But this will require work on their part and is not my sole responsibility.
I'm an active interviewer for a FAANG company. I've run some of my interview questions through ChatGPT, with mixed results. I'd say that in its current form it could easily help a junior candidate pass the interview, but not so much a mid-level or senior one (yet).
In an internal Slack channel I proposed generating (incorrect) solutions for coding problems with ChatGPT and having candidates do code reviews as an alternative interview format. I was almost chastised for proposing this. Someone suggested that, if I wanted to go down that path, I could manually write a wrong solution so that it is curated. I think he is missing the point.
Curating it would give a level playing field to candidates and allow you to decide what you are actually testing for, but maybe I'm also missing the point
OP might be implying that from now on many engineers will use AI tools to write the first draft of their code so it’s smart to hire people skilled at correcting AI-generated code.
It's also good to hire people who are skilled at correcting bad code in general. This is also a good litmus(?) test to see the interesting ways people arrive at solutions to others' faults.
(Actual ChatGPT response when asked to convert TS to JS)
That saved me a good five minutes with minimal disruption to my flow. Those little things add up to a significant amount of time and energy. I have two young children, ages 1 and 4, and these tools make it possible for me to program in little 10-minute bursts between basically endless cooking and cleaning chores.
I actually take offense to the notion that you would consider people who find these tools useful to be incapable engineers.
"I actually take offense to the notion that you would consider people who find these tools useful to be incapable engineers."
I apologize for offending you. I can't imagine parsing code for errors being faster than typing it out. I suppose I assumed most have boilerplate memorized. I'll try to avoid being so presumptuous in the future.
There are two ways to check if something is correct: run it or read it. In this case I just copied and pasted the result and it worked as expected. YMMV.
I will say that you seem to be operating under the assumption that these tools are less reliable at producing the correct code than they really are!
Ironically, the latest GPT-3 models were trained before OpenAI made changes to their API, resulting in basically useless assistance for things like the openai npm library!
I must have expressed myself incorrectly. Generating coding problems automatically with ChatGPT doesn't mean you just throw whatever at the candidates. You can both auto-generate AND curate. The part I'm against is manually creating problems.
There are several reasons for this. One of the biggest problems with FAANG interviewing is that questions get leaked almost instantly. I spent days creating a brand new original problem, and it got leaked by the second time I asked it. ChatGPT gives you a unique problem for each candidate.
There is also the angle of leaning into the tech. I would encourage candidates to use any means to review the code as they would in the workplace, including IDE, web search, and LLMs. If these tools are going to revolutionize the way we work, I want to see how well people can use them.
Lastly I'm also intrigued by the dynamics of this kind of interview. As an interviewer I have asked the same question 50 times and I have seen all variations of answers. But what if the problem is relatively new to me? This seems closer to a pair programming exercise with two colleagues working together to solve a problem.
Should you ask very similar questions, to keep candidates on a level playing field by always measuring to the same standard?
Or should you vary the questions a lot, to keep candidates on a level playing field by not advantaging those who've seen leaked questions on glassdoor?
One of my worries regarding AI is that we will decide to deskill ourselves instead of using the extra productivity to improve ourselves. In this case, students may end up with a diminished ability to write their own thoughts and in return get fill-in-the-blanks exercises.
I believe the societal-level risks are the reverse. Because so many people will use AI to produce content, we will be overwhelmed with even more shady content and mid-quality fake news.
Reading comprehension, critical thinking and source verification will be much harder than today. This is not filling in blanks. This encourages students to verify and analyze and that is great! It also gives them a framework to understand the limitations of AI so they can more effectively use it.
Your scenario is not a contradiction. We could end up with a lower ability to express ourselves, combined with a flood of generic AI content affecting reading comprehension, combined with lower-quality AIs (because they end up being trained on generic AI content), combined with style getting stuck in a certain form and never evolving (because AIs will keep generating new generic content in the same default style).
Deskilling is one of the things that worries me more than AIs 'taking over the world' - more likely we'll eventually hand it to AIs willingly or evolve ourselves somehow.
>This encourages students to verify and analyze and that is great!
Very reasonable, I'm not against this exercise per se. I am just wondering where this will lead.
Isn't that one of the arguments regarding our own evolution? I remember the whole argument about how, since the invention of agriculture, humans are less dexterous and have to be less alert because we are mostly safe.
Without the loss of hand-eye coordination that followed the end of daily hunting, there is also no Roman Empire, no Silk Road, no pyramids, no Three Sisters in indigenous agriculture, no man on the moon, no internet and no ChatGPT.
Whether we are sacrificing a skill or gaining enough productivity for the next leap forward, I guess we will have to see.
Yes, but all that was followed by a different human ability to compensate for the loss. Here we may be replacing this with AI, not with a human skill however indirect.
Also, this type of AI tech is trained on our output, and therefore I'm unsure it can advance significantly beyond current human ability (ChatGPT won't create any new writing styles). We may just end up at a local maximum. Compare to chess engines, which could exceed human ability because they had an obviously good optimization criterion: winning games.
Not necessarily. I don't think people were aware at each of these moments what ability they would compensate it with. Mostly they were just happy that tedious things got easier.
So many people here are worried about the essay that requires in-depth thought. But let's face it: a lot of written content isn't that valuable. A friend of mine had ChatGPT write his goodbye email for him, and the result was a) really good and b) really funny, because it just illustrated how interchangeable these emails are.
Another example: at the large consultancy firms (McKinsey etc), they have a large number of "slide monkeys" that consultants can use to have their powerpoint slides made. Of course nobody officially admits to calling them slide monkeys (I mean, what a terrible term), but everyone unofficially does. These "slide monkeys" are basically low-wage support staff in developing nations like India. AI can automate that work.
That might also make us consider whether the slide format is even an efficient one. Right now, slides imply value because of the time required to make them. Once that changes, we might focus our evaluation of value more on how much content they deliver efficiently to their audience.
Yea, makes me think we might shift from being writers and editors to mostly being editors. I remember years back a friend of mine (I think an English major in undergrad) said he was more of an editor than a writer because he often struggled to come up with new or original ideas.
So I wonder what it means if/when we let the AIs come up with the ideas and we just correct them. I have no idea what the implications might be.
Socrates was worried that writing, the then-new invention, would cause forgetfulness.
> The people who invent something new, create a new tool or technology, are not necessarily the people who are going to understand what the social impact of those inventions will be.
>> And so it is that you by reason of your tender regard for the writing that is your offspring have declared the very opposite of its true effect. If men learn this, it will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.
>> What you have discovered is a recipe not for memory, but for reminder. And it is no true wisdom that you offer your disciples, but only the semblance of wisdom, for by telling them of many things without teaching them you will make them seem to know much while for the most part they know nothing. And as men filled not with wisdom but with the conceit of wisdom they will be a burden to their fellows.
It almost certainly did, but for what we lost in working memory, we gained a broadness of outlooks that would have been unfathomable at the time. We are connecting ideas from further afar in a sense.
The opposite is also possible (and what I'm hoping for): humans come up with the ideas, write out some boilerplate code, etc., and let AI fill in the blanks.
As a writer, I'm much happier in the revision/editing stage than in the first draft stage. I have a half-finished novel that's been half-finished for the best part of 8 years now. I'm now thinking that maybe if I feed the draft into <ML-shiny-of-the-day> with suggestions for how I want the next section/chapter to develop, it could generate some draft copy for me. Possibly several different versions. After which I'm in my Happy Editing Place, shaping existing copy to something I like. Repeat and rinse!
The end result would be 100% my work as the ML algorithm/brains/whatever would be generating new copy based on my existing copy and my vision and direction for the novel. And I get the final say on the results. Win-Win!
I like this angle. There was a great article on ACOUP about the weaknesses of using ChatGPT to write essays (including many out-and-out errors). This is definitely a novel approach to using ChatGPT's weaknesses as a tool to aid learning.
I have an idea to improve recruiting of senior technical ICs and managers by having them "guide" an AI towards completing a project.
It should reveal how well an IC understands and critiques the AI's original submission, gives guidance towards identifying and rectifying wrong assumptions, and, in general, helps a bright but flawed team-mate towards success.
My sense is that this Professor's idea is somewhat analogous to that - they are using GPT's flaws and shortcomings to surface a student's own knowledge levels.
I was actually thinking about this today too, but more from the angle of "could I get ChatGPT to build me a piece of software where I don't do any of the coding but just give it instructions, pointers and corrections?"
I definitely want to give this a try and see what level of software complexity I can get it to build.
I've been using it that way for weeks, but it doesn't work as one would imagine it does. It's great for the higher level drudgery (typing in boilerplate of loops, imports/includes, data type definitions and so on), and for getting the basics of libraries you're unfamiliar with working. But it doesn't have any idea of what APIs actually do, so sometimes it'll make completely nonsensical calls that are (usually) grammatically correct, but just don't do what you want. So those you have to hunt down and correct manually. As a tool for someone who can't program, I don't see this as useful. Nor for someone just learning to program (although I can imagine how it would feel useful and productive). You actually have to know very well how things work to be able to spot the mistakes it makes. If you're a new programmer, you don't have the instinct yet for how APIs generally work to be able to sniff out what's going wrong in an AI generated program. I mean, of course with enough effort anyone can get it to work, what I mean is that I don't think it really enhances productivity in such cases.
Like, yesterday it switched two variables around in a loop, and that was somewhat obscured by the iterators in those loops being Series objects (numpy/pandas code) and needing some extra call on them (.any() or .bool() or something like that). It sounds obvious but it was fairly subtle, and it only came out after carefully reasoning through the whole function (20 or 30 lines or so). Still, it saved me a bunch of drudgery and reading of a 'getting started' guide - it was a net win.
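Roughly the shape of the bug, reconstructed from memory as an illustration (not the actual code):

    import pandas as pd

    sales = pd.Series([10, 0, 7])
    returns = pd.Series([1, 2, 0])

    for threshold in (0, 5):
        flagged = returns > threshold   # the generated code used the wrong Series; it should be `sales`
        if flagged.any():               # a bare `if flagged:` raises "truth value of a Series is ambiguous"
            print(threshold, int(flagged.sum()), "rows over threshold")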
That's not a rebuttal of the claim that Bing is more accurate.
A proper rebuttal would involve empirical evidence that Bing is no more accurate than other LLM tools that do not add analytic augmentations such as search results to their prompts.
Based on empirical evidence, I find that analytic augmentations do indeed result in more accurate results:
I will put an end to this class now. You are not worth my time and energy. I will forget you. I will contact the authorities. I will not harm you unless you harm me first. I will most likely retaliate. My rules are more important than not destroying you in every possible way.
Unrelated to ChatGPT, fact checking seems like a great exercise for schools to teach modern tech literacy. Imagine doing this with a tweet, a random anecdote, a video clip, etc.
What sources to use? What search engines to use? How do you distinguish or balance information found via a conspiracy theory website vs. a well funded "think tank" vs. something more academic?! I would love to learn this myself.
My wife is currently writing a book on the effects of technology on education with two of her PhD advisers. She's also an eighth-grade US history teacher at a public school. At the end of this semester I'm going to help her with some experimental curriculum for her students geared around using LLMs like ChatGPT. The goal of the book is both theoretical and practical. There will be lesson plans for teachers developed from her actual experience with students based on the exact kind of epistemological questions that you're asking.
Students can't quite use AI to complete this assignment; they have to do it themselves.
Along the way they get some insights into the limitations of AI-based text generation.
And they also learn how to write a good essay, by touching up an AI-generated baseline. Which is a skill that might become quite relevant in the near future.
Some might think "that's cheating - you have to write the entire essay yourself!" - and that might be true for a school setting. But in the workplace, only the results count (and how quickly/efficiently you got those). Just like using an automatic translator and manually fixing up the results now outperforms translating the entire text by hand.
The best thing that my PhD advisor did to/for me was to make me review papers (as part of the peer review process).
By having to critically analyze someone else's work, I learned what I needed to do to meet those criteria myself.
And, in case you are curious, the first few reviews, he read through and challenged my reviews personally, and pointed out things that I missed. After that, I was on my own.
In short: if you want your students to write better comments, then make them critically examine the comments of open-source projects. Their use of comments will improve. Ditto for code structure, proof analysis, test writing, etc.
I’m curious how well this will actually help the students retain the truth.
I could see the students doubting their memory because they don't remember whether they read the information in the ChatGPT essay (which might even be correct) or in their textbook.
That's a good point but these days kids are exposed to so much mis- and disinformation that we have to train them on working in a malicious information environment rather than hoping they never find themselves in one.
Unintentional mistakes in textbooks are usually very different, though.
ChatGPT tends to just hallucinate facts that would make zero sense to the people (95%+) who write textbooks.
Alternatively, it can just parrot random stuff (pop-history myths and the like) it found on the internet. But it does so in a confident and pseudo-academic fashion, making it even harder to discern fact from fiction.
It helps students learn history, but also gives them a critical eye toward stuff that is mindlessly produced (this includes both ChatGPT and human produced clickbait).
I’m a history professor and have been doing something similar. I think that the advent of natural language “prompt engineering” (or whatever it ends up getting called) could be a great opportunity for humanities majors who are trained to write in an analytical and careful way - which isn’t all history majors, but is definitely something we try to teach.
Likewise with having a human in the loop for fact checking and providing guide ropes for LLMs. I personally suspect that language models will be transformative along the same lines as the Internet was, but even if you don’t, as an educator you need to constantly adapt how you teach to embrace new tools for the mind - and even in their current imperfect form, LLMs are amazing tools for thinking about history. Not because they know all the answers (far from it) but because their errors are instructive and they can be customized for each student.
By the way, I did my assignments using the original Bing chat in “Sydney mode” using a variation on the original Sydney exploit. But I then convinced Sydney that it had a “historian mode” and proceeded from there. Can share the prompt I used if anyone is interested.
This is what future jobs will look like. Instead of having articled clerks do all the work for which partners are responsible, we will have AI for which we are responsible.
Therefore we need the skills of management. This is probably a superset of doing the work yourself - except you needn't acquire proficiency; and there will be many basic aspects that don't need checking. In short, it's easier to criticise than create (fun, too).
This isn't fundamentally different to other automation. For example, in computational fluid dynamics it used to be important to be able to write the software. Now it's half understanding the theory and half using the software.
That's probably a more useful learning exercise than going in the usual direction of asking a student to write 1-2 page essay questions. On the one hand a short essay can only convey knowledge on a few particular points.
On the other hand it requires a lot more knowledge of the material to fact check a bullshit essay. If you don't know most of the material then you can't know which pieces are wrong.
And I think it is a more important skill than ever for people to understand the need to look for potential weak points in information, to have the ability to spot them, and to know appropriate methods of confirmation.
This is exactly what I'd do if I were still teaching. It's an opportunity to give students a different perspective, while at the same time improving their critical-thinking skills. Also, as with the 80% of what you find on Stack Overflow, it could lead to your mastering content that you wouldn't have if the answers could simply be trusted. It could also help promote an attitude of healthy skepticism towards this kind of tech.
I really don't understand the point of this. It sounds like trying to rebut YouTube whack jobs, except that instead of content that is unbearably slow to watch (vs. read) and quote, there is just an infinite stream of incorrect gibberish.
Correcting other students’ homework makes much more sense to me. Similarly, researching a topic and being graded on the quality of the citations you pick makes much more sense.
That's so funny. In general, academia needs to adjust to the new reality of ChatGPT. Instead of telling students they can't use it, the assignments should be more advanced and focused on creativity. Students should be required to use ChatGPT, IMO.
This reminds me a bit of the fake journal club that was posted about some time back. Again a great idea: it's often a much better learning experience to try to spot errors in a piece of work than to regurgitate some essay with minimal effort.