AI Checkers Forcing Kids to Write Like a Robot to Avoid Being Called a Robot (techdirt.com)
88 points by speckx 15 days ago | 96 comments



As a teacher, my answer to publicly available AI, as far as grading goes, has been to only count work I have seen students do myself (or that staff I trust implicitly have seen) towards passing/failing/diplomas.

This means graded longer tasks such as writing or programming, which in the past were done outside of class and then handed in, are instead done in person (like most exams are, traditionally). To acknowledge the longer process, the time allotted for such a test is much longer (up to 3 hours instead of 1), and I expect less of work created in 3 hours than of work created over 6 weeks.

Students still have the opportunity to hand in work that was done outside of such an in-person test, which I will grade, but that grade isn't on the record; it serves only as an indication.

I wish I could trust students, but plagiarism has become so easy now (added to a zeitgeist of personal gain above all) that a critical mass fails to resist.

I think diplomas are very useful. They get their usefulness from the trustworthiness of the party handing out the diploma. Only recording grades for work that has been done in an environment controlled by that party has downsides, but in the current and foreseeable situation I don't see a better way of maintaining a base level of trustworthiness.

I love teaching without grades or diplomas involved so much more. Then it's all about doing what's best for the students.


It's also worth pointing out that this approach is valuable in protecting against some very traditional risks too: content that was ghostwritten or plagiarized from another human. LLMs simply made it easier and cheaper to cheat in the same set of circumstances.

> but that grade isn't on the record; it serves only as an indication.

I think that kind of output is often under-appreciated, especially by students. Even as pure advisory feedback, a grade on a subject or section helps people recognize where to allocate their limited time and effort.


I can see the value in this approach, but I also know for myself this would have been horrible in school.

I was always a horrible test taker (the pressure being a large part of that, but not all) and I often needed the time outside of class for any papers I needed to write. No matter how much I tried, it just wasn't happening in class.

I acknowledge the problem but I would be worried how this could negatively impact a lot of people.


You're right that some students fail only because of the way they are graded. Although with the right preparation this number can be quite low, it's never zero. It's heartbreaking to see some fall between the cracks, but I don't think it's feasible to get to a system that works for every single person.


Right. At my school, the accessibility option for tests is to give us more time. A three hour test would be made "accessible" by extending it to six hours or even more.

I was really hoping AI would make our world more accessible, not less.

(eta) Additionally, it would take more instructor or docent time, because no one can be trusted to actually learn the material we're paying tens or even hundreds of thousands of dollars for.

It ain't the future dystopia I'm afraid of, it's the one we're creating this week.


Thanks for bringing this up. I nearly failed out of college before getting accommodations for anxiety. I was a physics major and was solid with math, but sitting down to take a test I’d forget how the most basic properties of multiplication and exponentiation worked. Once I started being able to do untimed take home tests (due to accommodations) I started doing well. With the pressure off I was also able to finish the tests much more quickly than before.


The question that came to my mind was "how do we give children the opportunity to think through problems which cannot be processed in two hours?"

I don't know the answer.


It's hard, but you can create tests that you can only pass if you've processed a problem for weeks beforehand.


I have no idea what it is like now but a significant part of the testing environment at Georgia Tech in the early '80s in the highly competitive engineering fields was "impossible" exams. These involved a timed in-class exam of an hour (sometimes two) where it was not possible to solve the major points problems analytically within the time limit, but a variety of approximation techniques were available. The highest scores came from understanding how to deploy the fastest approximation techniques.

Unlike many of my classmates I was really good at the mathematical part of the exam questions and could straightforwardly solve them analytically. However, this took too much time! I bombed several important exams learning that "good enough" is an important attitude to have when solving engineering problems. I hated this intensely at the time. (Never give that Stalinist institution a dime till I die.) BUT. There's a reason I went into numerical analysis in graduate school. Which I loved then, turned into a transient career that I enjoyed immensely, and remember fondly.

I doubt this approach is feasible for non-engineering disciplines.

[edit: redundancy removal]


Teach critical thinking before the college level - ideally well before. Much of the issue with over-relying on these tools boils down to a lack of critical thought: analyzing the output of these things critically, once you reach a certain competence in writing, will lead you to quickly realize you shouldn't rely on it heavily, or maybe not even use it at all.


Give yourself a few more hours to think about it. These things can’t be rushed.


> my answer to publicly available AI, as far as grading goes, has been to only count work I have seen students do myself (or that staff I trust implicitly have seen)

Of all of the tactics I've heard teachers taking, this is the one that seems both the most effective and most fair. It's a terrible thing that it's necessary because it's a less effective use of classroom time, but it's hard to think of a better approach.


My prediction has always been that schools will shift to proctored exams, and job placement rates will eventually be the only metric that matters for students. I think it's an advantage for the schools to ignore AI for as long as they can, though, since higher pass rates mean more students paying the full cost of tuition, and better graduation statistics.


I think the problem is we assume using another KPI will obviously solve the problem. Using job placement as a metric just encourages nepotism and self-dealing, though - universities have been caught inflating, say, the average salary for an outgoing degree by including an NBA player who earned it before making millions shooting hoops.


You're basically describing the saga of code bootcamps before they all tanked in a blaze of VC


It's really difficult to plagiarize if you write the paper or take the exam in class using paper and pen-or-pencil. It's also difficult to plagiarize if the teacher/professor/TA actually wanders around the room during the test (without stopping behind a student and raising the student's blood pressure in the process.)

Of course, that involves more work on your part as teacher/professor -- you actually have to teach and test and act as editor -- on an ongoing basis. At university, that's part of what TAs are (or should be) expected to do. As a teacher, it's your responsibility.

So, if you happen to be a CS 101 instructor, ask for the evolutionary copies of a program. If you happen to teach poetry and cover Herrick, ask for an original poem in the style of "Upon Julia's Clothes" and see what each student takes from the same poem.

Teaching without ensuring that learning happens is a waste of time and a disservice to your profession.


It seems to me that AI detection is best done in the editor itself. If you track a student's mouse and keyboard there should be a clear distinction between someone plagiarizing (even if they're re-typing word for word) vs writing genuine work - which would involve lots of backtracking and editing.
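
For illustration, here's a minimal sketch (Python) of the kind of heuristic that implies, assuming a hypothetical editor event log of inserts, deletes, and pastes; the event format, feature names, and thresholds are all made up, not any real product's API:

    from dataclasses import dataclass

    @dataclass
    class EditEvent:
        t: float      # seconds since session start
        kind: str     # "insert", "delete", or "paste"
        chars: int    # number of characters affected

    def drafting_signals(events):
        inserted = sum(e.chars for e in events if e.kind == "insert")
        deleted = sum(e.chars for e in events if e.kind == "delete")
        pasted = sum(e.chars for e in events if e.kind == "paste")
        minutes = max((e.t for e in events), default=0.0) / 60
        return {
            # Genuine drafting tends to involve lots of deletion and rewriting.
            "revision_ratio": deleted / inserted if inserted else 0.0,
            # Large pastes relative to typed text suggest an external source.
            "paste_ratio": pasted / (inserted + pasted) if (inserted + pasted) else 0.0,
            # Word-for-word transcription tends to be an unusually steady stream.
            "chars_per_minute": inserted / minutes if minutes else 0.0,
        }

    def looks_transcribed(events):
        s = drafting_signals(events)
        # Made-up thresholds: almost no revision plus a high, steady input rate.
        return s["revision_ratio"] < 0.05 and s["chars_per_minute"] > 200

Even so, as noted elsewhere in the thread, someone can write a fake typer app that replays generated text with human-looking timing, so this only raises the bar.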


Are you seeing a shift in class duration to accomplish this? For my kids, I don't see how their classes can squeeze both instruction time + work time in the allocated slots. Is instruction time being sacrificed or "replacing" homework (watching videos/reading text at home)?


A friend of mine seems to have switched to an "exercise/exam in school, learn at home" methodology, where the next lesson is handed out in sheet format at the end of the hour, and "school" is mostly exercises he uses to see how much each student understands. He seems to be quite happy with this, and told me good students are quite happy with it (lessons are learned at their own pace), and poorer ones now have more time with him helping them during the exercises. The only issue is that he now has trouble not talking to/helping students during exams and now really hates those (he always disliked them, as well as grading, but now it seems worse).

He also heavily encourages students to learn with each other during "perm" hours (basically, in high school, hours where students have to stay in school but have nothing to do and can work on their own, go to the library, or play/chill/whatever), but that was before, and because he fondly remembers the days when all 26 of us, the full class, teamed up against homework (yeah, I had great classmates).


In principle nothing about the structure of the course has to change. You can keep an existing course with instruction during class and working on a project outside of class. Even handing in the project and grading could stay the same, although the grade for the project wouldn't be recorded and would serve only as an indication. You then add a longer test at the very end, for a grade.

How I set up my course is different from traditional education. Most instruction happens through self-study (including videos and software). Face-to-face time is for questions. All elements (material, staff, and students) have to be tuned for this setup to work, though, and it is hard to pivot to in existing setups because education (in the Netherlands at least) is very entrenched.

This self-study centered learning approach, like the in-person tests instead of projects, comes from necessity: great (and, I expect, ever-increasing) teacher shortages.


I think you're absolutely correct.

I worry that the uncomfortable truth AI is revealing is that essay writing is a less essential skill than we'd like to believe. We can say things like "personal gain above all" and the pursuit of a diploma being our downfall, but I can't help but wonder if it would be better if we just got rid of essay writing altogether, or waited until the student knows what kind of writing they want to do.

Of course you prefer teaching without grades; the only people in the room are people who want to be there.


Really it's not revealing anything at all about the value of essay-writing as a skill. It's just revealing that people will cheat in ways that are hard to directly prove, and grading writing is really hard when people have access to an infinite bullshit generator.


I don't know about this. Take away the moral gatekeeping of calling it cheating and look only at outcomes. If students use AI instead of doing it themselves, are they worse off in a material way that only essay writing could provide? If they aren't, couldn't we call essay writing busy work at worst and elective at best?


Essay writing shows that you actually know the domain material, that you are capable of holding onto a thought for more than 3 seconds and that you can communicate thoughts clearly to other people.

What’s an alternative way to show those things?


I never wrote a single essay for a math class, so I think there are alternatives.

But I think you're missing my point. If a student can go through their education never writing an essay the way everyone thinks they're meant to do it and end up doing just fine in life, maybe the merits of essay writing aren't all they're cracked up to be? Save the skill for specializations where the students are more motivated to learn it, like metalworking or pottery.


I'd argue that if you were ever asked to show your work, you were writing the equivalent of a mathematical essay. Maybe you never had to learn proofs, but I did.

“I didn’t have to do that and I turned out fine” isn’t a very rigorous pedagogy.


Sure, but now we're stretching the definition of "essay" past what's useful for the topic at hand. This is a thread about AI checkers for essays, not math proofs.

And no, it isn't rigorous, but it's a pretty good hint that you've got correlation and not causation. Perhaps it's worth entertaining the possibility there's a better way.


We started this subthread with my asking for a suggestion at an alternative. I’m open to hearing some!

IMO the most interesting reveal will be when we have humanoid teacher robots babysitting the children that will obviously never have jobs.


We don't write just to communicate things we know, but to learn what we think. Especially in a school setting.

https://herbertlui.net/dont-think-to-write-write-to-think/

I remember this every time I solve a problem while explaining it to someone in an email.


I'd argue that was not how writing was used in schools when I was growing up.


I kind of agree with this and am willing to take it even further: Why should I even study a subject that I can just ask a computer to explain to me when I need it? AI isn't quite there yet in terms of reliability, but there may be a point where it's as reliable as the calculator app, at which point, does it even make sense for me to study a subject just to get to the mastery level that is already matched by an AI system?

If I need to know when Abraham Lincoln was born or when the Berlin Wall fell, I could either 1. memorize it in high school history class to demonstrate some kind of "knowledge" of history, or 2. just look these things up or ask an AI when I need them. If the bar for "mastery" is at the level of what an AI can output, is it really mastery?


> Why should I even study a subject that I can just ask a computer to explain to me when I need it?

Because studying a thing is a world apart from having it explained. When you study a thing to gain understanding, your understanding is not only deeper but you are also learning and practicing essential skills that aren't directly related to the topic at hand.

If you just have a thing explained to you, you miss out on most of the learning benefit, and the understanding you end up with is shallow.


This is, sadly, an idealized notion of education that just doesn't match the reality of a general ed classroom. Students don't study to gain understanding in a majority of their classes; they study to pass. True, not all students all the time, but in the world you just described no amount of extrinsic motivation can force a student to deeper understanding, so why are we even talking about AI checkers?

Unless you're telling me you never did that in any of your classes growing up, but I'm going to be highly dubious of such a claim.


> Unless you're telling me you never did that in any of your classes growing up

I did extremely poorly in school, actually. It wasn't an environment that I could function in at all. But I got a great education outside of school.

I'm really talking about what's needed in order to get a good education rather than anything school-specific. Technically, school itself isn't needed in order to get a great education. But you do want to get educated, whether school is a tool you employ to that end or not.

"If you want to get laid, go to college. If you want an education, go to the library." -- Frank Zappa

But, outside of reading, writing, and arithmetic, the thing I did learn in school that was the most valuable was how to learn. So, that's my bias. The most important thing you learn in school is how to learn, and much of what teachers are doing in the classroom is trying to teach that.

My fundamental point is that what we need in order to learn is not just getting answers to questions. That approach alone doesn't get you very far.


I don't think we're too far at odds. I think the difference is that I'm talking about the classroom...especially general education, where AI essays are the problem. To your point, not every student chooses to spend time at the library, and you can't make them.

When I was younger, I was a bit of an idealist about education reform. As I grew older, I began seeing the failings of education as a reflection of human nature. Now, I just don't think we should be wasting students' time trying to make them do something that, for whatever reason, they cannot or will not do the way we want them to.


> If you just have a thing explained to you, you miss out on most of the learning benefit, and the understanding you end up with is shallow.

Sorry, but I don't get this. Isn't this exactly what the teachers/lecturers and books do - explain things?

Sure, you have to practice similar things to test yourself if you got everything right. And, of course, it's different for manual skills (e.g. knowing how to make food is kind of different from actually making food).

But a language model trained on education materials is no different from a book with a very fancy index (save for LLM-specific issues, such as hallucinations), so I fail to see the issue with the ability to get answers to specific questions. As long as the answers are accurate, of course.

And - yeah - figuring out if the answer is accurate requires knowledge.


> Isn't this exactly what the teachers/lecturers and books do - explain things?

In part, sure, but not solely. I wasn't saying that getting an explanation is a bad thing, I was saying that only getting an explanation doesn't advance your learning much.

> And, of course, it's different for manual skills

I don't think that's different. It's the same for intellectual skills as for manual in this regard.

> I fail to see the issue in ability to get answers for specific questions. As long as the answers are accurate, of course.

There's nothing wrong with getting answers to questions. But that's not the process that leads to learning anything other than the specific answers to those specific questions.

Getting an education is much, much more than that. What you are (or should be) learning goes far beyond whatever the subject of the class is. You're also learning how to learn, how to organize your thoughts, how to research, and how the topic works at a deep enough level that you can infer answers on it even when you've not been told what those answers are.

If what you're learning in class is just a compendium of facts that you can look up, you're missing out on the most valuable aspects of education.


Why lift weights when I could just use a forklift?

At some point someone actually has to do some thinking. It's hard to train your thinking if you just offload every simple task throughout your entire education.


So you're saying you've never used StackOverflow in your life?

I find your analogy works against your point, because manual labor does use a forklift and other heavy machinery whenever possible. It's better for human health (and the backs of blue collar workers) that way. Now the only people lifting weights in gyms are those who choose to be there for their health and not because they're forced to.


If you’re, say, not clear on whether Abraham Lincoln was president when the Berlin Wall fell, you might have trouble asking the AI a good question to begin with.


This is probably the best counterpoint in this whole thread


You need a base level of knowledge and skill in your head in order to do something of a higher level in your head.


This line of thinking will leave you like some of the high school kids my wife works with, who can't solve 19 + -1 without a calculator. If you don't integrate anything into your understanding, you will understand nothing.


> As a teacher ... tasks such as writing or programming

I hadn't realized programming was being taught pre-university. From a quick look online it seems high-schoolers may be learning Python. That's pretty cool. Wonder how widespread this is, and how early children are taught.


My kid is in a programming class, so far it has been... kind of a joke. Extremely controlled environments and dragging "code" around in the form of colored blocks.

I wish they taught JS/HTML web dev stuff. Even if someone only gets a year in the knowledge will stay relevant because they are using the internet daily. Just basic understanding of cookies, http, IP addresses, etc. is something kids should know.


Scratch is great. I used it when teaching and it allows kids to focus on the goal rather than deal with concepts like syntax errors and compilers. Sure, they can learn more later, but at the level of "computers follow instructions, you know?" it's a very appropriate tool.


oh yeah I wasn't bashing it, but my kid is a sophomore in high school. Thought there would be something more advanced at that age.


Eh, they might hit LabVIEW in university. If they're really unlucky they'll end up in an industry that uses LabVIEW in practice, and Scratch is really good prep for that pain.


So you've just completely dismissed Scratch/Blockly which has an illustrious and fairly respectable history. I'm not saying it's the be all and end all, but the way you tossed it aside with scare quotes felt a little... flippant?


Not sure your location or grade of your kid, but mine started on Scratch in school as well and in 6th grade they had started using Python.


In the Netherlands, programming in secondary education (ages 12-18) is not uncommon but also not widespread. If it's offered, it's not great. Like programming taught at universities, it's mostly people with a knack for it that actually improve.

For my MSc thesis, which I finished 6 years ago, I researched how to teach programming to people without natural talent. I found a method with great results, and have yet to see anyone do something remotely similar. I'm not teasing a sale; all my stuff will be fully free. I've been working on it full-time for the last three years, and I expect to have a 1.0 in two years, so let's say four years.


My high school had several years worth of programming courses available, and that was in the 2000s.

Nowadays American high schools are becoming much more focused on teaching technical skills again (thankfully). There's a renewed focus on teaching things like CNC machine programming and robotics alongside traditional computer science courses.


Hey, there is a company, examind.io, that has a product which lets you assign writing assignments for students to do on their own time in a UI that runs in the browser. It tracks every keystroke and interaction by the student and analyzes them to determine whether it's the student writing or whether they are copy/pasting or transcribing from AI. It also has an option to give students access to an AI research assistant right in the software, so you can inspect how the student is using AI to help with their work.

I think it might give you the assurance you need while giving your students the best experience and opportunity to do honest/high-integrity work with the latest (AI) tools.

The benefit of forcing students to write their graded work in person is you know it's coming from them, but the downside is it's a very artificial test, not representative of what real-world work will be like. I think examind can give you the best of both.


I can use AI to generate text and then type it manually into that tool.


It evaluates the process the student used to write the essay, not only the final result, and makes it transparent to the professor. So the professor will see that the student typed it out word for word. This is in contrast to an authentic essay-writing process, which involves a lot of editing.


There's also going to be that one kid who knows what they're doing and writes a fake typer app for others. (I was that kind of kid...)


I don't feel assured at all. I don't want to bet on any horse in the volatile arms race between AI and anti-AI.


Re-reading my original reply and your response, I think we had a misunderstanding. I never intended to make you feel assured with my post. I was trying to communicate that the features the product provides could help you feel assured that the student actually completed the work themselves, and if they used AI to help, you can see exactly how. (And that they appropriately paraphrased, etc.)

The whole point of the product is to give professors more flexibility in the kind of assignments they use (and even allowing students to use LLMs in a controlled way and be evaluated in how they use them) while ensuring academic integrity.

For example allowing students to use LLMs as research assistants and even to help them consider and structure ideas, while ensuring the student paraphrases everything sufficiently to prove they actually understand it and can put it in their own words.

To be clear, I understand and respect your desire to protect the integrity of the diplomas and credentials you are giving out (especially in contrast to the many who let cheating run rampant), but at some point you may want to be able to accurately evaluate how students use industry-standard tools (like when calculators were first introduced).

So sure, be skeptical, but maybe be careful about throwing the baby out with the bathwater.


I appreciate your taking the time to elaborate. I see you've responded to all comments on your original comment and remained civil despite people's negative sentiment. There are too few places on the internet left where such civility remains, and I thank you for contributing.

I would not feel assured of students actually completing the work themselves with _only_ something like examind.io as an extra measure. For it to be used in their own time, they would use it on their own hardware. As user viraptor pointed out, whenever there's anti-cheat software, someone is going to create targeted anti-anti-cheat software. That's what I meant by arms race.

For me to feel assured of students not fooling the anti-cheat software, for their input device they would have to use hardware controlled by me. It's not feasible to let them use hardware controlled by me in their own time.

I can see how a tool such as examind.io might help in accurately evaluating how students use other tools on a computer. For that they could use hardware controlled by me, during a test.


Their approach is to bring maximum transparency into the process the student used to write the essay, rather than the final result.

I don't really see how it's about an AI vs anti-ai arms race.

It's not my company and I'm not responsible for selling it so I'm probably doing a poor job...

But if you want to evaluate your students' writing and ensure integrity, and also provide them with longer windows to work on bigger writing assignments (and even allow them to use LLMs to help them write in accordance with your rules), then wouldn't an application like this help you?

I don't understand why I'm getting such a negative reaction from everyone for sharing this... I genuinely thought I was helping by pointing out a solution to your problem...


Assuming you're the founder, this is the type of BS comment that makes the rest of us hate AI founders.

It's vacuous, makes vague claims that don't leave room for proof/disproof, and doesn't offer any reason that it's any better than a prompt that asks GPT4o "was this generated by AI y/n"


I am not the founder. Also, I pointed out they track user actions and keystrokes and analyze them, which is clearly distinct from just pasting the student's work into an LLM and asking if it was generated by AI. I'll go further and say that they can tell if the student left the browser window and for how long. Also, natural essay-writing patterns are different from transcribing something from another source. I'm pretty sure they have more methods, but I don't remember them all.

Your comment is full of unnecessary animosity and resentment and I think it is inappropriate.

I know the founders and I also know they have very happy customers.

Just because you have experienced some vaporware startups or whatever doesn't mean every company is one...

I shared it with the OP because it sounds like they care about the problem the product solves.

Lastly, I want to remind you that the very first line in the community guidelines is "Be kind. Don't be snarky". I find your comment both unkind and snarky.

https://news.ycombinator.com/newsguidelines.html


I submitted several scientific/research papers my wife wrote or co-authored in the late 90s early 2000s and they were all flagged as AI written by multiple tools. It was good for a laugh.


That's just more evidence that the Kaylonian robots have already infiltrated our social and familial ranks. /s


If you put Shakespeare or the Constitution through one of those, they will say 90%+ AI. Any student who gets dinged by one of those things just needs to show this result until the administration caves and bans the stupid checkers.

They may have to go to the local press or a board meeting to get that result though.


Do you have a source on Shakespeare being flagged? I just tested first few lines of Hamlet with a random AI detector[1] and got a result of 100% human.

I saw a post saying that the opposite happens, i.e. AI output, when prompted to write like Shakespeare, can beat AI detectors[2].

[1]: https://quillbot.com/ai-content-detector and https://www.zerogpt.com

[2]: https://old.reddit.com/r/freelanceWriters/comments/1amngnv/i...


I don't have a direct link. It came from a reddit post by a professor who was trying to convince his department to stop using them by putting famous works through to show how bad it is.

But my guess is that it works both ways depending on the cheating detector and the prompt used.


The most effective way to get them to stop is to put the professor's work through...


So that’s how Shakespeare did it…


Tests should not be a tool for authenticating learning. They were NEVER good tools for this and now it's clearer why. Tests should be used for a teacher to help a student figure out where they need to focus.

It should be like getting a blood test from a doctor: unless they are testing for drugs, why would you try to cheat with someone else's blood?

We don't need to figure out how to sidestep the fundamental flaws of our institutional education; we need to find and fix the root cause.

Being "authenticated" as "knowing stuff" (your degree and GPA) doesn't translate into companies being able to tell if you can do a job. It didn't before AI and it doesn't now.

Everyone should stop and think about how dumb it is that children and adults are trying to cheat at learning. You can't cheat learning. So why do people try and succeed? Because we aren't actually asking for learning; we are asking for something else.


This is an expected consequence of using ML to detect AI. LLMs are more well-read than the average person, so they'll have a bigger vocabulary than the average person, so the ML model will learn to associate a bigger vocabulary with a higher probability of being AI. Which unfairly penalises well-read humans, who also have a bigger vocabulary than average.
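
As a toy illustration of that failure mode (purely hypothetical, not how any particular detector is built): if a detector puts weight on a vocabulary-richness feature such as type-token ratio, unusually rich vocabulary pushes the score up regardless of whether the author is a model or a well-read human.

    import math
    import re

    # Toy "detector" that scores text on vocabulary diversity alone.
    # Real detectors use many features; this only shows why any weight on
    # "rich vocabulary" penalizes well-read humans the same way it flags LLMs.
    def type_token_ratio(text):
        words = re.findall(r"[a-z']+", text.lower())
        return len(set(words)) / len(words) if words else 0.0

    def toy_ai_score(text, weight=6.0, bias=-3.0):
        # Logistic score with made-up weights: higher diversity -> "more AI-like".
        z = weight * type_token_ratio(text) + bias
        return 1 / (1 + math.exp(-z))

    plain = "the cat sat on the mat and the cat sat on the mat again"
    florid = "the feline reposed languidly upon the vestibule rug, surveying its domain"
    print(toy_ai_score(plain), toy_ai_score(florid))  # the richer text scores higher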

Personally I think the whole endeavour is a waste of energy. Unless there's a massive civilisational collapse, humans will always have LLMs as writing aides, so trying to teach them to write without one is teaching them a skill they'll never need to use.

If you want to teach them to reason on the spot, then go back to requiring them to orally put forth and defend an idea.


> If you want to teach them to reason on the spot, then go back to requiring them to orally put forth and defend an idea.

I have been thinking for the last couple years that we may end up in that spot. It is more labor intensive, and perhaps a little more susceptible to bias, but pretty hard to fake. At the end of the term, each student has half an hour with the professor (or a TA) where they discuss the content of the class. It should be quickly apparent who knows the material well enough and who skipped too many lectures.


Oh god, this really does capture my problem. My university demands 40-page reports that are never read by anyone, so my seniors used ChatGPT to generate the needed drivel, so now we have the AI detectors, and are told that the % of AI content as flagged by a tool (turnitin) must be below 10%.

Unfortunately, the way I write (like any well-educated Indian) is rather verbose, formal, and rather florid in its manner, with no fewer than 3 conjunctives per sentence (when I don't want the reader to understand what I just wrote). This style is, for good or bad, understood by turnitin as being AI-generated.

So, I have had to unfortunately simplify and write in a more stilted manner, just to be able to submit my assignment.


> when I don't want the reader to understand what I just wrote

Unless you are writing poetry or art (where occasionally this makes sense, to avoid censorship for instance, or to make a point), that's just bad writing. Should you get penalized for plagiarism? No. Should you get bad grades for writing like this? Probably yes.


It only matters when someone actually reads it, no?

If a tree falls and nobody heard it, did it actually fall?

Also relevant, this is for my final year engineering project, not a writing course.

The report that I write is not even read by those who grade my project; rather, the page count is checked, and a small amount of sanity-check reading is done to make sure I have not just filled the pages with cinema songs. My grade is decided by my relationship with the panel, nothing else: not the quality of the work done, not the idea, not the deliverables shown.

I am absolutely sick and tired of this.

I just wanted to do a good job, and write good code, but no, that does not move the needle one bit with our panel of reviewers.

(Pardon my rant, but I really did want to do a great job with my project, but my university did not want to reward it)

At any rate, I can get into a whole rant about incompetent lecturers at many Indian universities, but that is way off topic.


Train ChatGPT to avoid being flagged by turnitin? (Then write a paper on it in a way designed to trip said AI detector)


Been there, tried that.

It only sorta worked, but failed basic "formalism" tests, so it could not be directly used.

The writing style and vocabulary expected by Indian universities is a style of English that has not been used in England for 80 or so years. It is uniquely frozen in time, florid, and altogether horrid. (I had to make it rhyme, sorry)


There will eventually be a great lawsuit from people who have been disenfranchised by these BS 'AI checkers'. Someone will sit down in court and write essays from scratch on a random topic, which will be flagged as AI-generated, leading to general mockery and the diminution of the stock toward zero.


Writing teachers have a lot to learn from Math teachers, who have had to fight against the crutch of calculators for decades. The future here is obvious: more in-class tests to evaluate writing ability instead of take-home work is what will happen. For take-home work, require "showing your work" and the research that went into the writing. Show the steps.

Yes, you could still use AI for any take home work, but I think what forces students to AI isn't the lack of will to do the work. It's the "I have a 5 page paper due tomorrow and I haven't started" cliff.

I mean, writing education is messed up in so many ways, but if I'm being realistic this is the path forward.


Seriously, the answer is to have the students turn essays in using stages. First stage is to write on paper by hand, even if there’s not a lot of research or facts yet. Lay out the argument first using what they remember from the initial research, then look it over with the kid and have them explain it a bit. Then, they do the next stage which is to do closer reading and research, and fill in the gaps. Finally, they polish the essay and turn in the final.

It’s hard to use AI to do this, and if a student did use AI effectively like this, they’ll still learn a lot about the topics, because they’re forced to think about it.

Imagine, rather, that you have these kids actively use AI to help them write an essay, but make them provide the transcripts so you can see how they think and use the tool. Perhaps even sit with them and use the AI tools and show how they can be used as a guide, as long as you don't take the responses as facts like in an encyclopedia, but rather use AI like a parent who is helping but doesn't quite remember the details. You'd not trust it implicitly but just use it as a guide. Perhaps someone should make an AI that isn't so sure of itself, and suggests ways to research things yourself but points you towards the closest cardinal direction to your question?


This scares me as a serial procrastinator.


Me too, but as an adult now I see the value in teaching or even forcing kids to not procrastinate :) I still do it, but I know in my heart that I'll hate myself for it later!


How can we claim any high ground when we are handing off the need to check for plagiarism to a tool that is, in effect, the same one the student uses to create plagiarized works?

It doesn't even work; it's non-deterministic, and most importantly, if the teacher is familiar with the student's writing it should be pretty obvious whether this is a fake or not, based on something the student can be observed doing.


There is a certain style to LLM-created writing. It tends to float in the air, lightly tethered to real world examples. Unfortunately, that's also what passes for academic style in some quarters. I mentioned that a few days ago here.[1]

LLMs have grammatically correct blithering nailed. One amusing result is that Derrida and his followers now sound like an LLM with the temperature medium high.

Maybe we should be teaching kids to not write like that.

Mandatory XKCD.[2]

[1] https://news.ycombinator.com/item?id=41419645

[2] https://xkcd.com/451/


Interesting piece.

I really wonder why these tools even show detection percentages when they’re meaninglessly low. A 17% result from a classifier model is totally meaningless. You could show a well-trained animal classifier the first Google Images result for “cat” and it would still say there’s a 17% chance it’s a cow. Showing low-likelihood predictions will do nothing but freak people out and confuse them.


Interesting that my childhood evaluation systems in India are resistant to this new development. 100% scores from in-person exams, no loss of dynamic range through grading (i.e. 97% score > 96% score), zero score to homework. I imagine that the primary constraint in the US is labour cost since you need what you call a proctor and what we call an invigilator, and perhaps the substantial student power (a student and their parents can overrule teachers).

I like the relative mobility in the Indian system since anyone can swot to the test, but clearly the US system is much more enjoyable for a wider variety of students and they have a wider range of skills.

I think I'd like for my children to have a composite of the two. I dislike neverending homework and love tests, because spiky activity allows for lots of self-directed exploration and the tests allow for evaluation. Though it's likely that this is because I, personally, am quite capable with tests: I operate very well under time pressure and have an exceptional memory.


Is the relative mobility a good thing though? (Not strictly on topic, so apologies)

I would argue that for the exams that actually matter (the JEE, say), PCM scores are very poor indicators of success in the chosen branch, and are an arbitrary selection criterion, not very removed from a lottery.

A better system may include psychometric pairing for branch of engineering opted?

My argument is essentially that if I like (say) Civil engineering more, then I would have demonstrated some additional knowledge in parts of the desired branch, so testing that should in theory give better civil engineering graduates, instead of people who joined it for the tag of the institute and not for the branch?

But again, I am a _bit_ salty about the 12th standard exam treadmill in india.


Yes, it's practically a lottery with a skill floor. And I think that's a pretty good way of doing things. The purpose of these technical institutes is to reliably train people into a bunch of skills, not to get people who have the skills already and stamp them. The purpose of an entrance exam is to have a method that:

- demonstrates a floor of intelligence and skill

- is demonstrably fair

- has high dynamic range

- can be trained against by the conscientious

I think the JEE behaves well against these criteria. In addition, affirmative action is explicitly handled by quotas. This gives a predictable system with known constraints. When participating, you know your targets. Personally, if I know my constraints ahead of time, and I can optimize to them by studying, and the studying is directionally useful to education, that is sufficient to me. If I fail, it's on me and I can accept that.


Not saying it is good or bad, but is someone who loves a subject more deserving of a seat in that subject than someone who is looking to min-max their salary package at the end?

I have batch mates who hate CS, but chose it simply for the placements, and wholeheartedly believe that they are a wasted seat, because they are not interested in learning the subject.


It's too hard to pick the deserving. It's probably better at the societal level to pick something that is clear and easy for candidates to follow and optimize against. And I think Indian selection systems are far better at that than the US. I wouldn't worry too much if I were you. If you pick EE/Mech and come to the US for a CS Master's you'll be fine. I can't speak for India itself.


OK so back then using SAT words was good, and now using SAT words is bad. Got it.


Ubicomp's colonial impulse, back for round 2.


AI checkers need to be sued out of existence.


Sued for what exactly? Like what law or regulation are they breaking?


False advertising and misrepresentation.



