Hacker News new | past | comments | ask | show | jobs | submit login

It is trivially easy to spot AI writing if you are familiar with it, but if it requires failing most of the class for turning in LLM generated material, I think we are going to find that abolishing graded homework is the only tenable solution.

The student's job is not to do everything the teacher says, it is to get through schooling somewhat intact and ready for their future. The sad fact is that many things we were forced to do in school were not helpful at all, and only existed because the teachers thought it was, or for no real reason at all.

Pretending that pedagogy has established and verified methodology that will result in a completely developed student, if only the student did the work as prescribed, is quite silly.

Teaching evolves with technology like every other part of society and it may come out worse or it may come out better, but I don't want to go back fountain pens and slide rules and I think in 20 years this generation won't look back on their education thinking they got a worse one than we did because they could cheat easier.




As a (senior) lecturer in a university, I’m with you on most of what you wrote. The truth is that every teacher must immediately think: if any of their assignments or examinations involve something that could potentially be GPT-generated, it will be GPT-generated. It might be easy to spot such a thing, but you’ll be spending hours writing feedback while sifting through the rivers of meaningless artificially-generated text your students will submit.

Personally what I’m doing is to push the weight back at the students. Every submission now requires a 5-minute presentation with an argumentation/defense against me as an opponent. Anyway it would take me around 10-15 min to correct their submission, so we’re just doing it together now.


A genuine question, have you evaluated AI for marking written work?

I'm not an educator, but it seems to me like gippity would be better at analyzing a students paper than writing it in the first place.

Your prompt could provide the AI the marking criteria, or the rubric, and have it summarize how well the paper hits the important points.


Never say never, but I do not plan on doing this. This sounds quite surreal: a loop where the students pretend to learn and I pretend to teach? I would… hm… I’ve never heard of such… I mean, this is definitely not how it is in reality… right…

(Jokes aside, I have an unhealthy, unstoppable need to feel proud of my work, so no I won’t do that. For now…)


I would have thought that the teaching comes before the test, and that the test is really just a way to measure how well the student soaked up the knowledge.

You could take pride in a well crafted technology that could mark an assignment and provide feedback in far more detail that you yourself could ever provide given time constraints.

I asked my partner about it last night, she teaches at ANU and she made some joke about how variable the quality of tutor marking is. At least the AI would be impartial and consistent.

I have no idea how well an AI can assess a paper against a rubric. Might be a complete waist of time, but if there were some teachers out there who wanted to do some tests, I would be interested in helping set up the tests and evaluating the results.


In discussing how to adapt teaching methods, we have also looked at evaluation by LLM. The most talked about concern now is the unreliability of LLM output. However, say that in the future, accuracy of LLMs improves to the point that it is no longer a problem. Would it then be good to have evaluation by LLM?

I would say generally not, for two reasons. First, the teacher needs to know how the student is developing. To get a thorough understanding takes working through the student's output, not just checking a summary score. Second, the teacher needs to provide selective feedback, to focus student attention on the most important areas needing development. This requires knowledge of the goals of the teacher and the developmental history of the student.

I won't argue that LLM evaluation could never be applied usefully. If the task to be evaluated is simple and the skills to be learned are straightforward, I imagine that it could benefit the students of some grossly overloaded teacher.


I know I would have had a blast finding ways to direct the model into giving me top scores by manipulating it through the submitted text. I think that without a bespoke model that has been vetted, is supervised, and is constrained, you are going to end up with some interesting results running classwork through a language model for grading.


Does pedagogy have established and verified methodology that will result in a completely PHYSICALLY developed student, if only the student does the EXERCISE as prescribed? No, but we still see the value in physical activity to promote healthy development.

> many things we were forced to do in school were not helpful at all

I've never had to do push-ups since leaving school. It was a completely useless skill to spend time on. Gym class should have focused on lifting bags of groceries or other marketable skill.


Things like gym classes are often justified based on "teaching healthy habits". That doesn't appear to work. You did stop the pushups. So did I.

Which doesn't mean I'm against physical activities in school, but educationally, it does appear to be a failure.


But at the time it did contribute at least somewhat to your physical condition, I am not an expert, but physical condition indicators like VO2 max seem to be the best predictors of intelligence. We're all physical beings at the end of the day


Sure, but you're creating a new goal post, which is entirely different than the stated justification.


You haven't proved that it made a difference or that doing something else wouldn't have been as or more effective, which is my point. You did it, so these students must do it, with no other rationale than that.


You have forgotten that pedagogy is based on science and research. That is why it is effective for the masses. Anecdotal evidence will never refute the result. Take learning to read, for example. While you can learn to read in a number of ways, some of which are quite unusual, such as memorising the whole picture book and its sound, research has clearly shown that using the phonics approach is the most effective. Or take maths. It's obvious that some people are good at maths, even if they don't seem to do much work. But research has shown time and time again that to be good at maths you need to practice, including doing homework.

So learning to recognise the phonics and blend them together may not be better for one pupil, but it is clearly better for most. This is what the curriculum and most teachers' classroom practice is all about.


"Although some studies have shown various gains in achievement (Marzano & Pickering, 2007), the relationship between academic achievement and homework is so unclear (Cooper & Valentine, 2001) that using research to definitively state that homework is effective in all contexts is presumptive."

American Secondary Education 45(2) Spring 2017 Examining Homework Bennett


To conclude for yourself, just compare the PISA result of the US and other developing country like VietNam or China, where they still keep the school tradition of homework and practice alive. And what do we see? Much higher PISA scores in math than the US. I refuse to believe some some folks have a "math gene" and other not.

Practice does not improve skills? You got to be kidding me! I didn't state that homework is effective in all contexts, but I firmly believe that practice is absolute necessary to improve any kind of skill. Some forms of practice is more effective than other in certain context. But you need practice to improve your skills. Otherwise, how do you propose to improve your skills? Dreaming?


About a decade ago, it was a hot fashion in Education schools to argue that homework did not promote skill development. I don't know if that's still the case, as fashions in Education can change abruptly. But consider what this position means. They are saying "practice does not improve skill", which goes completely against the past century or so of research in psychology.

If your field depends on underpowered studies run by people with marginal understanding of statistics, you can gather support for any absurd position.


> They are saying "practice does not improve skill", which goes completely against the past century or so of research in psychology.

You haven't made the argument that what they are practicing is valuable or effective.

I'm sure they get better at doing homework by doing a lot of homework, but do they develop any transferable skills?

It seems you are begging the question here.




Consider applying for YC's Spring batch! Applications are open till Feb 11.

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: