Threads like these are always frustrating, because as usual people (programmers in this case) freely air their opinions on how schools are broken with phrases like “we need to fix the system”.
As someone who studied pedagogy for years and quit out of immense frustration with exactly this — how broken the system is — I would encourage you to entertain the thought that maybe you, as a person who in almost all cases is not a teacher, nor someone with any experience apart from once having been a student, do not have a good understanding of how exactly this system should be fixed, and that it's not broken for fun but because there are some very difficult unresolved issues behind it.
People love to rant about how bad tests are. “We just study for the tests” and so on. And yet this complaint seems to be international. Curious, isn’t it, how all these systems seem to fail in the same way?
In the case of testing it’s because you choose to focus on the obviously bad thing (current state of testing) rather than the very complex and difficult question behind it: HOW do you measure knowledge? And when you decide how, how do you scale it?
These are very hard questions, and it's frustrating to read the phrase "we need to fix the system" because yes, obviously we do. But agreeing that things are bad isn't the hard part; input from people who have never worked in the field is of pretty limited value for resolving the hard part, and will do little more than annoy teachers even further.
So what’s the solution then? Well, maybe we should start by rolling back this common conception that when it comes to schools, everyone’s opinion matters an equal amount, and then listen to the teachers and academics.
Cynically, this will never happen, because reforms to battle educational issues in any democratic society usually take more than 5 election cycles to show obvious results (and when the bad results start stacking up, the current leaders will take the flak regardless).
> HOW do you measure knowledge? And when you decide how, how do you scale it?
I have experienced good tests and bad tests. I studied in France, where tests were open book with no multiple choice questions, only problems to solve. This approach scales badly and is a lot of work for the professor doing the grading, but it measures knowledge.
The problems were long, with little text beyond a description of the problem and maybe a few questions to guide the student along the path to solving it. We had either 3 or 4 hours to solve them.
Those tests worked very well. I'd come out from one of those tests having often learned something new.
I was an exchange student in the US, where tests involved multiple choice questions and were closed book, with questions built around rote memory. While I did feel that some of the education in the US was valuable and interesting, I hated those tests: they correlated less with comprehension of the subject matter and more with memorizing facts that are at best tangentially related to it. I still remember being shocked, in a computer graphics test, at being asked when OpenGL was first released, which companies were involved, and other completely useless knowledge.
What's interesting to me is that there are far fewer opportunities to cheat with the former tests, while the latter are pretty much made for cheating. So, imho, cheating is a symptom of bad tests.
I don't know if it's popular in France, but another very simple idea that eliminates cheating entirely is oral exams. They're still done a lot in Italy. I once literally inverted a binary tree on a whiteboard :)
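For anyone who hasn't seen the exercise, a minimal sketch of what "inverting a binary tree" amounts to. The `Node` and `invert` names are illustrative, not from any particular exam:

```python
# Illustrative sketch of the classic whiteboard exercise: invert (mirror) a
# binary tree by recursively swapping every node's children.

class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def invert(node):
    """Recursively swap the left and right subtrees of every node."""
    if node is None:  # base case: empty subtree
        return None
    node.left, node.right = invert(node.right), invert(node.left)
    return node
```

On a whiteboard this is a handful of lines; the difficulty is mostly the pressure of the setting, not the algorithm.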
IMO oral exams and open-ended answers are the kind of thing that really works better for the intended purpose, and everyone knows it. But people still prefer multiple choice because "it scales". The goal isn't simply measuring knowledge, it's doing so in an acceptable/shitty way, with (edit) limited resources.
When I was a TA we did something similar: we asked students to self-grade their own homework using a provided rubric, and then we spot checked 1/4 of them (chosen without replacement) to punish lying about what grade you deserve. We didn't punish a few disagreements over the rubric, but if it was blatant we checked that student's assignments every time in the future (and told them). I think if it was bad enough we could have reported them.
This saved a bunch of time on actually grading assignments and made us write a very clear and unambiguous rubric (which required a very clear homework) and also demonstrated to the students that grading was not arbitrary.
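The spot-check selection described above is just sampling without replacement; a minimal sketch, with hypothetical function and parameter names (not from any real grading system):

```python
import random

def pick_spot_checks(students, fraction=0.25, seed=None):
    """Pick a random subset of students (without replacement) to re-grade."""
    rng = random.Random(seed)  # seed only for reproducibility in testing
    k = max(1, round(len(students) * fraction))
    return rng.sample(students, k)  # sample() never repeats an element
```

The "without replacement" part matters: every spot-checked submission is a distinct student, so a quarter of the class is actually audited each round.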
Several universities [1] scale out personalized instruction and interactive grading by hiring students from previous cohorts and paying them either in course credit (taking a "course" that involves teaching students in the current cohort) or at a low rate (possibly subsidized by financial aid) comparable to other on-campus student jobs.
How do you justify the fact that only some of the students get the pleasure of an in-person grilling? Or, am I completely misunderstanding the process you're going to be using?
In my plan, each student is interviewed at least once. Ideally more than once by the same teacher, so the teacher can get to know them a little better, spot areas where the student needs more help, etc.
There's still a scaling problem, but I think it makes the ~200 student classes we have now more feasible than 100% autograding. I also like the other commenter's suggestion of coming back to interview certain students each time, if they need it.
Is this about pleasure or about measuring knowledge?
A lot of the stuff you learn, and the way you learn it, isn't necessarily pleasant, but frequently you still have to do it, and you only discover 20 years later why it was needed.
No, it's about why only a subset of students get singled out for extra scrutiny, literally arbitrarily, as the selection procedure itself is defined as "random sampling."
Random sampling is an effective method for inferring information about the larger population from what is measured in the smaller sample, to a certain degree of confidence based on the sample size and the known distribution of what is being measured. These concepts are fundamental to statistics.
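As an illustration of that confidence claim, a hedged sketch of the standard normal-approximation confidence interval for a proportion estimated from a random sample (function name is illustrative):

```python
import math

def proportion_ci(hits, n, z=1.96):
    """Normal-approximation confidence interval (95% for z=1.96) for the
    proportion hits/n observed in a random sample of size n."""
    p = hits / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - margin), min(1.0, p + margin)
```

For example, finding 10 blatant mismatches in a sample of 100 graded submissions suggests a class-wide rate of roughly 4-16%, which is the sense in which a small sample measures the larger population.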
In college, viva voce is a significant part of non-theory exams. It's another matter that it was not run well by many colleges, but I always loved those chit-chat sessions with some of the good professors. Some professors treat it like a boring Q&A, which reduces its effectiveness.
I think you might be who the top response is responding to. You seem to have inside knowledge that saving money is the top priority without considering any real-world resource constraints.
The top response is the one that brought the constraint of "scale" into this discussion, and that's what I'm addressing. Maybe you should bring your objections to them rather than to me. "Real-world resource constraints" is just a euphemism for "wanting to save money" in this case. I'll edit it to clarify that I mean the same.
And I'm not passing judgement on the choice made, nor saying the constraints aren't there, nor saying anyone should do anything different. I'm just pointing out that the scalability constraint will affect the test possibilities, which will affect the quality of the measurement. Feel free to disagree with this all you want.
EDIT: Also, I do happen to have some inside knowledge, having worked in higher education for about a decade, starting in the mid 00s. Coincidentally, most of my work was on cost-saving measures, designing a few algorithms that allowed universities to reduce their teacher headcount (first at a university, then at a software vendor), so yes, the #1 goal there was saving money. But I don't think having done this affects my answer, nor do I think that I deserve special treatment. I'm merely answering a chain of comments.
Wanting to save money also falls under the availability of staff trained to do this, and under considering whether the massive increase in expense and the diversion of people from other economic endeavors are worthwhile.
Good hunch, but in my experience, availability of trained staff was never really an issue in practice. Hiring well trained university faculty was always purely an economic problem. Universities often already have a surplus of trained faculty working in a highly reduced capacity. Especially in the last 10-15 years, as distance learning became commonplace, a lot of faculty were replaced by low-paid part-time quasi-teachers, who would be more than happy to be offered a permanent position. To further demonstrate that this is an economic problem: those quasi-teachers often have job titles other than "teacher", depending on the jurisdiction, in order to evade laws and the reach of (often very powerful) faculty unions.
Oral exams have a whole other bunch of issues.
Just looking at the professor's side, besides time, I imagine it would be very difficult to grade with the same yardstick an arrogant student, a dismissive one, a smelly one, an eloquent one, or even the first and the last one in the same session.
... a male student, a female student, an attractive student ...
And yes, this is actually a well-known problem in Italy - with (typically male) professors being routinely accused (and occasionally convicted) of favouring attractive (and typically female) students.
I don't agree with this. They have different failure modes, but I believe that in aggregate an oral exam affords the candidate a fairer shot, given the minimal assumption that the professor is acting in good faith.
If I say something imprecisely or if I make a non-fundamental mistake, an oral setting gives me the chance to correct myself and prove to the examiner that I have a strong grasp of the material regardless.
Written exams, especially multiple choice and closed-answer quizzes, reward people who regurgitate the notes; oral exams and written long-form open questions reward actual knowledge.
Of course the "better" methods require a greater time investment, and I can't really blame professors who choose not to employ them. But it's quite clearly a tradeoff.
> If I say something imprecisely or if I make a non-fundamental mistake, an oral setting gives me the chance to correct myself and prove to the examiner that I have a strong grasp of the material regardless
This just proves the point even further: in an oral context, the examiner's disposition toward you is much more significant than in a written one, which by definition implies that the oral exam cannot be fairer than the written one.
You yourself are saying that you "have the chance to correct yourself". This is either because you will self-correct on recognizing a specific (perhaps subconscious) expression or gesture from the examiner, or because the examiner will directly tell you that you are wrong. Both cases present ample opportunity for unfair discrimination. In the first case, perhaps a person is less skilled at reading people, or perhaps the examiner just has a better poker face. In the second case, you are now at the whim of the examiner to decide, based on your body language, whether "you are making a non-fundamental mistake and deserve a second chance" or "have no idea of the material and don't deserve a second chance". And, compared to the written exam, there is absolutely no record of the context that led the examiner to such a conclusion -- which is also kind of important, since evidently the written exam is also subject to some discrimination.
Nobody expects you to be 100% on point, it's just impossible; it's not like the spoken variant of a written exam. The kind of "correction" I mean is more along the lines of what would happen during a normal conversation. Imagine I was asked to write a recursive algorithm and I forgot the base case. It's not a fundamental mistake, but the professor might interject to make sure I actually know about termination, inductive sets, etc., which is actually great if you understand the material deeply, because it gives you a chance to prove that you actually just forgot.
Obviously this is assuming good faith by the examiner, but if you aren't willing to assume that, there aren't very many examination formats that are going to work very well.
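To make the forgotten-base-case example above concrete, a minimal sketch with a textbook recursive function (factorial here is just a stand-in for whatever the exam actually asks):

```python
def factorial(n):
    """Textbook recursion; the base case is what makes it terminate."""
    if n <= 1:  # forget this line and the recursion never stops
        return 1
    return n * factorial(n - 1)
```

An examiner who sees the base case missing can probe whether the student understands termination in general, which is exactly the kind of recoverable slip an oral format surfaces.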
It's not a question of good faith or not. The examiner may be showing completely unintentional bias. But the point is that the oral exam gives that bias a shitton more opportunities to play out. If you even try to say that the oral exam is just "a normal informal conversation" rather than something following a very strict protocol, you might as well give up any appearance of fairness. How much of a role bias would play in such a conversation is just off the scale.
It's not the examiner deciding whether "you deserve a second chance or not". In a normal oral exam everyone gets an "I don't think that's correct" or "please explain that to me" kind of response to a wrong answer. They don't silently scribble a note to subtract a point from your score or something like that.
How you deal with that is really where your score comes from. Because if you know what you're talking about you'll correct it and while doing so show that you know a lot of related things. While if you have no idea you can't guess yourself out of that type of question.
I don’t know. For example, in music examination, the outcomes change drastically if you blind the examiner from seeing the student or knowing their name. Unless you see something different in the world of music, I’d say the examination is happening at the same level of “good faith”ness.
How would you blind oral examination so that the examiner is unable to distinguish the student’s gender/race/identity?
> For example, in music examination, the outcomes change drastically if you blind the examiner from seeing the student or knowing their name.
FWIW, the study that "proved" that appears to have been a pretty bad study. So, in reality, no: people are not terribly prejudiced, and things don't change significantly when you blind the examination.
All students in a class cannot take an oral exam simultaneously. This means that either:
* everyone gets the same questions meaning later students can cheat by asking earlier students what was on the test, or
* everyone gets different questions, meaning much more effort to design the exam and a big risk that some students will get easier questions than others
Most of the oral tests I have taken had the questions posted by the lecturer before the exam. I don't understand why it would be a problem for students to say what the question was.
> I don't know if it's popular in France, but another very simple idea that eliminates cheating entirely is oral exams.
As an introvert, I am very happy not to have had too many oral exams during my studies (in France ;) ). I think I agree with you in principle, but to me that would have been torture.
You get used to it. I had the typical weekly oral exam for 2 years in the French "classe prépa", and it was torture at first. I can definitely say that it changed me and made me less stressed about these kinds of situations, even years later at work.
I was a student in France, and during those two years of high school I had a bunch of blackboard exams, and yeah, you kinda have to learn the material. Of course it also helps to be comfortable in such situations, but we had enough of them to get trained in that.
You had enough of them to get trained in that. And it might have taken you few enough of them to get comfortable that your grades weren't affected in a way that made you drop out. I had a friend in university who would just completely fall apart in any such situation, even when it wasn't for an exam, and even in a group presentation setting where it wasn't just him up there. Written exams were completely fine though. Did he not deserve to get a CS degree and just work in some company where he doesn't have to become a team lead or architect who would need to speak and present, and instead steadily and happily work in his corner, talk to his immediate peers, and crank out solutions?
I'm going to go out on a limb here and say that presenting to other humans is (a) a skill that most people can learn (to at least "acceptable" levels of proficiency) & (b) a skill that most people should learn, because it's a huge part of working in the field.
I understand it's incredibly uncomfortable.
I'm a pretty serious introvert and got the shakes and sweat dripping off my hands the first few times I did it. But with exposure and effort to self-improve, it's doable. I didn't like it, but I'm incredibly thankful I was forced to work on it.
Ah yes, the good old fallacy: "I could do it, so it's doable". It's doable by you. That doesn't mean it's doable by someone who is not you, even though they might be otherwise deserving.
It's like LeBron James saying "I learned to dunk, so anybody can dunk!" - but basketball is not just about dunking, and not everyone is LeBron.
Talking to other people is not dunking a basketball 3m into the air.
Frankly, I've been in oral exams; in Romania they're (were?) part of a national exam at the end of high school. You just have to practice.
If hundreds of thousands of high schoolers in a rather poor country could figure out how to do it (and generally not flunk due to the oral part), for sure university students can do it.
Anyone not able to do it will not really be able to pass any interview, persuade peers that their idea is good, etc.
I've been in plenty of oral exams too, in Italy. That doesn't mean I ever enjoyed them or felt they did me justice.
> Anyone not able to do it will not really be able to pass any interview, persuade peers that their idea is good, etc.
I strongly disagree there. Orals are a situation of complete knowledge and power imbalance between two parties. That is not the case when it comes to persuasion.
As for interviews - yeah, they are similar, and that's why interviews also are seen as very problematic. A lot of people who can be perfectly productive in day-to-day situations, simply don't do well in interviews. We should be striving to correct that, not accept it as inevitable.
I think the way those examinations were set up helped a lot in getting comfortable (or at least good enough): it was a weekly event, just three students and the teacher in one room, each student working on their own question(s); the teachers were more or less helpful, but most would guide us along, not leaving us stuck at our blackboard for the whole duration.
But if someone in those situations really can't do it, they'd have to switch to a course/class without any oral examination to get their degree. Still, I think it's way better to learn this as a student than as a professional (and yes, like the sibling comment, I think most people _can_ learn to an acceptable degree).
> work in his corner, talk to his immediate peers and crank out solutions
I think you should quote more of that sentence and then I can say that yes, definitely these do exist:
> where he doesn't have to become a team lead or architect where he'd need to speak and present and instead steadily and happily work in his corner, talk to his immediate peers and crank out solutions?
Yes, companies exist which do not push you out just because you have found the sweet spot of what you can do and are OK with. Of course we're not talking FAANG here, and in general I would assume that the HN clientele is skewed towards working in companies where this is not possible. However, I can tell you that way back in the past I personally worked at companies where I met many such employees who had been there for quite some time.
The big thing here being "talk to his immediate peers". The guy I was describing was completely fine working with us, his friends. Put him in front of an audience and he's got a problem. Of course it'd be hard to get a job in the first place, but at least back then a lot of places existed where no coding (neither take-home nor whiteboard) was part of the hiring process. Of course you won't make that guy a consultant at Accenture; he's gonna fall apart.
There's only so many issues someone can have until people in general will decide to give them a "fuck off, I don't care" treatment.
You don't have that reaction to verbal communication with other people, but I'm sure if we dug far enough you would have the same reaction to something else that other people think is acceptable.
Just how accommodating should the standard test be?
If the answer is "infinitely", I think you won't find any test that satisfies it
I studied engineering in Italy and all my exams were both written (with exercises, multiple choice didn't exist at all) and oral. No way you could cheat or not engage with the materials.
After a class on data structures and algorithms, a white board interview asking you to invert a binary tree is very different from the same interview when you apply for a job.
The only thing they have in common is "assessing". An exam for a course seeks to assess mastery of the subject matter of the course. An interview for a job seeks to assess skills / aptitude for a particular job.
This.
Moreover, an exam for a course is, to some extent, an assessment on how the course was delivered. And an interview for a job has a much larger scope.
I had an electrical engineering final as an in person oral exam. One question. One hour to solve on whiteboard. It was a hard class to begin with and I got a hard question. I did well, but definitely expected to fail.
Totally agree.
I might be a bit partial here, because I tend to underperform on multiple choice tests due to overthinking, but I really have the impression that open ended questions test knowledge much better and make it more difficult to cheat.
Besides that, and having almost nothing to do with cheating, another good thing in the French system is continuous grading: labs were graded, projects were graded, small intermediate tests were graded, so you really do not study for just the exam (actually, often you do not study at all for the exam).
(beware: my experience is limited to a single grande école I attended).
I went to INSA in the early aughts and we didn't really have continuous grading, labs (TPs) were graded but the biggest part of the grades (les partiels - exam week) happened twice a year (or 4 times a year during the first two years of prépa intégré).
I do know that since then they've moved to a continuous grading system. I'm not sure if that's the same with other grandes écoles, but I do know that my friends in other grandes écoles had a similar system of 2-4 partiels a year.
I'm currently grading an open book test as you describe. It turns out that someone put their attempt at answers on chegg.com shortly after I posted the test. The temptation to use chegg is too great for students to resist. When chegg has the wrong solution (which is often the case), students will doubt themselves and will go with the wrong chegg answer.
To be clear, the only goal of chegg.com is to help students cheat. The world would be a better place if chegg and its copies did not exist.
My solution to this is to use version control and have them record an explanation of their work. If they copy from chegg they also have to forge a commit history, as well as explain the code line by line. I’d like to see them do that without learning anything.
I suspect that a "certificate of course completion" (or, if you prefer, "a course grade") does not actually require comparing individuals A and B.
It does, however, require gauging that any individual X who has taken the course has acquired enough knowledge to be considered as "having passed".
Anything beyond "pass/fail" is merely trying to stack-rank students and impose unneeded competition. But it is good for the gamification of knowledge acquisition, so perhaps not entirely bad.
Yeah, I came from a UK undergrad to US grad school and was shocked to see that even some advanced undergrad classes, with grad students in them, were tested by infantile multiple choice questions (this was at Harvard). It almost makes one wonder whether the US dominates academia to the extent it does because of the foreign influx.
> I was an exchange student in the US, tests involved multiple choice questions, they were closed books with questions around rote memory.
As a US citizen, I can say many of my tests were open book, essay style, especially once we got to college.
In public school, however, there were lots of "standardized" multiple choice tests that were used to grade the school. Some of those tests also included an essay portion.
Teachers in the US aren't paid for time spent grading; they typically do it at home on their own time, hence very few essay-style tests.
I never gave a fuck about grades. For some courses I had below average marks, for others I was the best.
I studied because I wanted to genuinely understand how things work, how I can solve problems, and how, beginning with one idea, I can extend it or come up with an entirely different one.
And I know I will get downvoted for saying this, but for me, programming without a solid understanding of the CS background and of how computers work would make me just a code monkey, able only to do what I saw in tutorials and to copy paste SO answers without understanding them. Which can be fine; lower level work is ok and highly needed. But it wouldn't make me as good as someone who knows his stuff from a to z.
I hate it when I hear someone considering himself a programmer after he modified a WordPress theme or did a 3 weeks boot camp.
Why should this field be held to much lower standards than medicine, physics, math, or chemistry?
I never heard someone bragging that he is a doctor after watching YouTube videos, which happens often with writing and architecting software.
> I studied because I wanted to genuinely understand how things work and how I can solve problems and how, beginning with one idea I can extend it or come up with an entirely different idea.
> programming without having a solid understanding of CS background and how computers work it would make me just a code monkey, able just to do what I saw in tutorials and copy pasting SO answers without understanding them
Other people genuinely want to understand how things work, and getting a CS degree is not the only way to get there.
> Other people genuinely want to understand how things work, and getting a CS degree is not the only way to get there.
Yes, except ...
University courses are always going to have a mixture of people with different motives. One of those motives is a desire to go into research, while another is earning credentials to prove an understanding of the field before embarking on a career. Then there are the people who take the courses out of a pure desire to learn, either in a structured environment or alongside like-minded people, without pursuing it as a career path. That runs the gamut from needing university credentials, to the credentials being nice to have, to not needing the credentials at all. Yet, in each case, the desire to learn is genuine.
Then there are the people who are cheating the system by treating the university as a credential mill, the means to an end, where the end has little to do with furthering their understanding. Some of them are upfront about this. I remember one of my high school peers saying they chose their discipline based upon how much money they would earn and how well they would perform. Some choose to deceive themselves about this, largely by griping about how poor instruction is or how irrelevant the course material is while putting little effort into the courses. Whichever way you look at it, these people are problematic when they step over the line by cheating. They go from being passive leeches on the system to actively destructive forces.
There are genuine advantages to learning in a university environment. For some it is structure. For some it is being able to work with their peers. For some, it is having access to professionals in their chosen field. All of these can contribute to understanding, if the learner chooses to do so. I have known very few professors who would turn away someone who was genuinely interested in learning. For the most part, they were more inclined to side with the students who would benefit from learning in a university environment, but were struggling to keep up with the demands of those who would not!
> Then there are the people who are cheating the system by treating the university as a credential mill, the means to an end, where the end has little to do with furthering their understanding.
Is this really an issue? I eventually fell into that bucket. I went to university for EE since I figured I could teach myself CS easily, whereas learning EE without a lab would be harder. I quickly realized I hate a very large portion of EE (anything outside semiconductors), and for most of my courses I did the bare minimum to get an A with minimal understanding.
> They go from being passive leeches on the system to actively destructive forces.
I really don't see how that makes me a passive leech. I agree that cheating is actively harmful to the peers who don't cheat, but someone who doesn't engage is a net gain to the others in my book. By not attending any optional tutorials/office hours, they give their peers more focus time with the professor/TA.
To give the most trivial example, the requirement for assessment puts a load onto the course staff. While a correct answer presented in the conventional way is typically easy to assess, an incorrect answer, or an answer presented in a non-conventional way, is much more difficult to assess.
The people I worked with typically wanted to see students succeed. It went well beyond the mechanics of delivering instruction and assessing work. They considered what they were doing and put time into modifying their practices when students did not appear to be engaged. For a handful of students, it would be successful. For the vast majority, it would flop, since the disinterested ones were rarely receptive. Some cases may have been similar to yours, where it sounds like you were focusing on other subjects. Some students appeared to be, ahem, more interested in the non-academic merits of university life. It is difficult to tell what the breakdown was, since those students were always the most distant. But either way, those passive students were a drain. They simply weren't as much of a drain as the cheaters.
Most of what people learn is from the day-to-day, not what they are actually studying. What does that tell you? Instead of the ad hominem about 'non-academic merits of university life', perhaps consider that they are not in fact being instructed properly. This makes sense because graduate school is such a grind for personal research instead of actually educating other people.
Teachers need to design their programs around actual learning principles like: Deep Processing, Chunking, Building Associations, Dual Coding, Deliberate Practice.
IMO even CS degrees aren't worth much these days. I know too many CS graduates who can't grasp the basics, and their work suffers from it. For a recent example (which I hope doesn't get picked apart): even with help they can't guesstimate the complexity of simple algorithms they write.
Genuine interest is a requirement, a degree isn't. Anyone wanting to measure knowledge is even more fucked than we assume.
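As a concrete instance of the kind of guesstimate meant above, a small sketch contrasting two solutions to the same made-up problem (do any two numbers in a list sum to a target?) with different complexities:

```python
def has_pair_quadratic(nums, target):
    """Check all pairs: O(n^2) time, O(1) extra space."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return True
    return False

def has_pair_linear(nums, target):
    """One pass with a set of seen values: O(n) time, O(n) extra space."""
    seen = set()
    for x in nums:
        if target - x in seen:
            return True
        seen.add(x)
    return False
```

Recognizing that the first is quadratic (nested loops over the same input) and the second linear (one pass, constant-time set lookups) is the level of guesstimate at issue here.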
> Other people genuinely want to understand how things work, and getting a CS degree is not the only way to get there.
It's not the only way, but it is certainly a good way (at least for some).
In my case, I didn't really know what I didn't know. Before studying CS, I had this vague idea of programming as an activity I liked, but it never really "clicked" in the larger sense, as it was woven with magic that I didn't understand at the time. We make programs using Code::Blocks? Well, who made Code::Blocks, and how did they make Code::Blocks without using Code::Blocks?
After studying CS, it just felt like all those mysteries disappeared. Everything made sense, connected into a coherent whole by various mathematical links.
Of course, there's still a lot of things I don't know, and even more that I don't know I don't know. Every now and then, I run into a new concept that teaches me things that I've kinda wondered about, but couldn't put properly into words. Reading about such topics, and seeing the journey unfold that another person took in order to discover what I'm getting on a silver platter, is one of the most rewarding things in the world.
Even in medicine, the first triage will probably be done by a nurse, then a doctor, finally a specialist.
You don't need a CS PhD for most of the work done with computers and it would be unrealistic and uneconomical to require such a high standard everywhere.
People make a good living customizing WordPress sites and the buyers get good value from it.
>There are markets for a wide range of abilities.
>Even in medicine, the first triage will probably be done by a nurse, then a doctor, finally a specialist.
Then why require long time in school and residency for doctors? A boot camp should be enough.
>You don't need a CS PhD for most of the work done with computers and it would be unrealistic and uneconomical to require such a high standard everywhere.
No, you don't need any degree to use Excel or MS Word or to modify WordPress themes. But you should need a bachelor's degree if you want to be a high-level specialist, programmer or architect.
>People make a good living customizing WordPress sites and the buyers get good value from it.
Likewise, you need a proper education to be a civil engineer or an architect. You do not need a degree to lay bricks or pour concrete.
I am asking for degrees for the higher levels of IT field, not for the people who modify WordPress themes, whom I am sure do a great job and are highly needed but are not exactly exponents of high level work in this field.
> But you should need a bachelor's degree if you want to be a high-level specialist, programmer or architect
I'd like to ask, how are any of these specialties technically related to a bachelors degree?
One does not learn how to be a successful software architect in a semester or two of university. Neither does a bachelors degree make you anything more than a beginner in any specialized topic. While the definition of what's a "good" programmer is up for discussion, universities definitely do not produce them in any dependable capacity.
From my personal experience, at this point in time, a major part of the students that complete a bachelor's degree are in it just because programming is viewed as a well-paying job, and are about as worthless as a person who completed a 6-month bootcamp. The only difference being that the person with bootcamp experience might actually end up with an exact match of knowledge an employer wants, which hasn't had time to slowly evaporate over 4 years of "study".
Lastly to bring up another one of your points, I'd argue that learning from YouTube videos is not the same as going to a bootcamp. I do not view either highly, but I'd say that learning on your own deserves some level of commendation as it's the most crucial skill in a field that's constantly shifting and changing.
There was an expression at my school, in response to exactly what you're questioning.
"We train you for the last job you'll ever have, not the first."
The intent of a well-rounded bachelor's education isn't to be able to walk through the JavaScript library du jour from memory, but to have at least a base level of understanding how everything adjacent to the thing you're doing works.
> "We train you for the last job you'll ever have, not the first."
That's a bit funny for Computer Science since, even though I don't have numbers, I expect the average graduate now to work until they're 65+, and the vast majority will probably be out of the field of direct software development by the time they're 45 (burnout, people management, program management, project management, executive suite, etc).
>One does not learn how to be a successful software architect in a semester or two of university. Neither does a bachelors degree make you anything more than a beginner in any specialized topic.
That's entirely true. But if you didn't waste your time in school, you know the fundamentals, and you have the necessary background to proceed further and become good or excellent.
> that learning on your own deserves some level of commendation as it's the most crucial skill in a field that's constantly shifting and changing
I agree. You have to continuously learn. But having a solid grasp of the fundamentals and understanding how things work will just help you on your path of future learning.
In University you still learn by yourself, but under supervision.
> I am asking for degrees for the higher levels of IT field, not for the people who modify WordPress themes, whom I am sure do a great job and are highly needed but are not exactly exponents of high level work in this field.
The IT field is multi-dimensional: a person who is building a 3D engine is not the same person who is going to set up the backend system for your bank or write your kernel drivers. If you put everything in one hat, you are either going to teach too much, or not enough in the field that the student will end up pursuing.
Why do you believe the education sector would be a worse place if we had 3D engineering, backend engineering, mobile engineering and Wordpress theme development as separate fields? I would enjoy it, when hiring people, if there were more specific credentials for the role I want to hire for.
I agree with the general premise, but the job market is insane enough to see this more as trying to cure the symptoms than treating the cause.
3D engineering and backend engineering are very different, but backend engineering is generally easier than 3D engineering. Meanwhile, mobile engineering and backend engineering are far closer related. If one argues mobile engineering and backend engineering are different enough to warrant separate majors, you might as well argue "console/pc game development" and "mobile game development" do the same. You end up with so many ways to split hairs you're really just appeasing hiring managers being lazy.
To put into perspective the absolute insanity, if it were up to hiring managers, we'd have a university trajectory "Bachelor Java backend developer", "Bachelor .NET backend developer", "Master of Microservices", etc. which are obsolete within 2 years and catching up is left entirely to the individual. The field changes way too rapidly while also denying the similarities between different aspects and the ability to learn most things as long as someone can function as a specialist (and most places have a specialist already). Current courses may be too generalist, but at the very least they acknowledge almost every field in CS is effectively still data creation, modification and storage, while strategizing around physical limitations.
I believe that the cause is the following:
IT has evolved far too quickly for education to pick up and education itself is resistant to change for both good and bad reasons, we cannot change the system every year and expect grades to be comparable.
I do believe that we could create specializations that are not too specific yet still useful. The main goal is to get rid of subjects that the student will likely never encounter. I mean, we could start teaching history in Computer Science because maybe you'll program the next Age of Empires, yet we agree that the likelihood of that is so small we can round it down to 0. My issue is that we don't apply this check to all the current subjects, so we end up wasting people's time teaching them stuff they will never use.
> To put into perspective the absolute insanity, if it were up to hiring managers, we'd have a university trajectory "Bachelor Java backend developer", "Bachelor .NET backend developer", "Master of Microservices", etc.
You don't need "Bachelor .NET backend developer", you just need "Backend Developer", someone who has a good knowledge of one stack can easily migrate to another in the same field.
> which are obsolete within 2 years and catching up is left entirely to the individual
- MVC was invented in the 70s, still useful, it's been over 50 years and counting
- SQL also appeared around the 70s
- OOP appeared in the 50s, that's over 70 years and counting
If you know MVC, you can do MVVM.
If you can handle MySQL it's doubtful that you will have trouble with MongoDB.
I'm not implying that there were no changes and you don't need to keep yourself up to date, I'm saying that there are technologies and concepts that have longevity.
>The IT field is multi-dimensional: a person who is building a 3D engine is not the same person who is going to set up the backend system for your bank or write your kernel drivers. If you put everything in one hat, you are either going to teach too much, or not enough in the field that the student will end up pursuing.
I think that for a bachelor's degree, things are good as they are. Students are better off learning CS fundamentals.
Learning the framework or the language du jour is easy to do by yourself. Frameworks, libraries, tools, languages come and go. Fundamental concepts will stay.
I did a master in Web Development, so there is some specialization. Others did masters in Data Mining, Machine Learning, Database Technology, Bioinformatics.
I plan to do a PhD related to using ML in Web applications, so there can be even more specialization.
>I would enjoy it, when hiring people, if there were more specific credentials for the role I want to hire for.
There are plenty of credentials out there. You just mostly don't get them from universities because universities are not in general in the business of granting trade credentials.
> Then why require long time in school and residency for doctors? A boot camp should be enough.
Someone else answered the question: Nurses do not require a long time. In some (many?) states you can become a nurse with a 2-year associate's degree, and career outcome/pay is correlated with experience, not the degree.
And nurses aren't even at the end of the spectrum. You have LPNs, etc.
But the reason I commented: This notion that it takes so many years of school + residency is mostly a US/Canada thing. In many/most countries, you go to medical school right after high school - it is typically a 5 year program.
Sorry, but I’m gonna be very blunt. I think you sound pretentious as fuck.
> I studied because I wanted to genuinely understand how things work
I don’t think you do.
Genuine curiosity is forged in one’s own mind. It is not something that can be bounded, repackaged as a curriculum, and sold in university. It’s like ether: it’s everywhere and can be captured by anyone, through multiple means.
University degrees for any professions are useless. Even in medicine! There are shit doctors and good doctors. Most people here would’ve run through a couple of them before picking one. I’ve been with my current doctor for 10 years now, because they are really good, empathic, and teach me rather than just pushing pills.
Software in my humble opinion works the same way. I care more about what someone does with the tools they have, rather than them being made of wood or metal or gold
>University degrees for any professions are useless. Even in medicine!
I think there is a fundamental distinction between the professions of medicine and law, in that they have licensing boards that are a means of ensuring a standardized minimum amount of competency. Computer science does not.
Despite the fact that many call themselves "software engineers", they are not engineers in the legal sense of the word in US, unless they also have an engineering license. The point of these licenses is, in part, to protect society in professions where they are expected to ethically serve in the public good. One of the problems with CS degrees in the past is that there is no standardized curriculum, so one CS student may have had zero semesters of calculus and another required to take 4. The standardization is what helps pull structure from the ether. That structure is compromised when people cheat.
> I think there is a fundamental distinction between the professions of medicine and law, in that they have licensing boards that are a means of ensuring a standardized minimum amount of competency.
You might want to read the transcript of this This American Life episode:
I'm certainly not claiming regulatory boards are perfect. Far from it. But I do maintain that some quality control and accountability is preferable to the alternative of no quality control and accountability.
We have slightly different opinions, but I thank you for taking the time to chat, and trusting that we can do so nicely.
The boards are there to provide society comfort, but they enforce, as you said, the minimum. Some things just cannot be measured. This is very true in medicine, as it contains a human aspect, as well as an ethics aspect (I make more money if I see more people and give each less time).
When I was a teenager, I was losing hair due to Alopecia. My doctor at the time, who barely made sense (both of us were ESL) decided to put me on a course of prescription Iron pills. I was pooping black haha. Only later, I was told that I shouldn’t be taking it and that it was prescribed to me by accident by her because she had the file of another patient. Their last name was my first name, and they were pregnant female, I am a male.
The same doctor injected my mom with some drug and as she did, she said “oh shit” and “OMG” and decided not to tell my mom what it was. She tossed the bottle in hazardous waste box so my mom could not find out what it was. My dad was furious and made a scene as he and my mom naturally got worried. We went to this board and they said they did nothing wrong, and my parents were making a scene, and that we should find another doctor. So much for protecting my best interests and holding a bar.
These boards are mafia; another high profile thread about this is in HN right now. The boards are there to provide a facade of credibility.
> The point of these licenses is, in part, to protect society in professions where they are expected to ethically serve in the public good.
I don’t want to be called an engineer and opted out of a license because I think most engineers are doing the exact opposite. Working at companies that knowingly continue operating when we know it’s causing depression? Collecting data on users without them knowing?
Regarding equality of curriculum and standardization, I hear you. None of this is stuff we can ONLY get in school. I think the interview questions we all conduct at our jobs, or give when applying are doing just that. Checking minimum competency; tangentially I much prefer take home tests or something of that nature.
After this, I tend to think learning should be like gardening. Not all gardens are the same and they have different needs. You may need to learn more calculus if you are in robotics, but not if you are working on something really far from that. Another example, you might really need to learn about algorithms and databases if your job/interests require it.
Yeah, you're right. I think in the context of your story, the board seems like they did not do you justice. I can say from my experience with engineering boards, they seem to be more transparent and will publish their decisions and the underlying opinions on how they reached their conclusions. I think that added transparency goes a long way to mitigate the scenario when a regulating body ends up serving as a mechanism to avoid accountability for the group they are intended to regulate.
I think I agree with your gardening analogy. It seems to me that the issue is often rooted in the hiring process. If a company was able to adequately assess the skills, they wouldn't need to rely on credentials, period. I think credentials become a lazy shortcut in many ways. Sometimes I think this is borne from the fact that many hiring decisions are made by people who are too far removed from the work being hired for, and thus need some pragmatic shortcut. It's easier for HR to say "you don't have the right degree" than for them to read and understand your resume to conclude "you don't have the right skills". The first is binary, the latter requires a lot of nuance.
Many engineers in the US do not have PEs even when it's an option. If you're not having to sign off on regulatory agency-related documents, potentially doing expert witness-related work, etc. there's no need for it.
I started the process at one point in mechanical engineering (engineer in training exam) but moved on to a different type of job so there was never a reason to get the certification.
I understand most engineers work under industry exemptions. The ones that do not will have to work under a PE or be a PE themselves. In those cases, the work has been determined important enough to require the additional accountability of a PE stamp.
I'm not a big fan of credentials, but I understand when credentials become a proxy for something valuable. In some cases, the value of a PE is accountability and the legal authority for a PE to push back when they are being asked to do something unethical/unprofessional. I think one of the central issues of this thread is that the credential of a college degree has become so watered down that it has lost a lot of value.
I doubt it, but I downvoted you for saying that. No comment anywhere has ever been improved by adding "I know I will get downvoted for saying this, but". Please don't do it.
(It wasn't passive-aggressive, it was just aggressive.)
The "I know I'm going to get downvoted for this, but ..." thing is annoyingly common, and unfortunately it's common for a reason: a lot of the time it works: it lets you frame yourself as a victim without ever needing to be victimized.
And, strictly as a matter of fact, it's almost always false. I just did a search for HN comments saying "I know I|this will get|be downvoted" and checked out the first ten I found. Only one of them was net downvoted (which that particular one richly deserved), even though several of them claimed not just to know they would be downvoted but to know that they would be downvoted "into oblivion" or some such phrasing.
So, why do I care? Because (1) these things just add noise and (2) I think that on net they get unfairly upvoted; I want to discourage #1 and compensate for #2. (Also, most of the time comments that say "I know I'll get downvoted..." are in fact bad ones, but that isn't the point here. In this case, the comment itself was pretty reasonable. It just would have been better without the look-how-brave-I-am posturing.)
I've got to say I completely agree. "I know I'm going to get downvoted for this" is for me a cue to downvote it. Make it a self-fulfilling prophesy. I don't downvote a lot, but this gets a consistent downvote from me.
Every comment that has that line can be improved by leaving it out.
The cue is insecurity. You get triggered by his insecurity which you unconsciously see in yourself and don't like. This "not liking it" feeling is the "cue" you refer to.
People say "you'll probably disagree" as a defense mechanism. They preemptively expect a rejection, and make it known, in order to make it hurt less. Works in a similar way as self-deprecating humor. "You can't hurt me, if i hurt myself first." "You can't reject me, if I reject myself/you first."
I myself got triggered by this thread and its display of emotional immaturity, because I have some of it myself, and I dislike it with a passion.
I've noted that HN is a forum full of emotionally immature people that are usually polite in the way they show it. This little thread is a perfect example of it. Very off putting, still the threads are sometimes interesting, if we can accept this fact and try to look at the discussion itself.
> You get triggered by his insecurity which you unconsciously see in yourself and don't like.
I think you're projecting something here. I'm not triggered by his insecurity and don't see it in myself. I just think begging for votes doesn't belong here, and begging for votes through reverse-psychology doesn't either. Let the content of your comment stand on its own merits.
In retrospect, I now know that I unconsciously sabotaged myself. I knew the best way to pass subjects was to have a basic understanding of the theory and then practice a lot of exam question examples, but I just couldn't get myself to do that until I had in fact understood the theory in depth.
That led to my grades ending up exactly average, but also at one point I challenged myself and got the highest marks in the most difficult course in my degree. Everyone, especially those who were normally top of the class, were like "WTF did you do??" lol
Ha, this led me to fail school the first time I tried it. Now 10 years later I’m back in school with a 4.0 because I’ve learned to navigate the system and I know what the school wants. I still balance diving deep into what I’m interested in, but I don’t let it get in the way of the grind.
But you don’t need a degree to understand the fundamentals. Just because you don’t have a degree from an institution doesn’t mean you don’t count as a programmer. You can learn all of these things at your own pace even if you started by learning how to modify Wordpress themes or got into the field after taking a boot camp. My point is (from the original comment) that grading knowledge and the ability to produce quality work is a very hard thing to achieve. I would even go further and question whether it’s even necessary. For example, you’re likely not getting a job straight after college without facing the company’s interview process. And every company has its own way. So even if you were to solve the issue in academics, it’s likely to not reflect on the student’s ability to get a job and perform properly.
>But you don’t need a degree to understand the fundamentals. Just because you don’t have a degree from an institution doesn’t mean you don’t count as a programmer. You can learn all of these things at your own pace even if you started by learning how to modify Wordpress themes or got into the field after taking a boot camp.
We can argue this about any other field.
>For example you’re likely not getting a job straight after college without facing the company’s interview process.
We should have a bare minimum standard, not a maximum.
But colleges and universities should be good enough that graduating one means you are in a proper position and have proper knowledge and abilities to start a career. Since that is not always the case, companies do still organize their own processes.
By not graduating from some recognized form of higher education in the field, you don't prove to your future employers that you might be good at what they need. You just prove that you weren't willing to do the work for a few years and that you might not have the knowledge. Some won't care, as their work is simple enough and they might train you on the job; some will test you harder; and some will not get you past screenings.
> Some won't care as their work is simple enough and they might train you on the job, some will test you harder and some will not get you past screenings.
Another option is that the candidate has relevant work experience instead of a university degree. This is the case for a lot of candidates I’ve seen. There are a lot of factors that make Software easier to get into. For example, in physics you need to have foundational knowledge that was created 100+ years ago. Whereas software frameworks go out of fashion every 10 years or so. Of course there are many important CS foundations and design patterns, but I believe those can be absorbed by working along other experienced engineers
Well it's probably down to the harm done if the standards are lowered vs. the gain.
Incompetent doctor? People die. Incompetent chemist? People die, or at least there's substantial material damage.
When it comes to mathematicians and physicists they only ever have any real impact when they roll up their sleeves, open matlab or R and turn their theoretical work into something practical. Does that make them programmers? Probably I guess.
Anyway, as for us programmers, there are very few jobs where a poorly written program will cause anyone any harm, especially since it can be reviewed, tested, and corrected before being used for real, unlike a doctor, who must use their skills on the fly and get it right the first time, every time. So the bar for entry is obviously much lower, and lowering standards doesn't do much to increase harm.
Many, many doctors are completely incompetent, as in, they don't know anything. Yet not all of their patients die.
I have been a "standardized patient" at medical schools; students at the end of their education (after 6-8 years of learning) still don't know shit. And most pass. Maybe they learn on the job... but I doubt it.
Conversely my general practitioners are part of an organization where they are doing their residency. They all are competent, very caring, and effective.
I talked to one of their IT people who told me what a good place it was to work. And had multiple nurses say the same, one going on a 5 minute rant about what a good place it was to work. So that could be a significant factor.
You are completely wrong about that. If they passed their exams, they know a lot. They're no good in practice because they had little practice. Yes, we do learn most of our practical skills on the job. Medicine is very much a 'know-how' profession.
Yeah, it's an exaggeration. They certainly know some things, and some of them know a lot of things. But, they all had big holes in their knowledge (huge, gaping holes) and
1/ They weren't aware of it
2/ They were trained to hide them and appear to know everything about everything, because they're the experts. That's the scary part IMHO.
> They were trained to hide them and appear to know everything about everything, because they're the experts. That's the scary part IMHO.
You've got to realize that we can't really train healthcare workers to admit failure. Culturally, it's not admitted in any society I ever lived in. People get really angry really fast if you don't hide the gaps, as they feel you're subpar and they're being swindled.
> They weren't aware of it
I recently taught an undergrad course, and I must admit I was baffled by the lack of knowledge of the students, and also how little effort they put into their studies. Doctors who don't read books. That's _much_ more worrying, IMO. Grade inflation, and all that...
>Anyway, as for us programmers, there are very few jobs where a poorly written program will cause anyone any harm, especially since it can be reviewed, tested, and corrected before being used for real, unlike a doctor, who must use their skills on the fly and get it right the first time, every time.
That's as true for any scientific or engineering field, for architects, pianists, lawyers, painters and economists, as it is for people designing and writing software.
And yet, all those occupations are generally practiced by people with a degree, who did a lot of study and practice. No watching of YouTube videos, no 3-week boot camp will land you a job as a physicist, concert pianist, economist or architect.
It's not that we don't have a high bar in this field, it's that we don't have any bar at all. A programmer is a person who calls himself a programmer. Even car mechanics and construction workers are held to much higher standards than this.
"Even car mechanics and construction workers are held at much higher standards than this."
This is not universally true. Many of the best blue collar workers I've worked with had no formal training or certification; some have. A few trained and certified blue collar workers I've known have been mediocre at best.
The alumni of certifications, official training, and schools are only as good as the integrity of the institution and of the alum.
More broadly, people who hold computing to the standard of construction/architecture would likely be severely disappointed if they had a glimpse of how the latter is really done.
Until very recently the most skilled people in our field had no degree because such degrees didn’t exist when they started.
Car mechanics went through the same shift where learning how to fix cars was an on the job thing and many still don’t have a relevant degree. Construction work is an old profession, but still mostly an on the job thing outside of heavy equipment.
It has more to do with credentialing bodies holding legal power over who can practice. If you masquerade as a pediatrician and tell every parent their kid needs an hour of exercise and fruits and vegetables, just letting the nurse give the injections, you'll probably do fine in 99999/100000 cases. But that one time you'll miss childhood leukemia because you don't know what you don't know. Likewise, you write SQL injection code and for maybe 99999/100000 visitors you'll be fine. Until the first malicious bot destroys your company's primary DB and you lose hundreds of thousands of dollars in data, and trash your reputation for getting future contracts due to data security.
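The SQL injection failure mode mentioned above can be sketched in a few lines; this is a hypothetical toy example using an in-memory SQLite database (table and function names are invented for illustration):

```python
import sqlite3

# Toy in-memory database with one user row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")
conn.commit()

def find_user_unsafe(name):
    # Vulnerable: user input is spliced directly into the query string.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # Parameterized: the driver treats the value strictly as data.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

# A classic payload turns the WHERE clause into a tautology.
payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # leaks every row
print(find_user_safe(payload))    # matches nothing
```

The unsafe version lets attacker-controlled input rewrite the query itself; the one-character fix (a `?` placeholder) is exactly the kind of fundamental that "you don't know what you don't know" applies to.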
Wrote a program that helps find patients for donor organs. Make a mistake people die, luckily first real life test 6 people successfully received an organ.
Cause an outage (or write a bug that causes an outage) for, like, hospital software, or software that distributes medical supplies during a hurricane, or distributes vaccines, and ... people die. Maybe not directly, because it's not your hand with a scalpel slipping, but critical things rely on software.
The economic waste that comes from bad code is death from a thousand cuts. Poor reuse and composability resulting in duplicated work, corner cases resulting in cascading errors, seconds of lag adding up to days or weeks of wasted time - years if at Google scale.
Put this way, the more software there is, the better programmers we want. But the cost itself is typically externalized over the consumer base and amortized over the lifecycle of these products. Further, the perception of software developers as a cost centre first and foremost is sustained. You surpass these problems by being skilled.
> I never heard someone bragging that he is a doctor after watching YouTube videos
Yet real doctors actually believe the bullshit they get from medical reps when it comes to prescribing actual drugs that go into patients' bodies. I will let you ponder how successful that strategy is.
Because most of what is needed out there doesn't require an equivalent of a doctor's degree.
The truth is that programming languages themselves have evolved far enough that knowing exactly what's running underneath the hood isn't needed anymore, outside of niche specialist cases. Most people don't even need to worry about seeing a single 'index out of range' issue, or worry about CPU cycles. And it's only going to become easier and easier.
I'd compare it to bricklaying. Yes, you need to use the correct formula for the cement you use, but figuring out that formula has already happened. For niche cases that require special cement, you go to the cement specialists that know the ins and outs of it.
Specialists in, for example, psychiatry don't need to understand how mitosis works, etc...
The same is also true in finance. People who model equity index volatility don't remember at all how to derive the equation for put-call parity.
In each of these fields there are people who study each of the fundamentals, and then there are people who do more routine "code monkey" work in a narrow area - think chiropractor or vanilla stock trade execution.
> Specialists in, for example, psychiatry don't need to understand how mitosis works, etc...
A psychiatrist has to obtain an MD degree before they can start to study their chosen specialty. There's a reason for that: before you can treat a psychiatric illness, you have to be able to eliminate all other possible causes for the condition. I for one would not want to be treated by a psychiatrist that couldn't distinguish bipolar disorder from brain cancer.
I'm not sure that this example is making the point you wanted to make. There's a reason we have doctors/pharmacists/physios and don't rely on chiropractic/homeopathy. It's because we want to get better.
By the time the bricklayers are there to start on the project, most of the time the choice of cement mixture is already made. For most projects, a standard cement mixture is used and a custom one isn't even needed.
When issues do arise during the project, an expert is brought in or consulted. Standard cement formulas might change over time, for varied reasons, but it's not the bricklayers who keep themselves busy with that.
I think you’re over-estimating the difficulty of the average programming job. The simple fact is we have great frameworks to work off of and building things from scratch is a waste of time and money for most business applications. Wordpress is a great jumping off point for like 75% of businesses. If you know how to write some custom theme code I’d call you a programmer. Doesn’t mean you’ll get a job in system-level design, but you’ll be able to pull a paycheck and sustain your life (and potentially support others). What exactly is wrong with that?
I have seen plenty of bad code monkeys who had high grades; the idea that the current 'high standards' give us better programmers is unfounded.
The issue is that modifying a Wordpress theme counts as a programming job just as much as optimizing a low-level 3D rendering pipeline or writing facial recognition software does. One of these is not like the others, and universities fail to realise this and just try to teach everything.
In my mind we would need to abolish the idea of a general programmer and move towards specialization.
> Why should this field be held to much lower standards than medicine, physics, math, chemistry?
It isn't; a good programmer is self-evident to good peers.
And schooling isn't the only way to get there. I know plenty of academically educated CS grads who aren't great programmers, not because they didn't do well in school (I have no way of knowing, but I assume they did), but because they lacked curiosity and interest in programming.
The reason why doctors have to jump through so many hoops is that the stakes are higher for failure.
While there are times when a software failure or poorly worded instructions can cause injury to others, those are exceptions to the rule. Generally speaking, the cost of failure in writing and software is lower than it is in medicine, and it makes sense not to gatekeep these industries behind theory, and rather just let results speak for themselves.
My position on this has been pretty controversial when I've shared it before, but I still think it's correct:
Measuring knowledge at scale is futile, harmful, and pointless. The fact that a lot of society has been arranged around the fiction that this is a feasible endeavor does not mean it has borne out in practice, and prioritizing assessment in this way has been gradually hollowing out most forms of pedagogy of their value while building an ever-expanding series of increasingly meaningless hoops for people to jump through to get what they actually need. We have deemed it necessary to create assessments to prop up the idea that education can be easily measured and should gate meaningful life outcomes for most people. Most if not all "cheating" behavior is either just a rational, strategic response to this situation, or a disconnect between how people actually solve problems (e.g. often collaborative and laser-focused on the part of the problem that drives the outcome, in this case the assessment) and some weird cultist notion of what it means for an individual to do it "correctly".
Effective pedagogy will never scale unless we get some really AGI-like technologies (I loved The Diamond Age as a kid, but A Young Lady's Illustrated Primer is, from the perspective of extant tech, a total pie-in-the-sky fantasy, illustrative of how meaningful teaching requires individualized approaches), and we see time and time again that teacher-to-student ratios, as well as particularly good individual teachers, are overwhelmingly the drivers of even the stupid metrics we are optimizing for.
In short, this whole system is broken because its fundamental premise is flawed.
What you are saying is not at all controversial, but it is incomplete, which is probably why you have received pushback in the past. Criticising the existing system is easy. Giving an alternative is harder. Implementing that alternative and showing that it's actually better on some metric is MUCH harder than that. But you have not even given an alternative!
The alternative is to treat higher learning like any other experience in life or on your CV/resume: you do it, you tell people you did it, then you either convince them that doing it imparted something useful on you or you don’t.
As someone who's hired plenty of people, I can say that exams and grades do not help one bit with the process, and you shouldn't pay any attention to them.
The only good use of exams I see is as a potential entry gate, administered by the place you're trying to impress, to get onto a course, be considered for a job, or be given a license to do something. As exit gates they're just noise.
Let us consider the proposal of keeping higher learning the same except that we don't do exams. What would happen?
For better or worse, whether somebody completed their degree does often factor into hiring decisions, so exam grades do indirectly factor into hiring decisions.
Having completed a degree signals some level of domain knowledge, conscientiousness, and intelligence.
Without exams you would have a 100% success rate (unless you introduced some other assessment mechanism),
so the signal would be gone; having completed a degree would only signal whether or not you were able to afford it financially.
Secondly, a large fraction of the students would lose motivation and not do anything by the second year.
Many would stop doing homework, stop doing any real studying except maybe superficial reading, and many would hardly come to class.
In fact, many students currently already do this until the first midterm, even though they know the midterm is coming.
A lot of students need the existence of exams to motivate themselves.
Not all students, but a lot.
These students don't want to lose motivation and waste time; many would probably regret not learning anything for several years.
Our monkey brains are not suited to motivating ourselves to do things with a >3 year time horizon.
Exams are a mechanism to motivate our monkey brains to put effort into studying.
I think if we consider professions that are important to our own lives, we do recognise the necessity of assessment. Would you prefer your doctor or nurse, accountant, electrician, or for that matter, teacher of your kids, to have come from a school that doesn't do exams or from a school that does?
If they are experienced, maybe it doesn't matter, but how many people are going to take a chance on a fresh graduate if they are from a program without any assessment, where the philosophy is "you do it, you tell people you did it, then you either convince them that doing it imparted something useful on you or you don’t"?
There are other ways to solve that problem, and some of them may even be better than the current system, but I'm not convinced that taking the current system and simply removing exams would work.
The evaluation isn't necessarily the problem, but I think assigning grades may be. It gamifies education and I think generally makes things worse.
I think it might be worth spreading some of the standards from medical schools to other programs. Don't assign a grade to a student; make everything pass/fail. Either you know the material or you do not. There's no honor roll or dean's list and no class rankings.
This is simply not the case. How well the person knows the material is more than yes/no. You'd be rounding the exam result to 1 bit and throwing away the extra bits of information. Maybe it's a good thing not to show that information to the student, but I'd like to see an argument for why the advantages of hiding that information from the students outweigh the disadvantages.
As for competitiveness in education...there are advantages and disadvantages to it. I'm not convinced that class rankings are a good idea, but I'm also not convinced that the optimal target for competitiveness is zero, to the point of not showing students their grades beyond pass/fail. Anecdotally I've seen the aim for zero competitiveness have perverse effects, where students instead start to compete on how lazy they can (appear to) be while still getting a pass, to show how smart they are.
Some of the best medical schools in the world have adopted pass / fail. Either you are good enough to be a doctor or you are not. That makes a lot of sense to me.
Harvard is one example in the US and McMaster is an example in Canada. Neither place has students competing to be most lazy.
What part of "its fundamental premise is flawed" is unclear? I don't propose an alternative because I don't believe the stated goals of the system are achievable or desirable. Also, if one believes something does more harm than good, an argument to stop doing it does not require an alternative.
Are you aware that there are grade-free and exam-free schools out there and that they have been operating for decades?
Measuring the effectiveness of school systems is difficult because of selection bias, but I'm sure you could find some attempts (e.g. PISA) if you went looking.
> HOW do you measure knowledge? And when you decide how, how do you scale it?
When I was in school, many moons ago (in France) there were no quizzes. Zero. "Tests" were either dissertations (for topics such as literature, history, etc.) or problems. Everything was done in class, in longhand.
There were no good or bad answers, even in math class, because what was evaluated was the ability to describe the problem, the approach, and the solution, and you got points for that even if the ultimate result was wrong.
"Cheating" was very difficult; copying what another student was writing was hard and not very effective, because unless you could reproduce their whole argument, just taking a sentence or two would not make sense.
This system didn't "scale" very well; in fact it didn't scale at all.
If you build a system that lets one person "teach" classes of hundreds of students and generate quizzes that can be instantly graded by a machine, then some (most?) students are going to try to game that system.
This is inevitable, and I'm not even sure it's a bad thing.
> "Cheating" was very difficult; copying what another student was writing was hard and not very effective, because unless you could reproduce their whole argument, just taking a sentence or two would not make sense.
In high school we were often given two (or more) sets of problems so we couldn't copy off each other, because people sitting one seat apart had different sets.
I remember at least one test where I wrote down the problems from both sets (they were verbally dictated by the teacher at the beginning of the test). Then I just solved both and passed the solutions to his problems to the classmate sitting behind me (he had asked me for this ahead of time).
In Poland cheating is frowned upon by teachers, and they tried to catch the cheaters, but there were no formal systems in place to report or excessively punish cheating (like in the USA).
Yes, although many of the students in the story weren’t interested enough in learning, had “low morals”, “no honor” and some apparently were scumbags, as a group they were somewhat efficiently solving the problem of passing the class… that’s not nothing!
In the real world the solutions to your problems can't be found online, or if they can, it's valid to search for them there (and lawyers will charge you a lot to do exactly that). Collectively searching for and distributing a solution is something young people are quite adept at (e.g. gaming wikis).
>> HOW do you measure knowledge? And when you decide how, how do you scale it?
This is what makes the problem intractable. Measuring knowledge takes time, lots of time, by a skilled person. That does not scale.
Since we need (want) scale, we necessarily have to use (ever weaker) proxies for measurement. And if there's one thing we do know, it's that you get exactly what you measure for.
Hence, the system is not broken - it's working exactly as intended. It's not "fixable" because there's nothing to fix (at this scale.)
Real learning happens either because a) the student is soaking up everything they possibly can using every resource offered, or
b) they've left college and are fortunate enough to be in a workplace where there are more knows than know-nots, and they take every opportunity to soak it in like a sponge.
College does not prepare people for the working world (and never will). It is operating exactly as it is designed to do.
So, the Leibniz argument? Our current system for educating citizens of all ages is already the best it can be, and any change or even reflection upon it is a waste of time.
You can't educate someone who is not ready to be educated. Those that get the most out of college are those that put the most in. This was true 1000 years ago, and is true now.
Yes, this system is the best [1] because access is open to all (which it historically wasn't). So those who want to go, can, and those who want to learn, can.
What probably needs to change is the understanding of what college is for. It's not to give you an education, it is to give you the opportunity for you to take an education for yourself.
[1] for some definition of best. Not all schools are created equal, nor all subjects, scale is in play here as well.
I somewhat agree about it being a chance for students to take an education for themselves, but there is also the issue of an institution offering a limited view of a subject like computer science. For example, some time ago, I'd estimate, mainstream OOP was taught everywhere, while there was almost no place teaching FP (this is changing slowly now). Even if you took every opportunity you had, you might not have a single teacher or lecturer who was familiar with it. You could only learn on your own, and for that you would not need the institution.
Teaching quality is not the same in all places. Teachers and lecturers are not the same everywhere.
Indeed not all schools, and not all subjects, are created equally. And your education is not limited to the specific subjects, or competencies of the school you happen to be at.
> any change or even reflection upon it is a waste of time.
That’s a bit extreme; I interpreted their view as, it’s hard to fix because of intractable issues, but it doesn’t mean we can’t have marginal improvements. Radical upheavals and revamps are sketchy.
In the case of testing, it very much can scale. Tests need to be based on long-form questions that test comprehensive knowledge. Open book, open notes, and hell, even open collaboration up to some limit.
If a test is already graded on partial credit, which in the field of engineering at least most are, then it's no harder to grade than an equivalent test that has fewer but longer questions.
This obviously doesn't translate to multiple-choice tests, where there is no partial credit, but at least in engineering those don't really exist outside of first year and maybe one or two second-year classes. And honestly, every intuition tells me that the classes I remember doing no-partial-credit multiple choice shouldn't have been doing so in the first place.
Maths classes like algebra, precalc, calculus, statistics, and linear algebra should by no means be using no-partial-credit exams. That defeats the entire purpose of the classes as those classes are to teach techniques rather than any particular raw knowledge.
Same for the introductory hard sciences like chemistry and physics.
And for the ability to handle those more "bespoke" exams, we really need to be asking the question of why certain students are taking certain classes. Many programs have you take a class knowing that only maybe 30% of that will be relevant to your degree.
Instead of funneling all the students through a standard "maths education" class, maybe programs would be better served by offering an "X degree's maths 1-3", or even simply breaking up maths classes into smaller chunks where you are scheduled to go to teacher X for one specific field up to week A, then teacher Y for another unrelated maths field up to week B, and teacher Z until the end of the semester. In-major classes need not do this, but general pre-req classes could benefit from being shortened and split up through the semester into succinct fields of knowledge, so that maths or physics departments aren't unnecessarily burdened by students who will never once apply the knowledge possibly learned in that class.
-------------
The solution to testing students in a way that they can't cheat is to simply design tests that require students to apply their knowledge as if in the real world. No artificial handicaps and at most checks should be made for obviously plagiarized solutions. If that's not a viable testing mechanism, it's probably worth asking why and considering reworking the course or program.
The solution to students not wanting to absorb knowledge is to stop forcing students to learn topics & techniques they'll never use because maybe some X<25% of them will. Instead split up courses into smaller chunks that can be picked and chosen when building degree tracks.
---------------
Edit: I forgot to include it but this is largely based on my experiences not necessarily just on my own as a student but as a tutor for countless peers and juniors during my time at university, and as a student academics officer directly responsible for monitoring and supporting the academic success of ~300 students for an organisation I was part of. This largely mirrors discussions I've had with teaching staff and it always seems to boil down to "the administration isn't willing to support this" or some other reason based on misplaced incentives at an administrative and operational level (such as researchers being forced to teach courses and refusing to do anything above or often even just at the bare minimum for the courses they are teaching).
> Tests need to be based on long form questions that test comprehensive knowledge. Open book, Open notes, and hell even open-collaboration up to some limit.
Coursework is already along these lines, no?
> The solution to testing students in a way that they can't cheat is to simply design tests that require students to apply their knowledge as if in the real world.
How would this apply to a course in real analysis, say?
University education generally isn't intended to be vocational.
It is, but exams are not, and if the intent of exams is to test knowledge, they should be in a format that is applicable to the real world and can't easily be cheated. Also, for what it is worth, for essentially all of the courses I took in university, unless they were explicitly project-based classes, exams were the overwhelming majority of the grade in the course (often ~75-90%).
What this meant in practice was that exams that were closed-book, closed-notes often had averages in the 30s or 40s, where everyone got curved upwards at the end of the day, while open-book exams had averages in the 60s-80s, and students who could apply their knowledge passed the exam while students who couldn't didn't. I can't recall a single course with the latter style of exams where I passed without knowing the material or failed while knowing it. With the former, however, I personally experienced both and witnessed numerous other students go through the same.
> How would this apply to a course in real analysis, say?
Sorry if I wasn't clear but when I said "as if in the real world" I was referring specifically to students having access to the same resources they would have in the real world (aka reasonably flexible time constraints and with access to texts, resources, and tools) not necessarily that the questions needed to be structured as "in your field you'd use this like this" kind of questions.
Unit testing is also frequently very artificial and disconnected from production use of a codebase. Nevertheless, there is a great deal of value in checking whether things you wrote actually do have the effects you intended.
> Well, maybe we should start by rolling back this common conception that when it comes to schools, everyone’s opinion matters an equal amount
I have some thoughts about the education system, and despite not being a teacher or academic I like to believe that my opinions have some value because I'm an expert programmer that has worked in the field for over 50 years. I attended three different major universities and have degrees in Math, EE, and CS. I still code almost every day (my Emacs configuration is never finished!), and I have in the past taught or been a teaching assistant for both undergrad and graduate courses for four semesters. Cheating has always been a concern, but now things are different.
The original article highlights the scale of exam cheating during the pandemic, but for us, the readers of HN, there is another problem with university learning that happens because of the internet. I've seen this affect virtually all of my younger friends pursuing degrees in CS. Programming assignments in school are unrealistically difficult, and it causes everyone to cheat.
Here's a typical real-life example: after covering doubly linked lists in the undergrad data-structures class, the programming assignment is to write a GUI-based text editor in Java using doubly linked lists. This isn't especially hard for a professional programmer, but this is the first programming assignment of the course. Students had to wrestle with Eclipse, learn the AWT/Swing interfaces and event-loop programming, and work out how to translate low-level pointer-based data structures into non-idiomatic Java imitations of linked lists that kind of simulated using pointers. Most of the students really couldn't do this on their own, but they didn't have to, because they could find the solutions to this very problem right on the internet.
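To make the pointer-translation part concrete: in Java every "pointer" becomes an object reference, so the list of text lines underlying such an editor ends up looking roughly like this (a minimal sketch of my own, not the actual assignment):

```java
// Minimal sketch of a doubly linked list of text lines, the kind of
// structure a linked-list-based editor sits on top of. Object references
// (prev/next) stand in for C-style pointers.
class LineList {
    static class Node {
        String text;
        Node prev, next;
        Node(String text) { this.text = text; }
    }

    Node head, tail;

    // Append a line at the end of the buffer.
    void append(String text) {
        Node n = new Node(text);
        if (tail == null) { head = tail = n; }
        else { tail.next = n; n.prev = tail; tail = n; }
    }

    // Insert a new line directly after the given node.
    void insertAfter(Node at, String text) {
        Node n = new Node(text);
        n.prev = at;
        n.next = at.next;
        if (at.next != null) at.next.prev = n; else tail = n;
        at.next = n;
    }

    // Unlink a node, relinking its neighbours.
    void remove(Node n) {
        if (n.prev != null) n.prev.next = n.next; else head = n.next;
        if (n.next != null) n.next.prev = n.prev; else tail = n.prev;
    }
}
```

Every `prev`/`next` update has to be mirrored by hand, which is exactly where beginners drop a link and corrupt the list; asking them to wire a Swing editor on top of this as a first assignment is a big ask.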
Why would professors give such a program to beginning programmers to write? Because every student turns in a solution, and this causes the professors to lose touch with how difficult their assignments are. Over and over again difficult assignments are given, but the students are seemingly keeping up. The bomb lab assignment is a great assignment for CS students[1], but I've seen it given out with far too few attempts allowed to solve it. Again, professors feel like a small number of attempts is all the students should need, because the students keep turning in the answers. The reason they can is that the complete solution is available on dozens of public GitHub repos and web sites.
The consequence of such hard and challenging programming assignments is a kind of difficulty inflation. The high difficulty causes students to cheat more, since their fellow students are cheating by downloading, cutting and pasting, or simply sharing their programs. There are commercial websites like chegg.com that sell the solutions to virtually every homework problem found in CS textbooks. Why should an undergrad spend so much time working on their own homework solutions while other students work openly in big teams at tables in the university library?
This kind of cheating is pervasive at the undergrad level. How do we prevent our students from being pushed into cheating to keep up? Graduate school is different; the classes are smaller and more interactive. In my grad school classes I've often had to go to the board to demonstrate my code or proof to the class. Professor Dijkstra used to give individual oral exams to his students. So small interactive classes would help.
I've also seen assembly language programming classes given that require all work to be done on lab computers. The lab computers weren't on the internet and students had to sign in with the lab proctor to use the machines for their assignments. This at least helped some with the problem.
If I was teaching a programming class now, I would require everyone to maintain a git repo that could be checked for realistic commits of the programs as they are written. This might discourage the simple copying of a solution from GitHub the day before the assignment was due.
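One way such a check might look (the heuristic and names here are my own invention, not an existing tool): pull the commit timestamps with `git log --format=%ct` and flag any repo whose entire history fits into one short burst, which is what a solution pasted in the night before the deadline tends to look like.

```java
// Hypothetical heuristic for spotting copied-in solutions: a repo is
// suspicious if the spread between its first and last commit is tiny.
// Timestamps are Unix epoch seconds, one per commit, as produced by
// `git log --format=%ct`.
class CommitSpreadCheck {
    static boolean looksLikeLastMinuteDump(long[] timestamps, long minSpreadSeconds) {
        if (timestamps.length < 2) return true; // a single commit is one big paste
        long min = timestamps[0], max = timestamps[0];
        for (long t : timestamps) {
            if (t < min) min = t;
            if (t > max) max = t;
        }
        // Flag the repo when all work apparently happened in one sitting.
        return (max - min) < minSpreadSeconds;
    }
}
```

It's easy to evade once students know it exists (they can fake incremental commits), but it raises the cost of simply copying a finished solution from GitHub the day before the due date.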
[1] The text for the bomb-lab assignment (highly recommended by the way):
Computer Systems: A Programmer's Perspective, 3rd edition, Bryant and O'Hallaron, Prentice-Hall, 2016 (ISBN: 0-13-409266-X). A Google search will return many bomb-lab assignments and solutions from colleges all over the world.
> Well, maybe we should start by rolling back this common conception that when it comes to schools, everyone’s opinion matters an equal amount, and then listen to the teachers and academics.
Finland topped the PISA rankings for many years, because we 1) listen to teachers and have good academic pedagogical research, and 2) our teachers are highly educated and reasonably well paid, meaning that the job is attractive to competent people.
Then politicians started to think big and read all the hype papers from think tanks about digitization and how the young are digital natives. Let's give them computers and they learn by themselves and ... we started slipping. Still OK, but slipping. It turns out that computers are not magic. Having all the information accessible is not a pedagogical solution.
ps. The Chinese studied the Finnish school system and imported some of its best policies in Shanghai, and it worked. Some lessons transfer across widely different cultures.
As a Norwegian who lived in Finland for a year, it struck me that the parents I met actually CARED what the kids learn in school, instead of just treating school as daycare for older kids.
This, combined with the possibility for good teachers to gain respect in their communities, is what makes Finnish schools a more effective learning environment than Norwegian schools, I think. Not salaries, some specific methodology, etc.
> do not have a good understanding of how exactly this system should be fixed, and that it’s not broken for fun but because there are some very difficult unresolved issues.
Because it conflates two things with conflicting incentives. Decoupling them could and should resolve things nicely.
1. Spreading knowledge
2. Certifying competence
To get e.g. an RHCE, you may or may not attend the course. You may get the materials elsewhere and study from them, you might get tutoring from someone else who attended the course, or you may have enough experience from your day job. This is knowledge acquisition.
Then you attend the certification exam and either succeed or fail.
If you fail, you go back to knowledge acquisition. Decide to pay for the course this time. Get tutoring. Read the materials again. Maybe retry right away because you were just stressed and disoriented. Then you succeed.
Compare this with college. Fail a couple of examinations? Too bad, you are booted. Want to try again? Repeat up to two years of study! This is absolutely insane! No surprise people are cheating their way through!
Decouple knowledge acquisition from competence certification. Managed to reach end of the math track but failed physics? No problem! Certify math competence and let them study physics some more! Got enough certifications to warrant a title? Cool, give them the title!
Make it possible for people to step away for a couple of years and then come back to earn some more certifications and even the title when they actually need and want to learn those skills.
Make it possible to study a third of your time for 15 years. Maybe people would stay in learning mode longer, unlike many doctors who are hopelessly behind the times. Make it possible to study while having kids or sick parents to take care of. Make it a part of adult culture.
Not something people had to suffer through in their youth to earn their place in the world.
This is it. To expand on further on why it's so crazy to couple education and credentialing - already know all the material in a class? Too bad, you have to pay for it and spend time taking it anyway. Is it a class that's completely unconnected to your field? Too bad, the university is making you take it, so you take it. Is the class taught poorly, so that you need to teach yourself outside of it? Too bad, you still have to pay for it and put time into it, in addition to actually teaching yourself the subject.
The education is the major chunk of time and cost, but the credentialing is what most people are trying to get. By forcing people to buy them together, you can make people pay a lot (in terms of both time and money) for an education they find little worth in just because it's the only way to get the credentials.
So true. And another benefit would be that domain experts giving a course could focus on teaching and sharing their knowledge instead of being forced to deal with all the organisational fluff around final grading and "catching cheaters", which is a giant waste of their time. (I see usefulness in grading only as a feedback mechanism for students, not as "certification" of a student's knowledge for the outside world. I also believe it would be healthier for both students and teachers if grades were just a guidance tool, not something that affects your future prospects in life.)
At the end of the day, final grades from school or college depend on so many factors that the signal is close to noise anyway, but in college they often feel somehow more important than the actual learning, and so much time and stress is spent on them.
In a better world I imagine it would be the organisations that need specific knowledge co-sponsoring "exam centers", separate from colleges, where you could go and get a certificate saying how well you know a given subject. Private companies that want to hire the best people actually have a good incentive to make these exams as fair and useful as possible.
To make an analogy with GANs in deep learning: the college would act as the generative part and the "exam center" as the discriminative part. It seems to work pretty well in ML; maybe it would work in education too? :D
I've always thought engineering licensure struck a reasonable balance.
Everyone has to pass the two certifying exams for their discipline, but there are multiple paths for assuming somebody has acquired the knowledge for the exam, ranging from years of industry work to passing standardized tests to having a college engineering degree.
It seems to me that a lot of problems in the real world can be traced back to unnecessary dependencies (in this case, having to attend college in order to get certified).
>These are very hard questions, and it’s frustrating to read the phrase “we need to fix the system” because yes, obviously we do, but agreeing that things are bad isn’t the hard part, and probably input from people who have never worked in the field is of pretty limited value in how to resolve the hard part, and will not do much more than annoy teachers even more.
I don't agree that the system is broken (broken to me is something that is completely unusable and that we must stop using immediately). The progress we've made as a global civilization is to be credited to the way human knowledge is captured, distributed and taught by us as a species. And certainly formal schooling is a big part of that. So I'd rather view the situation as us being on a path of continuous improvement, where everything, including education, can be improved.
My opinion is that the educational system in most industrialized countries today rewards the wrong things and that the quality of education suffers to allow an easier time of mass-grading and classifying.
Whether this means it’s “broken” or not is of course completely subjective because it depends on what you think the educational system should be doing in the first place.
>completely subjective because it depends on what you think the educational system should be doing in the first place.
Yup, you nailed it. That is the crux of the argument. I think it also leads into the meta discussion of what it means to be a "productive member of society" and how education fits into that philosophy. Why should one be forced by society at-large to be educated, or productive, or anything at all? :)
> Well, maybe we should start by rolling back this common conception that when it comes to schools, everyone’s opinion matters an equal amount, and then listen to the teachers and academics.
No, we should listen to the people when deciding what the purpose of school should be, THEN refer to the experts on those purposes. Is it teaching random factoids? Making people "cultured"? Separating out people who follow instructions and learn well from those who don't? Introducing habits useful in a workplace? Good habits of thought? Teaching the knowledge required to vote sensibly? To provide some foundational knowledge for later vocational training? To navigate and function in the modern world? Is it just day care for kids?
First decide on the purpose(s) (and their weightings if many) and only then can we have a plan. I think there hasn't been anywhere near enough thought given by most people about what the purpose(s) of schooling is(are).
> So what’s the solution then? Well, maybe we should start by rolling back this common conception that when it comes to schools, everyone’s opinion matters an equal amount, and then listen to the teachers and academics.
Well, yes, but at some point we have to look at the system, see human beings spend over a decade of very precious years in it, and note that they're not really getting a decade's worth of benefit.
If we just want to incrementally improve things then definitely we should let specialists have the most weight. But listening to educators will absolutely never lead to major reforms or (god forbid!) reducing the years spent in the system.
Students are just as much a part of the system as teachers, so I don't think this elitism about who can have an opinion is helpful.
I think there are a few constructive things that can be done. One is allowing curious students to design their own academic career (with guidance and supervision). I think students usually cheat because they think the course work is irrelevant to their future lives. Sure people need to be exposed to new things, but a semester on something you know you will never care about is torture. I have a computer science degree, but I remember being forced to take geology. To this day I can't think of a bigger waste of time, I remember nothing from it, and even if I did I would never use it.
Vocational schools and apprenticeships should also really come back. I know parents want their kids to be part of the affluent elite, but in a good society being a car mechanic should be a good life. There's no point saddling people with student debt if their degree gets them a job at Starbucks.
I also think that things like essays are a lot better than quizzes. Sans plagiarism, it's hard to fake knowledge if you have to write it out.
> Students are just as much a part of the system as teachers, so I don't think this elitism about who can have an opinion is helpful.
Of course everyone can have an opinion! But are these truly likely to contribute to solving the problem? Of course not. Some people are more likely to have the experience and skills to comprehend and advocate for better solutions.
Yes, I tend to support democratic forms of government over others. However, I'm under no illusion that democracy's broad, sweeping claims about which form of government is "best" are really defensible when applied to the general problem of collective problem solving under real-world constraints.
Having one person, one vote seems intuitive and valuable for certain decisions. In particular, it seems useful and practical for selecting certain representatives. But I (and many others) don't think it is a great way of making policy decisions in general. Just as one example: committees of experts can make sense in some contexts.
But in general, we can do better than what most of us have seen so far. We have to do better than that. Look at how well government(s) at all levels are serving their constituents. I think it is self-evident that all can stand tremendous improvement.
So, for any particular context, think about how to design mechanisms that are likely to work well. In so doing, one must account for many factors, including: human biases, cognitive limitations, cultural differences, imperfect communication, economic costs, time constraints, factions, self-interest, lack of experience, and so on.
Keeping these in mind, how exactly would you select, organize, and structure an ongoing set of interactions between, say 1,000 people such that one can maximize the quality of their resulting collective recommendations?
One option is to choose 1,000 people at random and weight their opinions evenly. But this is underspecified. How do you compress those recommendations into a form that others are likely to read? How do you discover collective preferences? There are dozens of key questions even if you generally adhere to the idea of "equally weighting each person's opinion".
But there are manifold other options where each individual's starting opinion is not the driving factor.
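For the "discover collective preferences" step, one concrete (and deliberately simplistic) option among many is a positional rule like Borda count; a minimal sketch, with the ballot options entirely made up for illustration:

```python
from collections import defaultdict

def borda(ballots):
    """Aggregate ranked ballots: each ballot awards n-1 points to its
    first choice, n-2 to the second, and so on down to 0."""
    scores = defaultdict(int)
    for ranking in ballots:
        n = len(ranking)
        for position, option in enumerate(ranking):
            scores[option] += n - 1 - position
    # Highest total score first.
    return sorted(scores.items(), key=lambda kv: -kv[1])

ballots = [
    ["vocational", "testing", "daycare"],
    ["testing", "vocational", "daycare"],
    ["vocational", "daycare", "testing"],
]
print(borda(ballots))  # "vocational" comes out on top with these ballots
```

Even this tiny example shows why the design space is large: a different but equally defensible rule (plurality, instant runoff, approval) can rank the same ballots differently, which is part of why "just weight everyone equally" is underspecified.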
I encourage everyone here to study political economy, history, philosophy, and anthropology. Disregard your preconceived framings of how people make decisions. Look at how others have done it. Look at what theorists suggest might be alternatives. It is an amazing journey. I've been thinking about it for almost twenty years, and it is just as fascinating, if not more, as when I first got exposed to these ideas in policy school.
Well, in the private world it's customer feedback. Sure, you don't necessarily use their ideas, but if, as in this case, 75% of your customers find your product bad, then expecting people already part of the system to make radical changes sounds foolish. They'll just list reasons why it can't change. I admire this teacher's cleverness in getting his students to pass, but the reality is that the problem was so big that the system can only be vastly broken, and I don't expect people too involved in it to fix it.
I can't see how this addresses the main problem mentioned in the article. My take is that if a student cheats, they should be expelled from university. If you're very lenient, introduce three strikes. The first two strikes will nullify your course as if it hadn't been taken, the third will get you expelled from university. I personally think that would be too lenient, though, and believe that nobody who cheats in any way should have a place in academia. This question has nothing to do with the quality of teaching or problems with tests, etc. It's a matter of intellectual integrity.
When I first heard, not too long ago, that students often cheat in higher education, I was shocked. When I studied during the 90s at two good German universities, I never heard of anyone who cheated in any course. A cheating student would have been a huge scandal. To be fair, I studied philosophy and general linguistics. I guess people in more practical disciplines cheated even then; e.g. economics -- more specifically, "BWL" in Germany -- always had a bad reputation. However, even in these disciplines cheating was rare. It's incomprehensible to me why lecturers and universities nowadays appear to be so lenient about it.
The AP classes in American high school, which include a test that can provide college credit if passed, were great in my opinion. Mostly because I felt the tests were really good. I took 11 of these tests and learned a ton that has been relevant and stuck with me ever since. In particular statistics, comp sci, and Spanish seemed really good.
Spanish was a hard test. It involved listening to prerecorded conversations and giving responses.
Comp sci I didn't take the class for, just self-studied for the test. It was my first exposure to comp sci, and only an intro to object-oriented code. The test made you utilize an API for a little toy problem. That was very good in retrospect: I didn't really grok APIs until that exact moment on the test. 12 years later, fiddling around with game engines, object-oriented concepts still seem familiar.
I think the two things that made these exams good are that they were very broad, so you needed to have mastered the whole course, and that they were not designed by a teacher incentivized to give good grades, so they were pretty hard and didn't advertise exactly what would be tested.
Not needing 90%+ to do very well on the test was good too. So much of school is avoiding tiny mistakes on otherwise easy content to get a perfect score, rather than broadly mastering the concepts.
Some neighbor schools offered AP classes but it was culturally accepted that students would not get high scores on the exams. Struck me as pretty pathetic. That was a rich kid private school doing worse than my (admittedly fairly wealthy) public school experience.
> do not have a good understanding of how exactly this system should be fixed, and that it’s not broken for fun but because there are some very difficult unresolved issues
I think the reason for the feeling of competence that prompts so many people to share their opinions on the matter is that nearly all of us went through this broken system at some point in our lives, and our future lives literally depended not so much on what was taught as on what was written down as the result of the teaching.
> So what’s the solution then? Well, maybe we should start by rolling back this common conception that when it comes to schools, everyone’s opinion matters an equal amount, and then listen to the teachers and academics.
Yes, because they've proved they know things by passing through the system and getting good grades. Oh wait...
Sorry for the joke, but seriously, you can't expect us civilians to shut up. Leaving education to educators is as pernicious as leaving law just to lawyers, or journalism just to journalists. In all cases, the outcomes are everyone's business, and because there are real conflicts of interest here (and not just disagreement on facts) it can't just be delegated to experts. Even calling for it will make people rightly suspicious of your agenda.
Unlike with law or journalism though, pretty much everyone has A LOT of experience with the educational system in practice, by being on the receiving end of it for 12+ years. There's a challenge with sharing our experiences in a fruitful way and not just shouting over each other, sure, but suck it up: we have opinions about what education should be and can be, and we won't shut up and leave it to you.
FWIW the teachers lack perspective. I've dated several teachers and listened to their side about why they teach the way they do. I would propose simple solutions, like a continuous improvement cycle, and educational experiments conducted at random by regular teachers, then reproduced and cross referenced to build new models. They had never considered these ideas before.
When you live your life in a rigorously controlled institution, you only consider what the institution echoes. Outside-the-box thinking is possible, but it's the exception. You need outsider ideas and collaboration.
Politics will never solve these problems. It has to be grassroots and volunteer driven.
> I would propose simple solutions, like a continuous improvement cycle, and educational experiments conducted at random by regular teachers, then reproduced and cross referenced to build new models. They had never considered these ideas before.
I've never met a teacher who didn't do those things; I have met many who wouldn't phrase it like that. Just because they're not using the same terminology as you doesn't mean it isn't happening.
It's very easy to look at a system from the outside and think that they're missing the obvious; things become more complex the more you understand them.
And then us engineers come in and fix education just like we did taxis (regulatory arbitrage, offloading costs to the ordinary workers in surprising ways they aren’t aware of, increasing traffic congestion throughout cities, but hooking people onto the rides with unsustainable loss making introductory prices long enough that alternatives such as regular cabs and public transport become worse).
Or the way we fixed productivity in ways that have led to no measurable increase in productivity, despite nearly everyone having the most powerful device ever invented in the palm of their hands.
Or the way we fixed housing through regulatory arbitrage, once again, converting housing for residents into short term rentals for vacationers, making housing for residents more expensive globally and making their communities worse.
Or the way we fixed cable by going from bundled cable packages where we have to pay $70-$100 to get all our channels, to unbundled walled gardens where we have to pay $70-$100 to get a fraction of content plus we also have to pay internet fees in addition.
Or the way we fixed messaging and phone calls by taking something like a $1/yr WhatsApp membership that offered safe encrypted chat and converting it into a data harvesting machine.
Or the way we fixed stock investing by gamifying it, bringing a lot of people into active trading who have no business being in active trading and should just park their money in Fidelity, and then promising "free" trades by letting big banks trade against them, leading to a massive wealth transfer from naive individuals to sophisticated banks at the best of times.
The teachers I've known don't really care about measuring knowledge. They're looking for a reasonable way to motivate engagement with the class that's not too disruptive of the overall flow of the course. One professor told me, "A student who has made an effort to work through the homework problems a couple times should be able to easily get a B on the exam."
Testing also acknowledges that you're competing for your students' attention, and if you give no assessments, your students will rationally focus all of their effort on the courses that do. Preparation for the test becomes a reasonable measure, not of your knowledge, but of how much effort you need to apply to a course. Since students have been taking exams for years, each student knows how to calibrate their own level of effort.
As a student, after some trial and error, I developed a pretty good routine for getting A's in the two kinds of classes I was taking: Those that were dominated by solving problems and proofs, such as math and physics, and those that were based primarily on written assignments, such as art history.
I can tell you that almost every teacher who is not burnt out does care about how we measure knowledge, mainly because they have to. The big difficulty is that teachers play two roles: on one hand you are a mentor, supposed to impart knowledge to your students (the teaching part); on the other hand you are a gatekeeper, supposed to check that the thresholds for some qualification are met. If we had an ideal way to measure knowledge, those two roles would not really be in conflict with each other; because we don't, teachers have the difficult job of trying to teach a subject and at the same time find a good way to see whether the students actually learnt what they were supposed to. All that within the limited time available.
What do they do with the information? The threshold in most courses is to be able to pass the next course. The students who won't do that, tend to drop out, or switch to an easier major, of which there are many.
Teachers do tend to change their content and methods if a large number of students are failing exams, but I think it's based more on a hunch, than on hoping that test scores will yield analytical quality data. This is the sense I get from talking to a lot of teachers. My only teaching experience was one semester at a big ten university, a long time ago.
You make it more important to be eventually right than initially right.
Allow tests to be continuously regraded as the things students get wrong are corrected.
Automation would go a long way towards making that more feasible (i.e. easier for a multiple choice test than a written one).
But the emphasis on being right initially as the only thing that matters is unhealthy, and certainly in part what leads to the majority of people doubling down on confirmation bias rather than admitting being wrong and learning/incorporating the knowledge for the future.
Yes, there are practical issues with improving the system. But I've had a few select teachers who had that policy in some form years ago, and it was often the best teachers who did. We'd benefit from widespread adoption of something similar, and it might lower the incentive to cheat in order to be right the first time, since to the kids being brought up in these systems, and reflecting them, that's the only thing that matters.
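The "continuously regraded" idea above is mechanically simple for multiple-choice tests, as long as answer sheets are kept rather than just scores. A minimal sketch, with a hypothetical data model (the `Exam` class, question ids, and answers are all invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Exam:
    key: dict                                    # question id -> set of accepted answers
    sheets: dict = field(default_factory=dict)   # student -> {question id: answer}

    def submit(self, student, answers):
        # Keep the whole sheet, not just the score, so regrading stays possible.
        self.sheets[student] = answers

    def accept_answer(self, qid, answer):
        # A question is successfully contested: widen the accepted set.
        self.key[qid].add(answer)

    def grade(self, student):
        # Always grade against the *current* key.
        answers = self.sheets[student]
        return sum(answers.get(q) in ok for q, ok in self.key.items()) / len(self.key)

exam = Exam(key={1: {"a"}, 2: {"c"}, 3: {"b"}})
exam.submit("alice", {1: "a", 2: "d", 3: "b"})
print(exam.grade("alice"))   # 2 of 3 under the original key
exam.accept_answer(2, "d")   # "d" on question 2 is later ruled correct
print(exam.grade("alice"))   # regraded automatically: full marks
```

The design choice is that correcting the key is a single cheap operation and every past score updates for free, which is exactly why this is far easier for multiple choice than for written answers.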
This is not the issue, this is the root cause of the issue.
You DON'T measure knowledge.
You should measure the satisfaction of the students.
Because the most valuable asset a developed country needs to protect is the will of the members of their society to keep improving and learning.
> maybe we should start by rolling back this common conception that when it comes to schools, everyone’s opinion matters an equal amount, and then listen to the teachers and academics.
Pity that academics and teachers often disagree, and, most of all, that schools are public and paid for by people's taxes in many developed countries in the world, so people have a right to a say.
Teachers are not doctors. Doctors practice medicine; teachers do not operate in such a stressful environment. They "educate" young people, and it is often the case that this means they impose or suggest their opinions (because they can, nothing prevents them), and families see that kind of "education" as unfit for their kids.
And they have every right in the world to be listened to, even if they are technically wrong or I disagree with them (I completely disagree with Catholic schools, for example).
The experts are there to find a solution to their problems, not to build hypothetical perfect solutions in a void.
Also: teachers are there because students are forced to go to school, so they serve, they do not lead. In my country (and practically every other country in Europe) they are like bus drivers: fulfilling an obligation required by state law, under the state government, while also providing a service the people have paid the State for.
Maybe instead of listening to "our" teachers and academics, we should look at places where the system is proven to work and copy it: see Finland.
CONTROVERSIAL
On a last note, there's a topic I believe is the most important, and it will quite certainly cause an uproar.
If your youngest students die in school, shot by someone just a bit older than them, the society you live in has failed in every possible way.
The fact that the system is broken is a joke compared to that.
> Maybe instead of listening to "our" teachers and academics, we should look at places where the system is proven to work and copy it: see Finland.
I did an education degree, and come from a family of educators. Every educator and academic I've talked to (I can't remember an exception) wanted our system to be more like Finland's. The people pushing back against changes in that direction were not teachers, but politicians, parents and high-up administrators.
>Teachers are not doctors
Indeed. And you wouldn't tell a doctor how to do their job, even if you had spent years as a patient. People in the education system have opinions that are informed by years of experience in the field and decades of research. With respect, I'm thinking you are an example of the type of person described by the comment you're replying to: not much experience inside the system but confident in your opinion of how to fix it.
> schools are public and payed by people's taxes in many developed countries in the World, so people have a right to say.
My country, and yours too I think, pays for health care with taxes along with education. Again, does that mean you and I get to tell a doctor how to do their job?
> teachers are there because students are forced to go to school, so they serve, they do not lead
Teachers existed long before mandatory attendance laws. Also, what point are you trying to make with this statement? That because they are necessary by law, their professional opinion is negligible?
> doctors ask patient how do they feel all the time.
And teachers _constantly_ monitor how their students are doing - "feedback from their customers" if you want to put it that way. Talking with them during or after class to see how things are going, assessments on homework, projects and tests, parent-teacher interviews, individual learning plans, collaborations between teachers... I would venture to say "making sure a student is doing well" takes up most of the time of the job.
A patient complaining to their doctor about some treatment not working is not telling the doctor how to do their job. Your first comment made a claim about how student assessments should be done. This is something at the heart of pedagogy and has been studied and experimented on. The analogy to health care would be like if you declared the ways in which doctors should screen for cancer. Nobody without medical training would ever think to make such a claim, but many people seem quite confident in making similar claims about how the education system should work, as you did in your first comment.
I'm not even making an argument about whether you are right or wrong. There are many ways in which assessments can change for the better (educators would be the first to agree with you there). But to then go on and say "you shouldn't measure knowledge, only student satisfaction" without really showing an understanding of how knowledge or satisfaction are currently - or potentially could be - assessed... are you up to date with recent literature on these concepts? Do you have experience performing these kinds of assessments?
I'm still not sure what point you are making with your second section in this comment so I won't try to respond.
> And teachers _constantly_ monitor how their students are doing
They actually don't, not constantly, nor as a way of improving their teaching.
They monitor the students' output, but rarely listen to what the students have to say.
In the end it's not their job; their job is to teach what they are told to teach, and they rarely go out on a limb for their students.
Because their salary does not depend on it.
As a personal story: I always had a conflict with my Italian teacher in high school. I've always been an A student, even after high school, but she hated my temper, so she always graded me a C (I believe that means "sufficient" in some parts of the world; for us it's a 6 on a scale from 3 to 10). On our latest test I submitted the assignment of her favorite student and she submitted mine: she was graded 9 and I was graded 6, again!
I won't tell you what color her face was when we told her the truth.
You go through all of this no matter what; there's always gonna be some bad teacher and you can't do anything about it.
And since the mentality is "don't tell a professional how to do their job" it's always the student's fault.
That's why I think students should be asked whether they are satisfied with the people teaching them, not about their grades or how much fun they are having.
> I would venture to say "making sure a student is doing well" takes up most of the time of the job.
I'm glad it was like that for you.
It isn't so common in places I know.
In my country, high school teachers spend 12 hours a week, 15 at most, in school by contract, and that is when students need them the most.
Let's start by making them work 30 hours a week; it's one of the few jobs left where presence is fundamental. Yet we still treat teachers like poor souls who have to grade a bunch of four-page written tests as if computers had not been invented yet. It usually takes them weeks.
> A patient complaining to their doctor about some treatment not working is not telling the doctor how to do their job
I think I have not been clear: they are not complaining, they are being asked questions, and depending on their answers the doctor can (and should) understand whether the treatment is working as intended and not causing too many contraindications.
So the analogy in education should be something like "What's your favorite Renaissance author, and why?", not "What's the date that changed Machiavelli's life forever?" (a real question from a real questionnaire).
There is no talking to them; nobody grades them for profoundly liking horror movies and writing beautiful essays about them, because it's not "part of the teaching program".
> The analogy to health care would be like if you declared the ways in which doctors should screen for cancer. Nobody without medical training would ever think to make such a claim, but many people seem quite confident in making similar claims about how the education system should work, as you did in your first comment.
I haven't said anything of the sort.
I am simply saying that if half of the class is getting bad grades in maths you should blame the teacher, not the students.
But bad teachers are allowed to keep teaching anyway, because they are not held responsible for their bad teaching.
At least in my country they can't be fired even if they are literally doing nothing.
> without really showing an understanding of how knowledge or satisfaction are currently - or potentially could be - assessed... are you up to date with recent literature on these concepts? Do you have experience performing these kinds of assessments?
I have a few ideas.
For example, monitor which subjects show the worst grades or the highest rates of absence from school on the day of a test.
These are all basic symptoms of fear and anxiety.
It doesn't take a Nobel prize to understand basic human emotions.
Let's try to understand why: the subject could be really hard, or the students really stupid, or it could be the teacher. Either way, being stressed by school is not something that motivates students.
You could simply ask them to grade their teachers anonymously a couple of times a year.
Internet forums are full of cries for help from students who don't understand why they are asked such silly questions and what the point is.
We could monitor those forums, for example...
Unsurprisingly when these kinds of discussions come up, unions complain and go on strike.
And I am all in favor of unions, I have been a union delegate in companies I've worked for, but school unions, at least here, act more like guilds.
I've talked to some of them. Of course what I'll say is anecdotal and I don't pretend to know all of them, but when asked why they don't prefer fewer teachers paid better over a lot of teachers paid badly who do nothing important for __education__, they told me flatly that they prefer two jobs at current wages to one job paid double. They can spin it as a victory. They also told me that if newer teachers are paid better, old-timers will complain and start asking for the same pay (it's practically impossible here to pay two people different wages for the same job, especially in the public sector), and that better salaries would encourage more prepared teachers to start teaching, which would make the rest of them look bad.
That's the state of our education system. I hope it's different in other countries, but according to my friends living all over Europe it's much the same everywhere, especially during the COVID crisis, when families were left to solve problems schools would not, because they couldn't get teachers to get vaccinated or to go to school.
Except, of course, for a few exceptions, that I already mentioned.
But, back on topic, if people studying the subject have no idea, well, that's a problem, don't you agree?
If we wanna keep grading people and "judge a fish by its ability to climb a tree", I think pedagogy is not doing a great service to future generations.
Let's not forget that teachers quote pedagogists when it favors them, but when it goes against their interest they criticize them, saying that "they are talking from their ivory towers; they don't know what it's like being a 'street' teacher".
Not all is lost or grim: teachers still fight against school commoditization, they still fight against schools becoming furnaces that churn out young workers/consumers, but there's still a lot of conservatism disguised as idealism.
> Do you have experience performing these kinds of assessments?
As a matter of fact I do.
I wanted to be a teacher, but was discouraged by how limited the space for new ideas was.
In my family, which as I've said is very big, there are teachers.
All of them keep doing it because it's a safe job and the salary is guaranteed; none of them is satisfied with the work they are doing, and they would gladly do something else if they had the opportunity.
They all feel like they are doing nothing substantial to help the students, and that the students know it, but going against the status quo would cost them too much. They tried, they got burnt, they gave up.
So to get rid of the guilt they give everyone good grades; at least they are not unpopular.
That is a lot to sort through and I'll try to pick out the points that are relevant to the discussion we started.
> I haven't said anything of the sorts. I am simply saying that if half of the class is getting bad grades in maths you should blame the teacher, not the students.
Yes, this is exactly what you said:
> "This is not the issue, this is the root cause of the issue. You DON'T measure knowledge. You should measure the satisfaction of the students."
You quite explicitly made a claim about how teachers should assess students. Then I suggested that maybe you should take a step back and question whether you are qualified to make such claims. Now, it seems like you've doubled down, and written a diatribe which superficially touches on a half dozen issues in education. I'm simply pointing out this irony: that the commenter you first replied to was lamenting how so many people outside the field of education feel qualified to make claims about pedagogy. Even if their expertise is limited to, for example:
> I wanted to be a teacher... In my family... there are teachers
Personally I always loved STEM topics, and would go out of my way to learn about them. This ended poorly for me in school: I ended up incredibly bored in the STEM classes, which were filled with content I already knew, while the other topics I didn't love, and largely did not enjoy. So in the end my satisfaction was miserable, and I dropped out in 7th grade.
Eventually I got a GED and went to college for CS, but it was that time in-between those two that even allowed that to happen. I needed time to explore the world, find what I wanted to know, and figure out how school can help me get there.
As someone on the other end of the hiring table now, I don't even care about knowledge. Knowledge tells me how far you've gotten, and I don't care how far you've gotten; I want to know how quickly you pick up the material relevant to the job I'm hiring for. I care about acceleration. While the two can be correlated, the correlation isn't precise. There's not a single hiring test I can run to figure out someone's acceleration. What I do know is that testing who has gotten the farthest on some topic, like leetcode does, is going to fail every single jack-of-all-trades programmer.
> Personally I always loved STEM topics, and would go out of my way to learn about them. This ended poorly for me in school, as I ended up being incredibly bored in the STEM classes, as they were filled with content I already knew.
Thanks for posting this.
This was my experience as well, with the added malus that when I went to school, people were still saying things like "what do I need maths for?" or "a computer will never write the next Dante", so not only was it frustrating, it was borderline painful and lonely.
Then I discovered kids. I don't have kids of my own, but I have a very big family and I am grateful to be surrounded by people younger than me, of every age from 3 to 20.
I saw them being entertained by the most boring stuff just because it was new to them, and building up from there, at an incredible pace, becoming young experts, with all the limits of being inexperienced and also being kids, in a very short time.
I realized that what kept them motivated was a feedback loop that needed no external validation: knowing more about that thing made them happily satisfied, so they kept doing it. They don't care about understanding things the wrong way; eventually they'll get it right. They don't care about making mistakes; eventually they'll learn to make new mistakes. They just wanna learn more and experience more.
What you call "acceleration".
I saw most of them struggle in school because they were bored. They were getting good grades, most of them at least, and they were keen to put in the work necessary to get them, but their motivation started lacking, until they arrived at university and chose something that could (potentially) assure a good job or would make their parents happy.
It's a sad state of things, if I think about it, but it's also a "great filter", and we should strive to make education something that adapts to the people receiving it (I'm not talking about schools for the gifted or smth like that) and not the other way around.
When I was in my 30s a friend of mine married a woman from Finland who was living in Sweden, and they moved back there when they had kids. I've visited them on many occasions, and when I saw how they approach school there I was astonished.
They are not tracked, they are not tested, there is no standardized grade scale, there is virtually no homework, they do not compete. They learn by playing and are simply taught that you have to get the basics right to go on, and then they are helped to follow their own paths.
I think that, in general, it makes happier adults.
> You should measure the satisfaction of the students.
> Because the most valuable asset a developed country needs to protect is the will of the members of their society to keep improving and learning.
But if they aren't actually improving and learning, their satisfaction and desire to continue with what they were getting isn't desire to keep improving and learning.
Self-improvement theater is as much a thing as security theater, and it's something we probably want to be able to distinguish from actual education.
> But if they aren't actually improving and learning, their satisfaction and desire to continue with what they were getting isn't desire to keep improving and learning.
good for them.
In which way is this an obstacle for those who want to?
>This is not the issue, this is the root cause of the issue.
>You DON'T measure knowledge.
>You should measure the satisfaction of the students.
>Because the most valuable asset a developed country needs to protect is the will of the members of their society to keep improving and learning.
What is satisfaction going to get you? As a student, I would have been very satisfied to have great marks while enjoying each night of the week, unfortunately I had to work and skip parties.
"You should measure the satisfaction of the students"
OK. Then how do you measure competency? Right now, a medical diploma indicates that the person took all the requisites and passed all the tests to be a practicing physician. If you only measure student satisfaction, how do you know which medical student is ready to treat real patients and which isn't?
> Right now, a medical diploma indicates that the person took all the requisites and passed all the tests to be a practicing physician.
exactly! because it is required by regulations.
> If you only measure student satisfaction, how do you know which medical student is ready to treat real patients and which isn't?
there is a high chance that an unsatisfied medical student is gonna be an equally unsatisfied doctor, even if they check all the boxes.
let's be clear: satisfaction is not a measure of how much they are having fun.
just like at the gym: you're not more satisfied if they give you free candy and hot dogs and couches with Netflix, but you end up fatter and less fit than before.
> ...input from people who have never worked in the field is of pretty limited value in how to resolve the hard part, and will not do much more than annoy teachers even more.
If people within the education system are getting upset that the people who are supposed to benefit from the education and who are paying an enormous sum of money in order to obtain the education dare to have an opinion about the education, I'd say that's a pretty good indication of the problems with the system. I can't think of any other area where there's anger at customers voicing their opinion. Institutions with that kind of attitude probably wouldn't last long if the education system was opened up and students were actually given some choice (say, by separating education and credentialing).
That last paragraph shows the real issue: that schooling is government controlled and provided.
The only people that ought to be involved are the teachers (and other school employees) and the students (and their parents) of that school.
The fact that it might take '5 election cycles' to see a reform through is a disservice to the students, and often frustrating for the teachers as well.
If government does it, that literally opens the door for everyone else to be involved, muddying otherwise clear waters.
And why is it necessary for education to be ‘free’ or universal for that matter?
People are different, doesn’t mean we all need to learn the same exact things in the same exact way to be productive members of society. Not that such a goal is ever realistically achievable.
I would argue that the presence of a market structure would encourage schools to compete and thus drive educational advances that would eventually be used in all schools. In this way even parents who just choose the closest school without looking at the schools testing history or teaching approach are more likely to have better outcomes when competition is stronger.
1. How does the "government mandated curriculum" get enforced?
2. What are the barriers to entry and the fungibility of the educational market? If the educational market isn't truly a free market, then what's the point? More private monopolies and oligopolies without proper oversight?
1. Through standardized testing, to ensure the students are actually learning the mandated curriculum.
2. Barriers to entry should be low. Maybe teachers must simply be able to pass the standardized tests themselves? I'm not sure what you mean by fungibility here. And I think this system would reduce monopolies in education, since schools could choose whatever methods parents preferred, which encourages different approaches. Also, the monopolies of the current system, government education departments and religious institutions, would be financially penalized if they underperformed and parents chose alternatives.
The real question is even higher.
Why should we measure knowledge?
If it's learning for the joy of learning you don't need a test.
If it's to get a piece of paper you need for a job, then schools are just shitty interviews mostly uncorrelated to real world tasks.
I think we should move to learning for the sake of learning (free, open-door lessons, or pick your own on the internet in your own time; no frontal lectures, but still a space for students to socialise) and give students the chance to work on projects that can prove they know something. Workplaces can look at these projects and find someone who fits with them.
You built a robot? I can reasonably expect you to know something about electrical engineering and math.
why do we need to "measure knowledge"? school should be to teach knowledge. Measuring is not our problem. It is the problem of the employers. Test the teachers, not the students. We only need to make sure the teachers are of good quality, not the students.
Because knowledge is currency. It opens doors to privilege and status in society. It also ensures incompetent people are not put into positions where they can do harm.
You mean measured knowledge acquired in a very specific way is currency. Someone who acquired the same knowledge on his own will be cast aside until he gets his certificate.
That's because the certificate is the value, not the knowledge itself. The knowledge is assumed. Without a certificate, the onus of verifying the required knowledge is now on the consumer or employer, and unsurprisingly neither of them want that, so of course knowledge combined with a respected certificate is worth more than just the knowledge itself.
That's not possible because of regulation. For occupations that don't involve life-or-death situations, or security in general, it's reasonable to assume that if someone has a skill, they should be able to use it professionally regardless of how they acquired it.
> if someone has a skill, they should be able to use it professionally
Sure, but why would you hire an unlicensed electrician, or surgeon, or car mechanic, or builder, or elevator mechanic, or really anything that matters?
The only areas where this point becomes moot are in areas where certifications already are not an issue, i.e. in jobs that almost anyone can do.
>Sure, but why would you hire an unlicensed electrician, or surgeon, or car mechanic, or builder, or elevator mechanic, or really anything that matters?
You don't need a licensing system, you just need a reputation system. Like how bonds have a rating system; nobody's stopping you buying a junk bond, but the system makes it clear to you that it's a got a high probability of default.
And it's not even universally true: if you know American English, it is perfectly useless in rural China or Japan or Central Africa.
EDIT: I'll add another example that won't upset the American audience.
Numbers in French.
We are used to the decimal system, but it won't work in France: they count numbers using the vigesimal system.
So 84, 80 + 4, is quatre-vingt-quatre: 4 x 20 + 4.
My way of counting numbers, which is a basic requirement for kids aged 5, is completely useless in France, even though France is a close neighbour of my country and we have dealt with each other since the dawn of history.
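To make the arithmetic concrete, here is a minimal sketch of that decomposition (the helper name is mine, purely illustrative): French reads 80-99 in base 20, and 70-79 piggyback on 60.

```python
def french_decomposition(n: int):
    """Decompose n (0-99) the way spoken French does.

    quatre-vingt-quatre (84) is literally 4 * 20 + 4,
    and soixante-quinze (75) is 60 + 15.
    """
    if 80 <= n <= 99:
        return (4, 20, n - 80)   # quatre-vingt(s) + remainder
    if 70 <= n <= 79:
        return (60, n - 60)      # soixante + remainder (11..19)
    return (n,)                  # plain decimal reading otherwise

print(french_decomposition(84))  # (4, 20, 4) -> quatre-vingt-quatre
print(french_decomposition(75))  # (60, 15)   -> soixante-quinze
```

A kid who learned "eighty-four = 80 + 4" has to relearn the decomposition itself, not just the vocabulary.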
Assume for a second that your goal is to teach knowledge, as you say.
How are you telling whether you are successful at that?
Even if you do not care about the personal achievement level (or whatever) of individual students, you still need to be able to measure in order to understand where you are teaching successfully and where you are not, so that you can change/improve/etc. your teaching.
As the comment you replied to said, you can't just wish these problems away, and they are not easy things.
The overall thing is not a single problem; these are systems.
They can't be "solved" through simplistic answers.
Classic "tell me you have never been a teacher without telling me you have never been a teacher". Are there bad teachers? Of course. There are bad employees in all industries. But teaching is an extremely difficult job that underpays so most people are in the profession because they want to help kids.
Some things to understand about teaching. You must always teach to the middle kid in terms of ability and intelligence. By default this already means that some kids will be lost and some kids will be bored. This is made worse by conflating age with competence. Additionally, teachers have no understanding of what the kids are going through at home. Say you have a kid that never does homework. Is that the teacher's fault? Is it because the kid is lazy and just plays Fortnite at home? Or is it because the parent's only job is a night shift and the kid is a de facto parent watching two other kids? Or is it because the parent has a substance abuse problem and the kid hides out at playgrounds until late at night, after everyone is passed out and it is safe to come "home"? Statistically, kids with problems at home also tend to be lower on competence scales. The real problem here is social help for the parents, but we don't have the political will for this. Do you have any idea how often a teacher has had a student's parent come to a conference to discuss concerns about the kid falling behind, only to be told "It is YOUR job to teach my kid, not mine!"? Tell me how testing teachers fixes that. And these are the same teachers that must buy paper/pens/supplies out of their own salary because we ration school supplies.
We have some similar problems at the collegiate level. I worked full time while carrying 12+ credits, paying my own way through college. I had to cut corners and ration my time. This meant lower grades in some classes, but luckily I have the aptitude to get away with it. We are also sending kids to college who shouldn't be there. They don't have a real desire for a professional career outside of something like Social Media Manager. Of course they are going to cheat and use all the tools at their disposal, having grown up digital. They aren't interested in the subject matter; they just want to check the boxes and get through it. There is an issue here that needs to be solved at the institutional level, in that kids will always be better at tech than the teachers, but that is silly to lay at the feet of the teachers. In the end they are trying to lay a foundation of knowledge, but the students have to care. Most college classes don't take attendance; is that the teacher's fault too?
Having the best software engineers doesn't mean anyone will use the product. Having the best doctors doesn't mean patients will do what they are told. Having the best trainers doesn't mean people will workout on their own. Having the best therapists doesn't mean anyone will use the techniques suggested in their daily lives.
I have degrees in math and physics. Those degrees gave me close to zero value in the market. Spending X years in school, then having to prove to an employer that you can do barely more than squat is a familiar experience in a lot of fields.
There are tests you can take in those subjects, such as the Graduate Record Exam. Those tests work to some extent because the subject matter is relatively mature, and consistent from one college to another. And yet there are entire fields of math and physics that I've never been exposed to. Their main purpose is to see if you're conversant in a body of knowledge that would prepare you for typical graduate study, not for a job.
Software engineering is a comparatively young field, with less standardization. There are even debates on HN as to whether software engineering is a real thing. There are places where every programmer has the title "engineer" regardless of their background.
I'm only employable because most people hate math and physics so much that they're relieved if anybody offers to do those things for them. That, and I'm pretty good at programming and electronics.
> So what’s the solution then? Well, maybe we should start by rolling back this common conception that when it comes to schools, everyone’s opinion matters an equal amount, and then listen to the teachers and academics.
Cynically, because they're part of the problem.
Personally I don't think the "obviously bad thing" is the current state of testing. I'd instead say that the problem is the intermingling of education and credentials.
Society doesn't care about education, however all of the inspiration is geared around it. So you wind up with the case that students either become disillusioned with the system after realizing it's rife with hypocrisy, or the system is otherwise structured in a way that creates resentment.
As an anecdote, a friend I went to high school with dropped out when he was barred from participating in the school band, because it was his only source of motivation to show up every day. I don't think that was the intended goal, but when faced with the reality of the situation, the system is unbending.
I think as a result, the ones you see succeeding in college tend to be more driven by either ambition or obligation rather than any actual desire to learn. So in that respect, I think colleges are self selecting for students that are more willing to think cheating is a good idea. And in many respects they may not be wrong.
> Curious, isn’t it, how all these systems seem to fail in the same way?
No...the systems haven't evolved independently. It's no more surprising to me than learning that felines could get covid.
For what it's worth, I'm a community college dropout. The education and mental health systems were absolutely structured in a way where my severe ADHD (and its best friends, anxiety and depression) went undiagnosed and untreated through my senior year of high school. I loved learning, but there wasn't any way to get an education that presented the coursework in a way that could keep me engaged. And of course my inability to do homework was continually met with being told it was some personal failure on my part and that I should just apply myself.
So back to
> So what’s the solution then? Well, maybe we should start by rolling back this common conception that when it comes to schools, everyone’s opinion matters an equal amount, and then listen to the teachers and academics.
_They're the ones that've made me think we'd be better off scrapping the system and starting over._
I taught high school math for 2 years. Which really isn't enough to diagnose (much less solve) the system's problems. But it did give me a sense of how intractable the problem is.
I find it enormously frustrating when people (not you) complain about "teaching to the test." Teaching to the test is good pedagogy! First, determine what you want students to know/do. Then choose how you're going to assess their knowledge/ability. Then design instruction that prepares them for the assessment. This is called backwards design.
I assumed you would say that programmers should start working for the school system, but your final description of problems is not difficult to solve.
The teachers are the problem.
After all, I also sat in school for 13 years.
> In the case of testing it’s because you choose to focus on the obviously bad thing (current state of testing) rather than the very complex and difficult question behind it: HOW do you measure knowledge? And when you decide how, how do you scale it?
I would actually focus on the question of "Why do I need to quantify everybody's knowledge at a high resolution?"
When I was TAing, I held the position - never accepted, I should say - that we should make more courses pass/fail, and that instead of investing effort in numeric grading keys, we should try to give more meaningful feedback on assignments.
Some alternative suggestions I brought up:
* I suggested that the final grade be a combination of the assessment and a roll of 1D6 points - to hammer in that the grading is to a great extent artificial. Somehow this was even less popular a suggestion...
* I once proposed we offer people a perfect passing grade if they just never show up to class nor submit anything, and only people who want to learn would risk an imperfect grade. I really liked that proposal, because it put the two motivations - learning and making the grade - which are often conflated, at direct odds with each other.
Of course none of this was taken seriously - even though I was serious. Kind of.
> So what’s the solution then? Well, maybe we should start by rolling back this common conception that when it comes to schools, everyone’s opinion matters an equal amount, and then listen to the teachers and academics.
Oh please. Teachers advocate for themselves. Academics are currently waging a war against standardized testing for ideological reasons. Instead of a polemic against people giving their opinions please just tell us what you think.
For my money, the problem with education is that we decided it's not about knowledge but rather about increasing the socioeconomic position of participants. From this it follows that everyone needs a 4 year degree. Education will only function when it's a small number of weirdos who want to be there.
Solve the problem by attacking credentialism, reforming student loans, and bolstering alternative post-secondary education (trade schools, bootcamps).
>In the case of testing it’s because you choose to focus on the obviously bad thing (current state of testing) rather than the very complex and difficult question behind it: HOW do you measure knowledge? And when you decide how, how do you scale it?
This sounds like an entirely different question. When you have a method for testing, you have at least two different measures of effectiveness:
- how well the test measures knowledge when it is taken honestly
- how likely is the test to be taken honestly vs. subverted
I thought this thread was about the second question, but you seem to be focused on the first. But these problems require different kinds of solutions, and crucially, it is much easier (but still not easy) to verify success or failure in addressing the second question (cheating) than the first (predictivity).
This is true for so many other aspects of life as well. Things are the way they are for a reason. Not understanding the deep and complex factors that got the system to where it is dooms you to repeat the mistakes of the past. This is why I go for depth on the things I complain and ideate about, rather than breadth. Dive deep on something you care about instead of having an opinion about everything. Humans have done well with specialization. If you enjoy breadth, go for it. I just don't see it as very effective.
I suppose that complaining without proposing solutions is akin to protesting. You may not necessarily know what you want specifically, but you don't want the current system.
To me, measuring knowledge is a minor reason for having tests. The main reason is to force students to study. With no test on knowledge, most people will just gloss over the details and not learn.
This question has no meaning unless you specify what the goal of the measurement is. There are two main options.
1. Measurement as part of the education process — for the sake of both the teacher and the student.
2. Measurement as part of external qualifications — for the people who would later use the credentials achieved in measurement to accept you to higher education and to extend job offers.
Most of the problems with different measurement strategies happen because people conflate the two.
> These are very hard questions, and it’s frustrating to read the phrase “we need to fix the system” because yes, obviously we do, but agreeing that things are bad isn’t the hard part, and probably input from people who have never worked in the field is of pretty limited value in how to resolve the hard part, and will not do much more than annoy teachers even more.
It's kind of hard to believe this needs to be said, because it is so obviously correct.
The petit-bourgeoisie elegies here are ridiculous.
You take the higher moral ground by establishing that you have "studied pedagogy and know all about it" and then you proceed with providing cliché points on how education has failed.
How can you apply criticism to a system you have been indoctrinated by? How fruitful is it gonna be?
i’m not so sure we _can_ fix it or even _should_ fix it. in my opinion, fixing implies a standard of perfection. it’s an imperfect system, formulated by imperfect people - the types we’re going to meet and interact with for the rest of our lives. there are always going to be imperfect ways of measuring the “goal”, be it content domain knowledge, or project completion kpi, or something else.
the positives of an imperfect system that i can think of off the top of my head are that they give teachers the ability and motivation to find creative ways to impart information and knowledge, and it can implicitly educate pupils in how to navigate complex, broken systems.
teachers who come up with novel educational methods are generally heralded for their innovations, but there’s not much else to incentivize them to remain or continue to innovate. not to mention the fact that those innovations may be expressions of their personalities and not an actual template for how every teacher should teach.
the same seems true for students. they find adaptations for navigating those broken systems. some will fall into the stream of the system, play the game, and get high marks. what have they learned? i’d say they have learned a fair amount. some will discover a need to collaborate to survive, and they will have learned about themselves. some will complain and resist, but pass based on raw willpower or charm or something else. some will fail but will see gaping holes in the system to explore, exploit, or fill. they’ve also learned.
these are just a few of the dimensions i can think of off the top of my head. i believe that the primary way that we should seek to reform or improve educational systems is through how we treat the educational infrastructure (teachers, staff, materials, services) and the students who are failing to engage the experience due to factors beyond their control (mental health, SES, etc.)
> Cynically, this will never happen because reforms to battle educational issues in any democratic society usually takes more than 5 election cycles to show obvious results (and when the bad results start stacking up current leaders will take the flak regardless).
Well obviously we need to fix the system of the system!
It is perhaps unreasonable for the purveyor of a critical service to demand that they be the only one who is allowed to understand or validate the quality of the service on offer, and to insist that the customer is too naive to be permitted a viewpoint.
Finland is a great example of a world class school system that doesn’t measure “knowledge”. So perhaps trying to measure “knowledge” is the real problem?
I don't think the problem is that complicated - you just can't measure knowledge with a process (or a machine). Only a human can approximate another human's level of understanding.
Trying to create a knowledge factory seems to me a pipe dream. All cheating comes from trying to force learning into a rigid mechanical box.
Solution? My opinion - remove colleges, bring back guilds.
Of course, this is an oversimplification, but the moment you remove the need to print out diplomas, everything does become simpler. The "measuring understanding mechanically at scale" is the hardest problem.
Again, it’s so easy to criticize, point out the “problem” and then offer no solution.
The “knowledge factory” exists for a reason. The way our society is constructed, we need structured specialization (pick a course), verification (you’re OK) and rating (you’re the top 5%) — because our entire society expects these things to work and be available.
It sounds like the only difference in your example is that these things exist but are not centrally verified to be identical, because apparently diplomas themselves are the problem.
People need to be able to improve, be excluded when incompetent and rewarded when excellent, because that is how our society works in all other aspects, and the one thing that will always be true about an educational system is that it will mirror society: and if it doesn’t currently it will in a few decades.
You cannot suggest fundamental changes to an educational system without more or less advocating a revolution in society. No wonder most complaints stop at the problem and never continue to proposed solutions.
> It sounds like the only difference in your example is that these things exist but are not centrally verified to be identical, because apparently diplomas themselves are the problem.
Yes, exactly.
You are using a diploma as an indicator of knowledge, and you have a diploma-giving machine (university) that gives the diploma to anyone that can pass a test. People cheat the test in order to get the diploma. It will always happen, no matter how intricate you make your tests, because you cannot automate knowledge verification.
It isn't a "problem" - it's an "impossibility". And it's one impossibility software companies have gradually started dealing with - most don't care about diplomas anymore, because the correlation between knowledge and having a diploma gets lesser and lesser the more diplomas are printed artificially.
So, that is a pretty good solution, too. Remove diplomas altogether, and let employers measure knowledge in a way they see fit. They will have responsibility for the mistakes of an employee, so it makes sense that they make the criteria. That way, educational institutions will have to make their education useful in the real world, or their reputation will crumble.
> So, that is a pretty good solution, too. Remove diplomas altogether, and let employers measure knowledge in a way they see fit.
It's not a "pretty good solution", which shows if you start breaking down how you would attempt to achieve this. How do you, first of all, "remove diplomas"? Do you suggest that we fundamentally overturn how our entire society works just to remove cheating?
A "diploma", "certification" or whatever you want to call it, can be issued by many different entities: a collection of nation-states, a state itself, non- and for-profit organizations, even individuals. These all have varying degrees of value depending on the trust placed in the issuing body, from a certificate of having completed Bob's Weekend Sales Course to a state-issued certificate to perform a specific type of surgery.
First of all, which of these are you saying should be "removed"? Only the ones from universities? All of them?
Secondly, how do you remove them? Do you outlaw them?
Thirdly, what happens when they're all gone? How do you certify a surgeon?
Very simple - just stop giving them to people and let them be forgotten as a concept.
> Do you suggest that we fundamentally overturn how our entire society works just to remove cheating?
Are you suggesting that university diplomas are a fundamental factor of how our society works? If so, would you please explain how?
> First of all, which of these are you saying should be "removed"? Only the ones from universities? All of them?
Only those from the universities. Universities are officially recognized "places to learn" and they should be kept that way. Studying for longer than the "appropriate" number of years should not be frowned upon, but encouraged. The whole process of "verification" should be completely independent of learning.
Bundling "learning" and "verification", the way universities do, inevitably leads to hordes of people who want verification, but not learning - i.e. cheaters.
> Secondly, how do you remove them? Do you outlaw them?
Very simple - stop issuing them (I suppose a country-level ban of university diplomas could do, but politics depend on the country so I can't give you a general answer). If you want a certificate authority, make a certificate authority and make it its sole purpose to verify that people have the knowledge. Leave university out of that.
> Thirdly, what happens when they're all gone? How do you certify a surgeon?
A certificate authority (that only verifies surgeons' skills, and doesn't bundle the process with "learning").
The proposal of a central verification authority is actually how pilot licenses are issued. The flight schools have no legal power to examine. For example, theoretical exams are administered by the country's aeronautical authority, which itself only doles out exams with questions drawn from a standardized set. Pilot training was, for me, how education should work. I was not the best pilot, but I now see clearly that this was both my own fault and a sign that it was not my calling.
Also, in Portugal and in Poland there is a thing called the national exam, a nationwide exam at certain checkpoints. It is very useful in revealing per-school socioeconomic issues as well as grade inflation (school grade vs. national exam grade). I honestly do not understand why verification is not done independently of teaching at all levels of education. It would also liberate teachers to focus on teaching while having a nationwide benchmark validating their approach. Teachers, ironically, hate to be evaluated.
Another factor that is very hard to handle is that education is an industry that employs a very powerful class, the teachers. More often than not, when education is in the news it is for teachers' labor issues. In Portugal they recently held a strike on national exam days. To give you an idea of how important national exams are, they are kept in police stations and delivered by police officers to the schools on exam day to avoid leaks, so nobody has an unfair advantage.
I will never forget that teachers held students' national exams hostage for their negotiations. (They lost, and suffered such a public backlash that their bargaining power was neutered for a few years.)
So you’ve come full circle here, though. What’s the difference between trying to have people not cheat the test at university and not cheat the test at the certificate authority? I was with you until you brought in this part, because it’s literally the same thing now but at a different building, basically.
Employer verification made sense; you mention they have to deal with it if their hire is incompetent. This secondary certificate authority idea undermines your entire argument, though. Maybe I'm missing something and you have a good idea for how the CA will mitigate cheating in a way a university can't.
> So you’ve come full circle here, though. What’s the difference between trying to have people not cheat the test at university and not cheat the test at the certificate authority?
Because the certificate authority would be separated from teaching, its sole purpose would be to prevent cheating, and it would be able to focus on that completely. Currently, universities don't have much incentive to focus on preventing cheating, whether because of overworked professors or simply because they must print X diplomas a year or disappear.
Besides, when a certificate authority's sole purpose is verifying knowledge, it will become obvious which authorities allow cheating - people certified by them will fail at their jobs, ruining their reputation.
> This secondary certificate authority idea undermines your entire argument though. Maybe I’m missing something though and you have a good idea for how the CA will mitigate cheating that a university can’t do.
The difference is subtle but important - currently, universities don't suffer much reputational damage from cheaters, because the testing aspect of university is interleaved with the learning aspect - so if a university has good learning opportunities, nobody cares if a certain percentage of its diploma-holders are cheaters. They are on their own.
With certificate authorities, cheating will be naturally devastating, because the certificate authority will (I assume) serve just as a filter for employers - employers will choose employees which hold certificates from a trusted authority, i.e. the one that doesn't let people cheat. So there will be natural incentive to prevent cheating, and the free market will do its thing.
> A certificate authority (that only verifies surgeons' skills, and doesn't bundle the process with "learning").
It seems like you've just moved the cheating problem from one organization to a different organization? How would this new certificate authority measure learning better than universities are currently measuring learning? Are there examples of such certificate authorities existing now? Do they also have cheating problems?
> It seems like you've just moved the cheating problem from one organization to a different organization?
While that might seem redundant, keep in mind that most cheating happens because responsibility for both teaching and testing falls on the professor of the subject, and there is not much incentive to prevent cheating when the school must print X diplomas a year or disappear. I assume that an organization whose sole purpose is to certify knowledge can be much more specialized in testing and spend most of its time combating cheating.
> Are there examples of such certificate authorities existing now?
By removing any kind of formal authority by them. They should be places to learn, not a bureaucratic machine for deciding who is "worthy" and who is not.
There could be another organization that specializes in knowledge verification and certification. But that should be completely independent of learning.
Of course, now the question becomes "how do we prevent a verification and certification authority from abusing their power?" - but that question is not particular to this context and can be applied to any human organization of any kind. However, in this particular case, I think that employers would be the verification authorities themselves. They are the ones that need the real-world knowledge, so they should be the ones to measure it.
Each student would be encouraged to serve as a mentor for up to 3 younger students, starting in year 3 of their studies and each semester thereafter, and would receive a small stipend for each mentee.
At the start of each semester, prospective mentors would be listed and students would be allowed to seek them out. Both parties would be allowed to know the grades of each other, and the mentees would be allowed to reach out to former mentees of the same mentor. Mentors would also be allowed to do some kind of self-promotion where they could "sell" their abilities as mentors.
After each exam, the mentoring would count as having taken a course for the mentor, with the grade equal to the average grade of their students, and it would provide a number of "mentoring credits" equal to the number of students passing. This might seem unfair, but the idea is that it would encourage competition among mentors to "catch" the best students, encouraging the mentor to put effort in.
For the next semester/course, new mentor student connections could be set up, or the same as the semester before could be kept, if both sides agreed.
When a student receives their final diploma, all the mentoring results would be listed: the courses, the average grade of their mentees, and the number of mentees, as well as total students-times-courses mentored and the average grade across those.
I can imagine a lot of employers would be highly interested in this information, as it could be extremely predictive for some kinds of positions (in particular positions of leadership or teaching). Students who had repeatedly mentored other students who achieved great results would likely, in the future, be able to recruit and keep high-quality employees and help maximize the output of a team. In both cases, it might well be the very students they had been mentoring who would be the potential hires.
Or, if employed by a university, they would be likely to attract high-quality post-graduate students as well as be effective supervisors for them.
Now, shy or introverted people, or those with weaker social skills, might find this unfair. But I think there would still be room for people who focused exclusively on learning the subjects themselves, and these would have more time available for that. They might also care less about those jobs where mentoring success would be seen as crucial.
This is what we do in companies with “train-the-trainer” and with satellite/ambassador engineers/teams.
The only thing I know from education that does something similar is “Jena-plan” in elementary school and Teaching Assistants in uni. Nothing in our high school.
A minor critique and support for what you are saying.
The experts of pedagogy (academics and teachers) are rarely digitally literate enough compared to their students (they can't use technology in any competent or sufficiently engaging way). This line was crossed about 15 years ago, with most institutions still digitally functioning like it's 2005.
Some neat studies out there about this. Profs are smart and know what they don't know. Academic leadership often doesn't have the will to modernize. We saw how many colleges resisted modernizing during the pandemic lockdowns, and when they came back they lamented how poor online learning was after having designed a solution in 2 weeks.
Bureaucracies serve ultimately for their own self preservation.
Pedagogy is a red flag word. People who use it incorrectly often discount themselves pretty quickly. Pedagogy is about how children learn, it has less relevance in higher education, which is more about learning how to learn. Andragogy is how adults learn and I invite anyone to see how often that word is used.
It's pretty telling at educational conferences how "pedagogy" gets used every 3 words when how adults learn (andragogy), including young adults, is quite different from how children learn.
When I hear the word pedagogy used to refer to anything beyond high school, it's a telltale sign of people using buzzwords, and a sign they might not really know the difference.
If experts really wanted to get into digital learning taxonomies based on old ones that don’t seem to bridge the divide, maybe that would be a start.
Instead, academics have insisted on sharding the digital learning experience among dozens of digital tools for students (how many different things do students have to log into?), perhaps so it will not challenge their job security. Ironically, most institutions have streamlined enrolment systems that are pretty complex but can take your money smoothly.
I think part of this is because too many academics are poor listeners, unwilling to openly entertain ideas and positions that are not their own. There are amazing academics who get all of this and more (including the solutions), but they are often buried in the toxic cultures present at most post-secondaries.
In academia, adoption of ideas is gradual and slow - too slow for the rapid changes that have taken place in society over the past 2-3 years.
Another observation is that most universities only teach how to teach children, and churn out teachers. Maybe it’s why the word pedagogy is so present in academic circles. Do universities have a 4 year degree to teach university students like they do for K-12?
In this post, the way WhatsApp and Google Docs are used to learn together, for better and worse, was created by students, not experts. Good on the prof for finding more ways to engage with the material. It's a big problem.
Institutions have not kept up the skills of their staff. It’s decades behind. Probably another 10 years before the folks at the top wanting to keep things familiar enough start to retire and digitally native geriatric millennials can start getting into those roles and help change.
You have a good point about election cycles but it can go both ways to hurt education and curriculum too if that’s what politicians want.
Looking at students, Covid forced 10-15 years of change to happen in 2, while we have students who have missed a big chunk of education. There’s a need for our leaders, experts and institutions to recognize and do something about this, but bureaucracies ultimately serve their own self preservation at all costs.
One choice is to expend all this effort fixing the old institutions; the other is to put the same effort into building new institutions for the future.
Education is no longer measured by hours of butts in seats. Education is no longer like math, unchanged for 500 years, where a curriculum could take a few years to make minor tweaks.
It’s interesting to see how much more society has opened up to taking courses from anyone to learn the beginnings of any topic. If you ask me the clock is ticking on academic brands if they can’t create and revise curriculum faster than the 1-3 years it can take to approve and change a single sentence in a course.
If it's relevant, I've built platforms to deliver online K-12, post-secondary education and industry training certification for an unusually long time. It feels like a weird world sometimes, with the lens still stuck on creating and delivering education like it's designed to be stored on encyclopedia CDs.
Meanwhile, industry is often having to fill its own gaps to build the skills and competencies it needs in people, because education isn't turning out the people that are needed. The advances industry has put in place to keep its people safe shouldn't be discounted.
> Pedagogy is about how children learn, it has less relevance in higher education, which is more about learning how to learn. Andragogy is how adults learn and I invite anyone to see how often that word is used.
It's always amusing to see lectures on correct usage from people who don't know the difference between etymology and meaning. (And also who don't know the etymology, either, since etymologically, pedagogy isn't “how children learn” but more like “the act of leading children”.)
In English, especially American English, "andragogy" is mostly used in relation to a particular theory/approach to adult education originating with Malcolm Knowles, who leveraged the same conflation of etymology and meaning to promote it (even when the theory originated, pedagogy was well established with its modern, more general meaning despite the narrower sense of its Greek roots). Education for different audiences by age or other circumstance is not generally distinguished by different Greek-root terms in English, but by English terms ("early childhood education", "adult education", "continuing professional education", etc.).
> Pedagogy is a red flag word. People who use it incorrectly often discount themselves pretty quickly. Pedagogy is about how children learn, it has less relevance in higher education, which is more about learning how to learn. Andragogy is how adults learn and I invite anyone to see how often that word is used.
The original root of "pedagogy" relates to children, but I still have a lot more time for someone who uses that industry-standard term than I do for anyone preaching 'andragogy'. It's a niche, culturally-bound and assumption-based theory with little research to actually support any of its claims.
Despite the grandiose name, 'andragogy' is just another 'learning styles' or 'growth mindset' - it's pop psychology designed to sell training courses.
I TA'd for a Prolog course at my university (Imperial College London) during the four years of my PhD there. As part of that work I helped correct students' papers. It was pretty clear to me that the students were sharing their code and only changing variable names etc. to make it look different.
It didn't work, because you could see the same, let's say, idiosyncrasies in their code. For example, there might be three or four different ways to solve a coding exercise, and about 60-75% of the papers would solve it the same way, which was not necessarily the best, or even the most obvious, way (what is the most obvious way to solve a Prolog exercise might not be common sense, but that's why you are given a lecture first, and then the exercise).
What's most interesting is that I saw the same patterns repeated over the three years I TA'd for one of the Prolog courses. I guess they shared the answers between years, or somebody had put them online (I searched but couldn't find them). Or they just copied solutions to similar exercises they found online.
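The kind of copying described above, where only variable names change, is exactly what simple normalization catches. Here is a toy sketch of the idea (my own illustration, not the actual marking tools used): rename every identifier to a positional placeholder, then compare the normalized token streams.

```python
# Sketch: detect renamed-variable copying by alpha-normalizing identifiers.
# The regex, placeholder scheme, and sample Prolog snippets are illustrative.
import re

def normalize(source: str) -> str:
    """Replace each distinct identifier with v0, v1, ... in order of first use."""
    names = {}
    def repl(match):
        word = match.group(0)
        if word not in names:
            names[word] = f"v{len(names)}"
        return names[word]
    return re.sub(r"[A-Za-z_]\w*", repl, source)

# Two "different" submissions that differ only in variable names:
a = "append([], Ys, Ys).\nappend([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs)."
b = "append([], Bs, Bs).\nappend([A|As], Bs, [A|Cs]) :- append(As, Bs, Cs)."

# The identical structure survives the renaming:
print(normalize(a) == normalize(b))  # True
```

Real plagiarism detectors (MOSS and the like) go much further, but even this trivial normalization defeats the rename-and-resubmit trick.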
I didn't report the cheating because I felt there was no benefit in doing so. In particular with Prolog, because it's not a language commonly used in the industry, and it's taught at Imperial mainly for historical reasons (there are many of us logicists studying, or teaching, there) I reckoned that most students found it a useless chore and did not understand why they needed to learn it, and why they needed to "waste" time solving those coding problems. So they copied from each other in order to get the job done quickly and then have more time to spend on the things they felt were more useful to them (like learning Python or "ML" I suppose).
I personally thought, and still think, that learning to program in Prolog is useful, just to disentangle a programmer's mind from the particularities of the coding paradigms, and the programming languages, she's most familiar with. At CS schools today, programming is introduced with Python and I guess it's easy to get into a mindframe that all programming languages must necessarily work like Python. Studying languages from different paradigms, like Prolog or Haskell, can shake you off that mentality (it sure did me, back when I did my CS degree).
The problem is that you can't really force this appreciation of the need to learn different things on students, who are often in a terrible hurry and under terrible pressure to do well in their course so they can get on with life. The Prolog course I TA'd was mandatory, so it must have really felt like someone was trying to force the knowledge down the students' throats.
I don't think that's a good idea. You can't teach people that way. They'll just see your obvious effort to force them to learn what you want, and they'll simply take the obvious route around it. And that ends up teaching them a lesson that you really weren't expecting to teach: the world is full of idiots who think they can teach you things, but you know better than they do and you'll show them who's boss.
That's what students do with tests, also. They can see they're a useless waste of time and they can see the obvious way around them is to cheat, and that it's to their benefit to cheat. And so they cheat. I don't have solutions to this. The students shouldn't have to fight the school, and the school shouldn't have to fight the students. The school is there for the students' benefit after all.
> probably input from people who have never worked in the field is of pretty limited value in how to resolve the hard part, and will not do much
This is the story of hacker news. And of most other online forums. And of most meetings. Let’s bikeshed. Hi, I’m on the Internet and I read the whole post title. Let me share my thoughts.
People are stupid, lazy, and uninformed in the general case. People writing in 8-line comment fields on mobile aren't going to be the exception.
There is no need to be frustrated. Unless you enjoy that feeling. Then by all means embrace it.
Try this? Dip into the comments with the right expectations. These are off-the-cuff, uninformed thoughts of the masses. Maybe they make you laugh or cry, or maybe something inspires you. Maybe there's a gem buried here somewhere.
Don’t expect HN commenters to know anything deep. If you are in the mood for a deeper thought, go to the library.
The other problem with the "we need to fix the system" arguments is that they often ignore the much greater problems in our society.
Our schools, in the aggregate, aren't that bad. We have a broad spectrum and inequality is severe, but even in the worst-off areas, it's not the schools so much as broader social conditions that are producing lousy academic performance. If kids are getting evicted, they're not going to be able to turn homework in on time. If they're doing nothing all summer, they're going to backslide. I also question the social value of "fun" projects like dioramas in grade school: the result seems to be that middle-class kids' parents do all the work, producing adult-quality work, while the less well-off students turn in projects that look like they were made by kids.
We have ridiculous rates of cheating because we're in a society run by people who cheated and everyone knows it. Corporates cheat; you can't become (or stay) an executive if you don't lie and backstab your way to the top. The fish is rotting from the head, and young people are extremely alienated. This doesn't justify their actions, at an individual level, but it does explain the upsettingly high rate of dishonesty we're seeing.
People also underestimate the power of peer framing and moral drift. Generally, people don't wake up one day and decide that they want to cheat their way through college like some future insurance executive. It happens over time. They start with minor offenses like lifting a sentence without attribution, or looking up one answer on a phone... but, over time, they're plagiarizing whole papers and have stopped doing the actual work... and this is when they usually get caught.
Dishonesty also goes both ways. Grading might be broken, but a world without it would be worse--removing the SAT enhances the preexisting advantages of the rich. Once people become teenagers and realize that advancement in society isn't only based on merit but also requires playing social/nonacademic games (in high school, to be popular and appear "well-rounded" to admissions committees; in college, to get laid but also to get introduced to the best companies; in the work world, to ingratiate oneself to the right people and thus climb the ranks over more deserving but less likeable peers) at which everyone cheats, because everyone has to do so... because global corporate capitalism is itself a cheating system in which most of us are predestined to lose... it becomes harder to make a moral argument to them that cheating is categorically unacceptable.
This comment is saying "No one else here knows as much about this as I do", and little else.
Your only concrete solution, produced by your years of study, seems to be "shut up and listen to educators". Well what are they saying?? How do we fix the problem of cheating and the other issues associated with measuring learning through testing?
If you have so much specialized knowledge about the problem, what does it tell you about how to fix it?
I said no such thing. I never even claimed to be a teacher (which I am not).
What I said was that we should listen to people who are active in the field for proposed solutions to problems we face. I also never said I was active in the field, in fact I said quite the opposite.
I really don’t understand why your comment is so confrontational. I never claimed to have the answers to all problems in all educational systems across the globe, I only suggested that if you want to resolve them you probably shouldn’t do it by having an open discussion involving only programmers (or any other non-pedagogical group of people for that matter).
Is this what you really got from that comment? The implicit point is that the better you want to assess knowledge the less scalable it is, eg. giving oral/one-to-one interview style assessments to university students is not feasible even if it is a better knowledge assessment.
Therefore, saying "fix the system" isn't helpful. Everyone knows some fixing needs to be done, but not how; and even those who do know how don't have the power to do it. Look at problems like poverty, housing supply, climate change: I can look at all of these and say the system is broken.
> This comment is saying "No one else here knows as much about this as I do", and little else.
No, it's not. Not at all.
But your comment is saying "How dare you suggest that we listen to experts? We all should have a say!". And that's the problem the GP post is pointing out. Do you argue like that with your doctor before surgery? With your lawyer before they defend you in court? With a chef before they cook a meal you ordered in a restaurant? No? Then why is education any different, and why does suddenly everybody claim to know how it should work?
This happened to me in undergrad with an autograded class. I remember soloing the class and thinking that the assignments were super hard. They would post grade averages for assignments and I was doing worse than usual.
I remember a TA posting about cheating being an issue. They even released a graph of anonymized student repositories, with edges indicating a detected instance of cheating.
Turned out that a HUGE cohort of people were cheating (I think maybe over half the class).
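A graph like the one that TA released can be sketched in a few lines: pairwise similarity above a threshold becomes an edge, and connected components reveal the cohorts. Everything below (the similarity measure, the 0.9 threshold, the toy submissions) is illustrative, not the course's actual tooling.

```python
# Sketch: build a "cheating graph" from pairwise submission similarity,
# then extract connected components as suspected cohorts.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude text similarity in [0, 1]; real tools use token/AST comparison."""
    return SequenceMatcher(None, a, b).ratio()

def cheating_graph(submissions, threshold=0.9):
    """Adjacency sets linking suspiciously similar submissions."""
    graph = {sid: set() for sid in submissions}
    ids = list(submissions)
    for i, u in enumerate(ids):
        for v in ids[i + 1:]:
            if similarity(submissions[u], submissions[v]) >= threshold:
                graph[u].add(v)
                graph[v].add(u)
    return graph

def components(graph):
    """Connected components = cohorts of mutually similar submissions."""
    seen, out = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(graph[node])
        seen |= comp
        out.append(comp)
    return out

subs = {
    "s1": "for i in range(10): total += i",
    "s2": "for j in range(10): total += j",   # near-copy of s1
    "s3": "print(sum(range(10)))",            # independent solution
}
print(components(cheating_graph(subs)))
```

With real class sizes the pairwise loop is quadratic, which is one reason production tools fingerprint submissions first; but the graph-and-components view is the same.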
The scariest thing about cheating is that whenever a bunch of people do it (and aren't caught), it screws up the class curve so much that people who don't cheat will be forced to put in way more time studying, which will then take time away from other classes. It also screws up metrics that the professors and TAs use to understand how well they're teaching material, which assignments to drop, etc
imo this is why people shouldn't cheat. If nobody cheats, the grades might on average be lower but once the class is curved or assignments are dropped it will be a fair indicator of where everyone is at. If people cheat, it screws up the fairness and can encourage others to start cheating. If everyone (or most) are cheating, you have people who aren't getting anything out of the class, getting credits as prerequisite that they shouldn't be getting, moving on to future classes and continuing the cycle
Yes - it's similar to the situation in high school where there are kids who want to learn and kids who disrupt the class because they're lazy/not motivated/have issues/want attention.
Automated teaching and testing with social sharing automate that dynamic. There's less attention seeking, but much more passive aggressive subversion.
But that's not the scariest thing. The scary thing is that if it's a STEM field these students go on to get jobs in which they have no competence. This is truly catastrophic if you want software that works and buildings that don't collapse.
Worse - the skill they've learned best is gaming the system and hiding their incompetence.
It's a double failure - of culture as well as knowledge.
Kudos to the prof in the story for handling it so well. Most profs won't.
The underlying issue is that there's been far too little research into the social consequences of automating all kinds of interactions.
The 70s utopian ideal of "Give everyone a computer to empower them" turned out to be ridiculously naive. What happened instead is that various dysfunctional economic and cultural patterns were automated and enhanced.
Culture as a whole has no defences against this because hardly anyone has realised that it's a problem inherent within the culture-amplifying effects of automation, and not an unfortunate byproduct that just sort of happens sometimes - and who knows why?
> But that's not the scariest thing. The scary thing is that if it's a STEM field these students go on to get jobs in which they have no competence. This is truly catastrophic if you want software that works and buildings that don't collapse.
And also why we can’t trust a degree to show competence, leaving it to companies to figure out with LONNGG multi-part interviews
I really appreciate this comment, especially the bit about how automation amplifies culture - that's something I've felt for a long time but you've stated it eloquently.
Schools want to churn out more students, industry wants more fresh grads.
Online quizzes/assignments (which are vulnerable to cheating) and Leetcode screener questions (which are just a little better than rote memorization) are how schools and industry react to scaling issues.
I feel like everyone would have better outcomes if we could somehow be satisfied with less growth.
Another thing (that the story goes into in depth) is that cheating creates a whole lot of extra work and stress for professors who would rather just be teaching material and not exerting enormous effort into policing other people's behavior and enforcing rules. (And that's really hard to do if you're afraid of making a mistake somewhere and punishing someone too harshly, or for something they didn't do.)
The students hunting for "the snitch" adds another layer of dysfunction. If someone joins the group chat and leaves because they notice other people cheating, then they could become targets of the other students. That's not the sort of college experience anyone wants to have.
>imo this is why people shouldn't cheat. If nobody cheats, the grades might on average be lower but once the class is curved or assignments are dropped it will be a fair indicator of where everyone is at.
This is an issue that makes me feel conflicted when a lot of people are already cheating. If a lot of people are already cheating, it doesn't make practical sense not to cheat: you're really just putting yourself at a big unfair disadvantage, and making an inefficient use of energy that could be used elsewhere. It's unethical to join in the cheating, but a situation like that feels like there are just a lot of arguments to cheat. I think the best way to handle that situation would be to participate in the cheating but also make a genuine effort to understand the material, as opposed to leveraging cheating to min-max effort against grades.
On the other hand, all the effort could pay off in an unexpected way down the line because all the cheaters pushed you to achieve more than you would have normally, plus the ethical implications.
Full disclosure, I did have a situation where cheating like that happened, and I did take it. It was for a pretty irrelevant course, and I don't feel bad at all about it. I also haven't made much use of the course material afterwards.
In the best case, you get nothing. Most probably half of the students turn against you. And possibly the teacher himself takes revenge on you for snitching. Some teachers are of the opinion that “snitches get stitches”, often as a way to cope with their own lack of teaching, and sometimes they see it as a good life lesson for the student.
So report the teacher. The whole snitches get stitches thing needs to stop. We aren’t in a goddamned prison yard. And if someone actually threatens you, call the police. Raise hell about it. Gangster, prison yard culture needs to die. Cheating is never ok. Cheaters should be thrown out of school with zero second chances. People in the US are often going into debt for tens and even hundreds of thousands of dollars for college — when people cheat, that diminishes the value of that extraordinary expense. Not to mention honor and integrity ought to matter.
When I was in high school, a bunch of AP students cheated off the valedictorian our senior year...when it was exposed, they hushed everything up rather than allowing a scandal that would ruin the school's and the students' reputations (some of which were bound for Ivy League schools in a few months' time).
Sometimes, the entire community will protect its cheaters.
As Electrical engineering TAs in the late 80's, we knew who was copying class homework assignments from their classmates based on transcription errors in the handwritten work that was turned in. Since the class professor couldn't care less about the cheating, we would consistently score the source assignments a few points less than the ones that had copied the assignment. We kept this up for the entire semester.
Option C: help the other students that cheat, let them copy your work, or even help them directly with the work (especially for things like home exercises, if not actual exams).
After you graduate, you will have a network of friends that consider you a super-smart, trustworthy, loyal and friendly person.
Chances are, you will learn more than the rest, and end up with a high GPA. And some of the lost opportunity to socialize in bars and clubs, and build networks that way, will be compensated by the relationships with people you helped.
Btw, this same behavior can still work once you're in a job. If you get a reputation as someone who can help when people are stuck on some difficult problem, without shaming them to their boss for it, that tends to reflect positively back on you over time.
> This happened to me in undergrad with an autograded class.
I had an entirely autograded class in my first year at uni, and it was awful: an immense number of tests each week that were so basic yet so picky about the input that you lost points for no good reason. It frustrated me immensely.
Ended up making a browser extension that parsed the tests and calculated answer probabilities based on previously completed tests with similar questions. Unless the teachers were willing to hide the final score, it could figure out the correct answer to each and every question.
It ended with a large majority of the class using the extension during the final exam; they couldn't really prove anything and nobody got caught. The next year the number of tests was reduced and the exam was on paper (I'm sorry, undergrads).
I don't feel bad about it: the lecturer abused our time and resources asymmetrically, and listening to feedback was years overdue. It doesn't always boil down to "omg cheating bad".
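The "answer probabilities" idea described above can be sketched roughly like this. This is a hypothetical reconstruction, not the actual extension: the question/answer representation and the full-score signal are assumptions. The core trick is just to aggregate observed attempts and prefer answers that appeared in attempts that scored full marks.

```python
from collections import defaultdict

class AnswerStats:
    """Toy sketch: track per-question answer outcomes across graded attempts."""

    def __init__(self):
        # counts[question][answer] = [times_seen, times_in_full_score_attempt]
        self.counts = defaultdict(lambda: defaultdict(lambda: [0, 0]))

    def record_attempt(self, answers, got_full_score):
        """answers: dict mapping question text -> chosen answer."""
        for question, answer in answers.items():
            seen = self.counts[question][answer]
            seen[0] += 1
            if got_full_score:
                seen[1] += 1

    def best_answer(self, question):
        """Return the answer with the highest empirical success rate, or None."""
        candidates = self.counts.get(question)
        if not candidates:
            return None
        return max(candidates,
                   key=lambda a: candidates[a][1] / candidates[a][0])

stats = AnswerStats()
stats.record_attempt({"Q1": "A", "Q2": "C"}, got_full_score=False)
stats.record_attempt({"Q1": "B", "Q2": "C"}, got_full_score=True)
print(stats.best_answer("Q1"))  # -> B
```

With enough repeated attempts per question bank, and the final score visible after each attempt, this converges on the answer key, which is exactly why hiding the score (or switching to paper) defeats it.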
Not at university, but at a company I worked for: the company had legal requirements to train its employees and contractors in various aspects of integrity. They'd made an app that gamified this, and at the end of every month we had to have our score above 70% in that app.
A coworker used our testing framework to write an app that would collect the correct answers and fill them in automatically, only waiting for user input if it didn't recognise a question. He gave the code to me when he left. I think I tried it once or twice, but it failed to work for me due to some network issue, so I figured I'd just stick to the spirit of the thing and keep my score up manually.
I was taking an upper-level marine biology course - the tests were VERY difficult, but they were take-home, do them at your leisure. I was cool with it. But I walked into a coffee shop one day and found 75% of the class sharing answers and cheating anyway.
It seems very funny to me, as a programmer, that what university calls "cheating" and threatens consequences for, the industry calls "consulting with your colleagues" and encourages.
There is an enormous difference between the people "consulting with colleagues" and the people openly cheating. When I was doing my engineering degree there were always groups of students who would go over assignments together, see if their answers matched up, and then look through the textbooks and course materials together to see which person was right. That's consulting.
There were also people who would pay someone else to do the assignments, and then upload PDFs of completed assignments to a private chat. World of difference.
Sure, but what the other person is describing as cheating sounds, to me at least, very much like the former case. There was a task, the students were allowed to work on the task in their own time, with their own resources, and chose to put their heads together rather than work on them individually. To me, that seems fairly reasonable.
That said, a lot of this stuff seems kind of culturally specific. At least for the two degrees that I experienced in the UK, there were roughly three types of graded work: exams (overseen, sit in silence, possibly with your own notes but generally not, fixed time limit around 1-2hrs); lab-work (usually done in the building although worked on outside of lab hours as well, fixed deadline of around a week or two, usually graded at least in part based on an oral conversation with the examiner); and coursework (take-home work, fixed deadline ranging from one week to a couple of months, allowed to discuss but direct copying is banned).
In these cases, it rarely makes sense to cheat by directly sharing answers. Obviously in an exam it would be useful, but the conditions of the exam make it largely prohibitively difficult. In coursework, it's usually obvious if people are handing copies of the same work in. And for lab work, the challenge was usually not to just complete the lab and get a "right" result, but rather to understand the task and be able to explain what was going on to the TA grading you. If you can cheat well enough to pass that, you probably understand what's going on - which is exactly the aim of the course anyway.
Whereas it feels like what people are talking about here is students being sent home with problem worksheets, being expected not to talk about them at all, and then getting grades based on those answers. That seems to me to be a system that practically encourages cheating - work together as a group, and you'll obviously be able to achieve more than any one individual, even if you could all pass the course in the first place. In contrast, we also had similar worksheets, but they were never graded, and we were usually encouraged to work together to figure out what was going on. We could then take our answers to tutorials and have a discussion about where and why we went wrong, usually in a group of four or five.
So reading this article and some of the comments here, I'm really struggling to get an image of what these teachers are expecting from their students, if they're setting problems that are so obviously gameable.
In theory the purpose of the university is to help you develop skills independently, so that you can bring something to the table when you do consult with colleagues in the workplace.
That's basically the point of university isn't it? It assumes that not everyone can be qualified to do everything and imposes restrictions on who can be hired based on their known qualifications.
I'd rather say that the equivalent of what the industry calls "consulting with your colleagues" is more a "study session" than an exam (be it take-home).
This is when you learn things, and everyone brings something to the table.
I'd say the equivalent of an exam is a job interview. When the other party wants to see what the individual knows.
Come on. In one case, you aim to prove that you have mastered an established body of knowledge. In the other case, you are trying to solve an open problem with any legitimate means available. Those are entirely different things, and conflating them is obtuse.
Except if you consult with colleagues/friends outside of your company you would get fired. You are not allowed to "consult" with anyone, so why should you be allowed to consult with anyone about your homework.
> Except if you consult with colleagues/friends outside of your company you would get fired
What the hell are you talking about? I routinely consult with others and others routinely consult with me. That's afaik standard practice in most professions: engineering, medicine, architecture, psychology, marketing, and yes software development.
You share confidential information with colleagues outside your job? I guess how much or what consulting you can do depends a lot on your job. I know quite a few companies which don't allow talking about specific problems you are facing because that would reveal information they consider confidential. Now if we are talking general questions, sure but you are very much allowed to do the same as a student. All this to show that sharing information/solutions has quite a gradient even in professional life.
Again: what the hell are you talking about? We were talking about professional consulting, not about sharing confidential information. These are two very different activities, and you can definitely do one without the other.
The statement saying "if you consult with colleagues/friends outside of your company you would get fired" is false.
We are replying to a post which said that what is cheating in education is consultation in professional life. Nowhere did we talk about general "consultation"; we were comparing copying others' solutions (cheating) to "consultation". I would argue that the equivalent of copying someone's specific solution in many (most) cases would be a violation of ethical/company/workplace rules and is not just "consultation".
Asking someone for help with homework (much more equivalent to general consultation) is not generally considered cheating in education either.
Absolutely, the equivalent of cheating is 'not just "consultation"', but in your original message you said "if you consult..." without qualification. And no, you don't get fired for "consulting" alone. Right below you already changed your statement to "they are rules in professional life about how you consult with others". Let's maybe leave it at that? This doesn't seem like a productive discussion and is entering flamewar territory.
I assumed it was clear from the parent I was replying to (like others did), that I meant my statement in the context of how they said it's equivalent to what's called cheating not a general statement about all consultation, but you're right I should have made that more clear.
You are also correct that this is not a productive discussion, because we are discussing a misunderstanding (due to me not being clear enough).
I did find your language and reaction a bit disproportionate though.
I don't think so, grandparent was definitely talking about general consultation, and on your second post you clearly say you mean that all but the most generic kind of consultation is a fireable offence. But happy that we got to a common ground.
We are all replying to a post that said what is considered cheating in education is simply consultation in professional life. I responded that they are rules in professional life about how you consult with others. In other words there are rules about consultation in both education (there is usually no issue about studying together, but there are issues about just copying somebody else's homework) and professional life (e.g. a doctor can't just share a specific medical file with anyone asking for a solution, or an engineer can't ask someone at a different company to program some program for him).
> I responded that they are rules in professional life about how you consult with others.
No you didn't, you said "if you consult with colleagues/friends outside of your company you would get fired", which is clearly false. I'm the one who said rules should be followed. But I'm happy that the mistake was corrected and we're now on the same page.
I hope you told the professor what you discovered. Allowing cheating is complicity in the cheating without the benefits. But I remember the intense social pressure - glad I’m not a kid any more.
I can imagine some people cheating not out of selfishness but just to get by. In the case of a curved class (which has its own set of ethical dilemmas), if nobody cheats then people will be OK on average. But because some people decide to cheat, it screws up the dynamic.
This isn't going to convince anyone who actually wants to cheat; that wasn't really my intention.
Your grade reflects how well you learned with respect to your peers.
Getting 80% in an exam in a weak cohort of students might earn you an A, while doing so in a strong one might get you a B instead. Is this fair toward a student getting a B this year with the same score that earned somebody an A the year earlier just because they happened to enroll with different peers?
Also why should the grade distribution of an exam be determined beforehand?
> why should the grade distribution of an exam be determined beforehand?
Because we know beforehand that, over repeated trials, the population density of some measure will fall into a normal distribution, especially when constructing such a test is more straightforward than designing one where 90% gets an A, 80% gets a B, etc.
So, you are suggesting that if we have a class of 100 people and we want to measure their height, we should just order them by height, and then call the tallest one 2.1 meters, the one in the middle 1.8, and the shortest one 1.5, no matter how tall they actually are?
Wouldn't it be better if we measured their actual height?
The Polish university I went to had never even heard of grading on a curve, and yet cheating was rampant. I think it's just human nature: as long as cheating is not heavily penalised, many people will choose to do it.
During my studies, there was one professor who openly said that, if he caught you cheating, he would fail you in his class (which, in Polish universities, means going through a lot of bureaucracy to not have to repeat the entire year) - as opposed to other professors, who would usually just allow you another attempt at a later time. Also, during written exams, he wasn't staring longingly at the sky through the window (like some other professors did - I assume they wanted to help us cheat, so that we could pass their class and be out of their lives), but was watching us like a hawk 100% of the time. As a result, AFAIK there was no cheating in his class at all - it just didn't pay off.
Personally, this hypocrisy and game of cat and mouse was one of the main lessons I learned in high school and later in university ("it's ok to cheat as long as you don't get caught", "nobody cares about their work anyway" etc.). It's a shame that the education system is corrupting the morals of young people in such a way, but on the other hand, the grown up world they're about to join is pretty corrupt anyway, so maybe it's actually teaching valuable survival skills.
I don't think the comment you replied to meant to say that getting rid of grading on a curve would stop cheaters, merely that it would stop some of the bad effects cheating can have on those not cheating.
This happened to me in an ML class. Somehow people went from not knowing what eigenvalues were to solving all the problems in Bishop's book correctly and exactly the way the TA wanted them. A bunch of them would go to his office hours and have him do a problem, then they would share his answers.
I didn't complain until after the term was over and was just chatting casually with him while waiting for lunch at a food truck on campus. He was completely oblivious and didn't believe it.
On the other hand, you studied way harder than you normally would have, which, if your goal is to learn the material, is in fact a good thing.
But yes, I understand that screwing up the curve has other negative effects. I think in some ways this comes from the tying of learning to employment and life success. People cheat because they don't want to learn but they do want a good job. I don't know how you separate these two things, or whether you should, but if the only outcome of cheating were simply not understanding the material, it would end. In the meantime your best strategy is probably to snitch.
This is your problem right here. There is no rational argument for grading people on a class curve. That just makes the class a lottery and incentivises sabotaging others to get the upper hand yourself.
Curve grading is such a messed-up thing. It leads to professors assuming you should get all A's in basically all of your classes, cuz of this nonsense from US unis.
I did an exchange year, and sent my grades over to a professor. Guy was like "What's up with all the non As" and I had to talk about how we do things differently here (giving points for right answers, and then adding them all up).
Of course there's always an overall curving happening on a high level because teachers choose how hard to make assignments or not, and ultimately grades are not really fundamentally important, but when people just have those to judge you on and choose to, it really fucks things up.
I don't get it, don't curves limit the number of people who get an A each semester? But potentially everyone in a class that isn't curved could get an A (or F)
Well for example, in my school, basically nobody would get 90% on an exam. Very good people would get 80% but the average is more around 60%. So you stuff that into the grade conversion for foreign universities and ... well lots of people get C!
it depends on the curve. "curve" is kind of vague here, sometimes a prof will curve by adding so many percent to everyone's grade to get enough people over a certain threshold
but also the prof will know if the class is full of overachievers and curve in a way commensurate to the class
I think that curves discourage cooperation and encourage zero-sum thinking. Curves are only really necessary when the professor is out of touch with the class and/or prerequisites.
I don't think that people would typically go to the lengths of trying to sabotage someone else, in practice it could just look like a bunch of people working separately and not cooperating at all
In an ideal world, there wouldn't be any curves and the coursework would be tailored sufficiently for every cohort of students for every semester
I've always loved curved classes because they allow very good professors to give real mindfuck exam questions that cause you to walk away from the test with a new perspective on the material.
Those exams were often the most edifying couple hours of the whole semester. You also got a clear sense of the difference between you and a real master of the material which is a helpful lesson in humility.
I went back to grad school recently and it seems like that mode of testing has gone out of style in the last ten years. The exams I took were geared more towards establishing a minimum bar of competence, more for my future employers' benefit than my own.
You don't need after-the-fact curving for that, though.
When I set (free-form written, math/CS) exams, I always made a point of designing the exam in a way where I didn't really expect anybody to get more than 90% of the points. I also made sure that students knew this.
I always set grade brackets before grading (e.g., 80% of points gets you an A, below 35% is a failing grade, and so on). I always ended up with a pretty reasonable grade distribution.
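The fixed-bracket approach described above is trivial to express in code. The 80% and 35% cutoffs come from the comment; the intermediate brackets are invented here for illustration.

```python
def grade(score, max_points):
    """Map a raw score to a letter grade using brackets fixed before grading.

    Cutoffs for A and the failing grade follow the comment above;
    the B/C/D cutoffs are made-up placeholders.
    """
    pct = 100.0 * score / max_points
    brackets = [(80, "A"), (65, "B"), (50, "C"), (35, "D")]
    for cutoff, letter in brackets:
        if pct >= cutoff:
            return letter
    return "F"  # below 35% of the points is a failing grade

print(grade(72, 80))  # 90% -> A
print(grade(20, 80))  # 25% -> F
```

The key property is that the mapping depends only on the student's own score, not on how classmates performed, so one student cheating cannot drag another student's grade down.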
> they allow very good professors to give real mindfuck exam questions that cause you to walk away from the test with a new perspective on the material
I think the problem with this is, although cool, very few professors are capable of doing this well and very few students benefit from it. IOW, it doesn't scale.
To my mind, oral finals or "discussions" would repair the cheating issues quite well. But again, it's hard to scale, and professors would need to be trained in how to do them.
I took a class that gave mindfuck questions and did not need curved grading.
It was simple. The weekly assignments contained 6 questions. One of those was the mindfuck question and wasn't graded (if you solved it you got extra credit, but it hadn't been solved so far).
You don’t even need to hide the crazy questions. Offering them as extra credit is a simple solution that works well.
I think of helping each other as part of the reason we put students in the same room to begin with. So when a student asks another for help with understanding a concept and the other refuses that request, that's working against the purposes of group education, i.e. sabotage.
But I admit I might be a bit radical here compared to most people. (Who I am sure will argue that utilising teacher time to the maximum is the only reason to gather students in one room.)
I agree with you, but also most of the time when people ask each other for help it isn't to help understand a concept. It's usually for source code.
In the class I mentioned above, the prof and TAs weren't pissed off about people using each other to understand concepts or even high level design. They were pissed because people were copying source code verbatim, comments and custom debug messages and all
On a related note, if someone ever asked me for help with a concept or high level design I would be more than happy to oblige. But (most of the time) people aren't asking for that, they want source code that they didn't contribute to
this could be fixed by making assignments collaborative, but then the professor has no means of verifying where everyone is in terms of understanding
Curve grading does have a place. But that is standardized tests, or placing entire cohorts into buckets. So we are talking about hundreds if not thousands of students. For individual schools, or classes within them, it is the wrong method. Either the students know enough of the course material to pass or they do not.
So true. Curve grading works well when the cohort is large and heterogeneous. (Like, say, Finnish matriculation exams: >20k people in a single cohort, and each individual exam within the whole thing has at least 5k people in it.)
As far as I'm concerned, curve grading single courses is just as bad as stack ranking at work. And in here, with high concentration of engineers who loathe stack ranking, a disproportionate fraction are somehow in favour of per-course curve grading.
Fun thing about the Matriculation examination: the small subjects are not on a curve. Like, for example, Latin... or higher-level Russian, where there are enough native speakers to have an effect on it.
The whole stack ranking thing is just weird. Either people are good enough to do the work or they're not. If not, eventually fire them. On the other hand, rewarding extra at the top is a problem, but I don't know if you can do that without someone gaming it.
That's a good point, thank you. I wasn't aware of it, or at least didn't actively remember the fact. But it makes sense and circles on the same thing. Curve grading only makes sense when the cohort being evaluated is large and varied enough.
> The whole stack ranking thing is just weird. Either people are good enough to do the work or they're not.
Even at the risk of veering quite far from the thread topic, I am not sure that's the only angle to look at things. In my experience, pretty much anyone who is genuinely curious and has the discipline to work at understanding how/why a ${THING} functions, is going to be good at their job. Some people just don't find what their thing is, and surprisingly many are in jobs where they are not even allowed to figure it out.
> As far as I'm concerned, curve grading single courses is just as bad as stack ranking at work.
In my years of schooling, I never saw curve grading resemble "stack ranking" whatsoever.
In stack ranking you have to designate some team members as superior and some as bad. You could have a team of ineffective idiots and be forced to label some of them as high achievers, or a team of superstars and be forced to label some of them as bad. I agree that this is manifestly ridiculous.
If you want to compare grades across semesters you have to either keep the difficulty of the tests constant or adjust the grades to compensate for how difficult the tests were relative to previous semesters.
BTW, I don't know if it's a named fallacy, but saying "there's no rational argument for X" is not a good argument against X. You can't prove a negative so you cannot know if there is not a rational argument for X.
Of course there’s a rational argument to using the distribution of scores to set the grades: If everyone in the class gets half the questions wrong on the final exam, you shouldn’t just fail the whole class! The test was probably just harder than expected
But in that same example, if many of the students get close to 100% on the final by cheating, then the remainder of the class will suffer
No, the proper response to that is still not a curve, it's to identify which block of questions wasn't appropriate and removing those from scoring or turning them into "bonus points" and similar measures.
That way you don't incur any of the (pretty severe) drawbacks of a curve but don't punish students for questions that were badly phrased or weren't properly taught.
Simply removing the offending questions after the fact doesn't solve everything. Students may have fruitlessly wasted a ton of time on those questions, causing their grade in other parts of the test to suffer.
I disagree. How hard it is to pass a class should be an absolute. It shouldn't be easier for one cohort over another just due to the first being, on average, less prepared than the second.
If an exam turns out to be too hard, the teacher can investigate, consult their peers, talk to their students, and decide whether it's worth scaling all grades by a factor to correct for it.
> How hard it is to pass a class should be an absolute.
It should be, but profs and TAs change from semester to semester. The coursework changes also. Also the profs, TAs, and coursework of prerequisites change too
> It shouldn't be easier for one cohort over another just due to the first being, on average, less prepared than the second.
A good professor will have the sense of discretion necessary to know when a class is half-assing on average
In a lot of my classes the average grade on a test could be as low as 30%, with a P100 of 50%. The material was extraordinarily difficult. I'm glad it was, because it stretched me in equally extraordinary ways to do my best. But I don't see how they would have assigned grades without a curve. In most cases I agree with you, but in those classes I wouldn't have dumbed down the tests, and I also wouldn't have accepted score-based grades. The curve felt fairly rational in that situation, and I think the distribution of performance reflected a grade curve well enough. The GPA for the department was depressed relative to others, but the department was also widely regarded and recognized internationally as extremely difficult.
> There is no rational argument for grading people on a class curve
It's an imperfect solution for a legitimate problem.
The counterargument is that it would be rare for a prof/teacher to absolutely nail the correct difficulty for an exam.
If they feel that their students' average exam grade fails to accurately reflect how well the class is learning the material, then some kind of adjustment makes sense... imagine a scenario where you have a classroom full of motivated and engaged students who are showing good understanding of the material, and yet the average exam score is 60%. This strongly suggests professor/teacher has erred and an adjustment is in order so that the scores better reflect actual mastery of the material.
There are of course also a lot of situations where grading on a curve is blatantly unfair and/or simply makes no sense.
This is why I'm generally against grades with more than three levels.
I would like to see a system where almost all (something like 99%?) of students just plain "pass". Then we have the lower statistical anomalies (lowest 0.5%), which fail, and the upper statistical anomalies (highest 0.5%), which get an "excellence"-type grade.
If the average score is too low or too high, that doesn't mean more students fail or perform excellently, that just means the teacher needs to recalibrate.
(Of course, these percentiles ought to be measured over as inclusive a reference class as practical. Ideally across schools and years simultaneously.)
Well, people shouldn't cheat for a whole lot of reasons. But the main take away from this is, in my opinion, that people should not be "graded on a curve".
You somewhat seem to miss the wood for all the trees. The problem is the curve - grading on a curve is inherently unfair and irrelevant. Why should it affect your grade if others in the class cheat/struggle/..?
Any teacher (and school system) worth their salt has by now dropped grading on a curve.
Maybe not a literal curve, but tons of instructors will compensate if an exam ends up being too hard (or too easy). Except the main way they tell whether the test difficulty was off is by looking at the grade distribution which obviously ends up skewed if some people cheat
You should read up on Nash equilibria. In general, the equilibrium only holds with information sharing. In the real world, winning due to a lack of information sharing is called things like "good business." Academia, and cheating in general, is rotten to the core because of ranked grading and curves. They fundamentally incentivize cheating, because of the game-theoretical gain therein.
Ah, cheating! I did a year abroad in California when I was studying computer science, and I remember there was a huge difference between cheating there and cheating at my French school.
In the latter, we were given harder exercises and asked to deliver a working program with some constraints. The program was then tested and graded by a CI and examined by TAs. Usually TAs would get a cheating report for reused bits of code and things that would solve an exercise with techniques far from the students' knowledge, or forbidden functions. TAs would ask you questions about your code and trigger a cheating review if you could not explain why you wrote it that way. It was usually effective for detecting people that didn't write their own exercises. As the exercises were harder than expected for a class and the projects were long and difficult, students were encouraged to talk, discuss and exchange ideas. Ideas sure, code meh.
Then, in the US, exercises were stupid checkbox-style questions graded on a curve. So of course everyone "cheated"... I must confess that I did it too. It was unworthy of my time and attention, as it was just about taking the course material and regurgitating it in different words. Of course, I can't imagine anyone learning anything from this way of working.
Stupid assignments encourage students to cheat. Make them interesting and this problem will go away.
> Stupid assignments encourage students to cheat. Make them interesting and this problem will go away.
and yet
> Usually TAs would get a cheating report for reused bits of code and things that would solve an exercise with techniques far from the students' knowledge, or forbidden functions. TAs would ask you questions about your code and trigger a cheating review if you could not explain why you wrote it that way. It was usually effective for detecting people that didn't write their own exercises.
So both courses have cheating, one possibly has less cheating, and that one has a more effective detection procedure, but cheating happens on both all the same. The problem does not go away with a better course but is - based on your anecdote and not from the presumably better view of the TAs - lessened somewhat.
Perhaps the same course could be run in California and would still attract the same level of cheating as the checkbox style one. Maybe it's simply that a course that appears more difficult to cheat on or is more difficult to cheat on attracts less cheating.
> The problem does not go away with a better course but is - based on your anecdote and not from the presumably better view of the TAs - lessened somewhat.
I was a TA in France, and did a bit of TA-ing unofficially in the US. So, my view is based on both experience, TA and Student.
While there was some cheating happening on both sides, it was practically non-existent in France after a few weeks of school. The abilities required to cheat and get away with it were extremely hard to master, and usually carried much more risk than just completing the exercise. Successful cheating was sometimes rewarded, if the problem solved was worthy enough. You could even get clearance from the professor in advance (not for copying code from StackOverflow, of course, but rather for using forbidden functions or libraries).
Finally, most important projects were done in groups of 2 to 5-6. Individual students deliberately putting a group at risk would get caught extremely easily and face a far worse sentence than just a bad grade: they would be excluded from other students' groups and left to do their projects alone or with dropout students. Finding your way back after this is very, very difficult.
> Finally, most important projects were done in groups of 2 to 5-6
Much less reason to cheat in group work too. It's much easier to just put in minimal effort and let someone else in the group do the work. That's just another form of cheating.
In Germany we say the "Team" in teamwork is an acronym for "Toll, Ein Anderer Macht's" ("great, someone else is doing it"), and especially at university there is a lot of truth to it. Especially so when there is a big difference in motivation with regard to the final grade.
Not to say there aren't a lot of upsides to group work. But stopping people who don't want to engage with the subject isn't one of them.
> Perhaps the same course could be run in California and would still attract the same level of cheating as the checkbox style one.
Actually, that's not a hypothetical scenario: School 42, an offshoot of the school GP described with the same curriculum, has campuses in both Paris and California.
I wonder if the cheating rates differ between the two.
I went to UW Seattle ~15 years ago, and some of the CS classes there were a fucking joke
I got a 2.7 in my CS classes because I had perfect scores on the tests and homework, but the teacher decided to make a huge percentage of your grade consist of quizzes about random factoids from the book that had literally nothing to do with computer science. (Literally: "What color was the elephant on page 12?") The book cost like $200, so I decided to just study from other books and the lectures and eat the loss of GPA
Man, I wish I could remember that professor's name so I could send him some hate mail...
I recently took a CS class at Stanford with an interesting policy on cheating. While cheating almost certainly happened during the course, at the end of the quarter the course staff made a public post allowing any student who cheated to make a private message to the staff admitting they've done so.
If a student admitted to cheating, while they would face academic disciplinary action (i.e. receiving a failing or low grade), they would not be brought up to the administrative office that deals with issues of academic integrity, and therefore would not face consequences like expulsion or being on official academic probation.
However, if a cheating student decided to risk it and not admit their guilt, they were at risk of a potentially even greater degree of punishment. The course staff would run all students' code through a piece of software to detect similarities with each other, as well as with online solutions. Students flagged by this software would then have their code hand-checked by at least one member of course staff, who would make a judgement call as to whether it seemed like cheating.
I found this policy quite interesting. As a former high school teacher, I've certainly encountered teaching in my own classes, and have historically oscillated between taking a very harsh stance and a perhaps overly permissive one.
The one taken by the lecturers of this course offered a "second chance" to cheaters in a way I hadn't seen before.
That sounds great and all, but I honestly have doubts about this software that detects similarities… there are only so many ways to solve the bland questions that professors lift from books; kind of ironic. I'm assuming it's basically doing AST analysis and is no smarter than eliminating things like renamed variables.
They are basically stating that this "software" is 100% accurate. Furthermore, it's then left to the whims of some TAs?
No algorithm can detect cheating unless the number of permutations is very, very large (i.e. odds like being struck by lightning). Maybe one way to offset this would be to use data as the student is entering the solution, but that was never the case for us; we just uploaded the source code to their custom-made Windows app.
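To make the worry concrete: even the "smarter" rename-elimination approach is not much code. Here's a toy sketch (purely hypothetical, I have no idea what the course's tool actually does) using Python's own tokenizer: replace every identifier with a placeholder numbered by first appearance, drop layout and comments, and compare the normalized token streams.

```python
# Toy sketch of rename-resistant similarity checking; illustrative only,
# not the implementation of any real plagiarism detector.
import io
import keyword
import token
import tokenize

def normalize(source):
    """Token strings with identifiers canonicalized by first appearance;
    whitespace, indentation and comments are dropped entirely."""
    names = {}   # original identifier -> canonical placeholder
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == token.NAME and not keyword.iskeyword(tok.string):
            # same placeholder every time the same identifier reappears
            out.append(names.setdefault(tok.string, f"v{len(names)}"))
        elif tok.type in (token.NEWLINE, token.INDENT, token.DEDENT,
                          tokenize.NL, tokenize.COMMENT):
            continue  # ignore formatting-only tokens
        else:
            out.append(tok.string)
    return out

# Two submissions that differ only in naming normalize identically:
a = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s\n"
b = "def add_all(nums):\n    r = 0\n    for n in nums:\n        r += n\n    return r\n"
print(normalize(a) == normalize(b))
```

Of course this only catches the laziest copying; reorder a couple of statements or swap the loop for `sum()` and it sees nothing, which is exactly my point.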
Speaking from experience using similar software on students' assignments: it is often blatantly obvious when cheating is occurring.
To start with, at the undergrad level, most students had fairly distinct coding styles, usually with quirks of not-quite-"proper" coding. Some cheaters had the exact same quirks across multiple students' assignments.
Also, some cheaters had the exact same mistakes in their code, on top of the same code style.
Yes the software picks up people that write correct solutions with perfect syntax, but those are the ones that you just toss out because there isn't any proof there.
The people who get caught cheating generally don't know what correct solutions and good code look like, so they don't understand how obvious it is when they copy-paste their friend's mediocre code.
I agree with you. I run a data science department in a corporation, and when I'm doing code review for a junior, I can tell what was original and what came from somewhere else. Fortunately, in the workplace context that just means trying to get people to paste the SO URL as a comment above the appropriate code block.
Assuming the software detects a similarity between two or more students' submissions, how do you know which students cheated? What if one of the students (the one who actually did the work) had their program stolen/copied somehow (e.g. left their screen open in the lab, or a printout of the code lying around)?
I teach some courses with coding assignments and we just tell the students very clearly and repeatedly, at the beginning of the course and before each submission deadline, that submitting duplicate material means failing. It doesn't matter if A copied from B, B from A, both copied from an external source, or even A stole B's password and downloaded their data. The penalty is the same. We cannot go into such details because we just don't have the means to find out, and some students are amazing at lying with a poker face.
It's a pity to have to fail students sometimes because they failed to secure their accounts and someone stole their code, but they have been warned and hey, securing your stuff is not the worst bitter lesson you can learn if you're going to devote your career to CS, I guess...
Cheater Student enters the lab, turns on the video camera on their phone, walks casually behind other students recording their screens, reviews video for useful information. Other students fail. Seems like a poor outcome that is plausible and unfair to the student whose info was stolen to no fault of their own.
Indeed, it's plausible enough that I've actually caught students trying to do that.
The problem is: what's the realistic alternative? Just letting cheating happen is also unfair (to students who fail while the cheater passes). And finding out what exactly happened is not viable because students lie. We used to try to do that in the past, but the majority of the time all parties involved act outraged and say they wrote the code and don't know what happened. Some students are very good actors, many others aren't, but even when you face the latter, your impression that they are lying is not proof that you can use in a formal evaluation process that would withstand an appeal.
So yes, it can be unfair, but it's the lesser evil among the solutions I know.
On the one hand, as we know from the P vs. NP problem (at least if we assume the majority opinion), explaining a solution is much easier than coming up with it... and even easier if they copy from a good student who not only writes good code, but also documents it.
On the other hand, even if I am very confident that a student didn't write the code because they clearly don't understand it (which is often the case), this is difficult to uphold if the student appeals. For better or for worse, the greater accountability in grading and the availability of appeal processes means that you need to have some kind of objective evidence. "It was written in the rules that duplicate code would not be accepted, and this is clearly duplicate code" is objective. "I questioned both students and I found that this one couldn't correctly explain how the code works, so I'm sure he didn't write it" is not.
Note that I do this kind of questioning routinely (not only when cheating is involved) and take it into account in grades, because it of course makes sense to evaluate comprehension of the code... but outright failing a student on the grounds of an oral interview can easily get a professor into trouble.
> On the one hand, as we know from the P vs. NP problem (at least if we assume the majority opinion), explaining a solution is much easier than coming up with it... and even easier if they copy from a good student who not only writes good code, but also documents it.
You can ask “tricky” questions that someone who understands the material shouldn't have a problem answering, such as “if the problem required you to also do this, how would you change your code?”.
> "I questioned both students and I found that this one couldn't correctly explain how the code works, so I'm sure he didn't write it" is not.
Fair enough. But at least you can give a bad grade for not understanding the course material.
I would let 100 people cheat if it meant I was sure 1 innocent student wasn’t punished unjustly.
People who don't cheat may benefit in the future for not doing so.
I said "may" here because I generally found university education to be useless for myself. Instead, I wish I had met the folks I consider mentors at work earlier in my life.
> I would let 100 people cheat if it meant I was sure 1 innocent student wasn’t punished unjustly.
This makes sense in the justice system, but in the justice system you often can find proof of what happened, so the system still acts as a deterrent even if a fraction of criminals get away with no punishment. In university assignments, most of the time it's practically impossible to find evidence of who copied from whom, so applying that principle would basically mean no enforcement: everyone would be free to cheat and assignments would stop making sense at all.
Also, failing a course is far from such a big deal as going to jail or paying a fine. At least in my country, you can take the course again next year and the impact on your GPA is zero or negligible. You will have an entry in your academic record saying that you failed in the first attempt, but it won't be any different from that of someone who failed due to, e.g., illness.
If the consequences were harsher (e.g. being expelled from the institution, or something like that) then I would agree with you.
When I was a TA checking "Intro to programming" HW assignments, my brain was the similarity check software.
Anyway, when I detected two basically-identical submissions, I would call in both students to my office. I would chide them, explain to them that learning to code happens with your fingers, and that if they don't do it themselves, then even though they might sneak past the TA, they'll just not know programming, and would be stuck in future courses.
Then I would tell them this:
"Look, I have a single assignment here, with a grade, on its own, of X% (out of a total of 100%), and two people. I'm going to let you decide how you want to divide the credit for the assignment among yourselves, and will not second-guess you. Please take a few minutes to talk about it outside and let me know who gets what."
Most times, one person would confess to cheating and one person got their grade. For various reasons I would not report these cases further up the official ladder, and left it at that.
It becomes obvious when you ask them to explain the code. At my university I once overheard a boy and a girl presenting some code "they" had written to a TA. The TA asked them some basic questions on while-loops and function calls. It became obvious that the boy had written all the code and the girl had no clue. So the TA decided that the boy had passed but that the girl had to come back and present the code herself on the next session.
It doesn't matter; both violated academic integrity by letting the copying happen. (Submissions are never stolen.) If you think letting copying happen is less severe, you ask them and rebalance based on the work. Most of the time "they made it together".
Curves are the lesser evil. There are professors who don't give good grades at all. If you select a course run by them, you can't get more than a C.
Meanwhile there are other professors where everyone gets an A easily.
Most students will probably pick the easy professors who give only As, because for them the degree is just a ticket to a job.
In fact, those "tough" professors can have an adverse effect on those who picked the harder route. If you don't get good grades, you will have a lower GPA and that dream company will not even invite you for an interview. Some automated HR system will reject your application. They don't care that you went to a professor who taught you a lot -> they only see the low grade.
Same for scholarships: a tough professor already makes it difficult to get good grades, but if you are graded without a curve, you get a bad grade -> and can lose your scholarship.
Nobody cares about you as a person, or your knowledge, they measure you by your grades.
This is a tragedy of the commons in some ways: professors are supposed to give good grades, otherwise students won't choose them. Those who want to know more are punished for it in multiple ways (first of all, they need to study more, but then they get a lower grade, which means a lower GPA, which can lead to worse job offers, no scholarships, etc.).
If you want to be a "popular" professor, just pass everyone?
On a side note, in those great universities, don't they pass everyone anyway? I think the frontpage had an article some time ago saying that once you get into the Ivy League you will get a B or C even if you are bad; they generally don't kick out students who try to study but aren't particularly good.
Curves wouldn't be needed if every course had an objective list of material that should be learned. But even this is difficult, and not comparable between professors at the same university, not to mention different ones, despite standards and various efforts (not to mention measuring whether students really know the whole list).
How is sharing knowledge "violating academic integrity"? Unless given specific and explicit instructions not to reveal working solutions, sharing your code is literally just "helping" others; it's up to them to either study it and produce their own versions, or just blatantly copy & cheat.
Because each university has university-wide rules forbidding sharing assignment solutions. It is explicitly forbidden even before the course starts, unless the syllabus or the professor says otherwise.
You can't "help" others on their own assignments by giving your solution. You can't receive direct "help" either.
Edit: here's the text of my alma mater:
> Any behavior of individual students by which they make it impossible, or attempt to make it impossible, to make a correct judgment about the knowledge, insight and/or skills of themselves or of other students, in whole or in part, is considered an irregularity that may give rise to an appropriate sanction.
> A special form of such irregularity is plagiarism, i.e. copying, without adequate source reference, the work (ideas, texts, structures, designs, images, plans, codes, ...) of others or previous work of one's own, in an identical or slightly modified form.
In my time in college I helped a lot of fellow students work through a lot of assignments. I sat down with them and helped them to think through the problem and find examples to learn from that weren't full solutions to the assignment. I helped them find difficult bugs in their implementation by pointing them in the right direction or showing them debugging tricks I found helpful.
What I didn't do was show them my implementation or even talk about how I solved it. Yeah, doing it the long way takes a bit more effort, but the result is that the students I helped actually understood the code they submitted and were better equipped to solve the next assignment without help.
I implemented the widely used MOSS algorithm (mentioned by a sibling) for my CS department in my senior year. That algorithm doesn't do AST analysis; it just looks at the plain text in a way that is resistant to most small refactorings. MOSS compares sets of k-grams (strings of k characters) between every pair of projects under test and produces the number of shared k-grams for each pair. On any given assignment in a given semester, there's a baseline amount of similarity that is "normal". You then test for outliers, and that gives you the projects that need closer scrutiny.
On the test data we were given (anonymized assignments from prior semesters together with known public git repos), we never had a false positive. On the flip side, small refactorings like variable renames or method re-ordering still turned up above the "suspicious" threshold because there would be enough remaining matching k-grams to make that pair of projects an outlier.
Our school explicitly did not use the algorithm's numbers as evidence of cheating and did not involve the TAs--the numbers were used only to point the professor in the right direction. We excluded all k-grams that featured in the professor's materials (slides, examples, boilerplate code). It also helped that they only used it on the more complex assignments that should have had unique source code (our test data was a client and server for an Android app).
My sense was that this was a pretty good system. Cheaters stood out in the outliers test by several orders of magnitude, so false positives are extremely unlikely. At the same time, the k-gram approach means that if you actually manage to mangle your project enough that it's not detected as copied, you had to perform refactorings in the process that clearly show you know how the program works--anything less still leaves you above the safe zone of shared k-grams.
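For anyone curious, the core of the pairwise comparison fits in a few lines. This is a deliberately simplified sketch of the k-gram/outlier idea described above, not my actual implementation: real MOSS adds winnowing to select fingerprints, hashes the k-grams, and excludes instructor-provided boilerplate; the window size and threshold here are invented for illustration.

```python
# Simplified MOSS-style comparison: collect every k-character window of
# each submission, then flag pairs whose overlap is an outlier relative
# to the class-wide baseline.
from itertools import combinations

K = 5  # window size; real tools tune this and add winnowing

def kgrams(text, k=K):
    """All k-character windows of the whitespace-stripped source."""
    flat = "".join(text.split())  # resists trivial re-formatting
    return {flat[i:i + k] for i in range(max(len(flat) - k + 1, 0))}

def similarity(a, b):
    """Jaccard overlap of two submissions' k-gram sets."""
    ga, gb = kgrams(a), kgrams(b)
    return len(ga & gb) / len(ga | gb) if (ga or gb) else 0.0

def suspicious_pairs(submissions, factor=2.0):
    """Pairs whose similarity is far above the class-wide mean."""
    pairs = list(combinations(sorted(submissions), 2))
    sims = {p: similarity(submissions[p[0]], submissions[p[1]]) for p in pairs}
    mean = sum(sims.values()) / len(sims)
    return [p for p in pairs if sims[p] > factor * mean]

class_subs = {
    "alice": "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s\n",
    "bob":   "def total(ys):\n    s = 0\n    for y in ys:\n        s += y\n    return s\n",
    "carol": "def product(vals):\n    acc = 1\n    while vals:\n        acc *= vals.pop()\n    return acc\n",
}
print(suspicious_pairs(class_subs))  # the renamed copy stands out
```

Note how a pure variable rename still leaves most k-grams intact, which is why renames alone don't get you under the threshold.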
From doing some cursory research, it appears the software in question is called MOSS (Measure of Software Similarity) and is currently being provided as a service [0].
Since it is intended to be used by instructors and staff, the source is restricted (though "anyone may create a MOSS account"). According to the paper describing how it's used [1], "False positives have never been reported, and all false negatives were quickly traced back to the source, which was either an implementation or a user misunderstanding."
I used something similar when I was a TA 20 years ago and while your assumption seems reasonable, there are actually a lot of different ways to solve even quite simple tasks and most cheating is very obvious on manual inspection.
Yep... If you're going to go through the effort of completely rewriting a piece of code to try and dodge an AST analysis algorithm, you've effectively just done 70% of the work and put your grade/position at the institution on the line. It's not worth it, and so people don't tend to do that. It's the same thing with plagiarism—students could very well resynthesize a stolen work in their own words. It would still be plagiarism, sure, but it's also putting in a large amount of effort while still being risky.
Well, no. It's still plagiarized if you fail to communicate that it isn't your original work. You can't just steal ideas from someone else's paper, even if you rewrite everything. If you rely on another paper for inspiration, you have to cite it. And if a student submitted a paper that was just another paper entirely rephrased, that would not be acceptable in the least, even with the source cited, because the expectation of writing a paper is that you contribute something novel and not just regurgitate someone else's argument.
If the problem is large enough, I do submit that there are multiple (even many) ways of solving it.
I will also say that there’s problems where that is not the case. For example, we were told to write simulators for scheduling schemes (RR, MLFQ). Other than using different data structures (even that’s a bit of a stretch) not sure how much variance there will be.
Using the right tool for the right job is important.
Just above your post, another author posted/cited results of a system that "never produced false positives".
I think the number that author cited is probably correct, but presumably the tool is used in cases where the problems are big enough to warrant it.
The problems we had were way way simpler than anything deserving an acronym. You'd think there was only one way to do it and yet it was not hard to distinguish plagiarism.
I noticed a swap in your prose (still comprehensible), but just realized that cheating and teaching are semi-spoonerisms (swapping the sound order within a single word)... how apropos!
We had the same policy at my uni in Poland.
Admit to cheating without being called out? Depending on the professor's mood, they either allowed you to retake the exam (though the best grade you'd get was the lowest passing one), or you just failed the course and tried again next year.
Either way they had less paperwork, and, well, they wouldn't report that person.
The results of checking against existing solutions and those of other test takers must be taken with strong human judgment. Programming problems such as would be asked in tests are essentially like mathematical formulas/algorithms, and there isn't much variation in how a given formula or algorithm can be implemented.
I don't think these techniques are often applied to problems in tests - there are other, simpler ways of catching cheaters there.
They are much more likely to be applied to homework assignments, where the opportunity for copying is large but the chance of two students producing the exact same >500-1000 line program is slim to none. Perhaps once in a while a copied critical function will go unnoticed, or similarities in a trivial function will be unnecessarily flagged, but this will be relatively rare and quickly discovered in manual review.
There is a lot of syntactic variation possible, both for formulas and algorithms. Even for something as simple as quicksort there is enough natural variation for a class of 30, maybe even 100 (if no references can be used).
Anything more complex and even with references it should be unique.
It's not _just_ trying to be lenient and offer a second chance - it's a way to catch more cheaters. "Turn yourself in and we'll go easy on you... because we might not catch you."
>I knew that my students wanted a second chance, I wasn’t sure how many of them would take it. Part of completing the academic integrity assignment was a tacit admission of cheating, and some students seemed set on not admitting to anything. So, I was thrilled when I received the first completed academic integrity assignment.
>What did the student have to say? There were many full sentences and as I read them I got that feeling again. So, I copied and pasted some sentences into Google, and yup, the student was plagiarizing the academic integrity assignment. Whole swaths of text verbatim copied.
How broken of a person do you have to be to reflexively cheat on a simple assignment intended to give you a second chance after you’ve been caught cheating already? How can that be the first thing you go for? This is really sad. I seriously don’t understand the thinking here.
> How broken of a person do you have to be to reflexively cheat on a simple assignment intended to give you a second chance after you’ve been caught cheating already?
My opinion is that most people only go to college these days because they perceive it as the only path to a good paying job, not because they are terribly interested in the material.
Given that assumption, it's easy for me to understand why people would try to minimize the amount of work they have to do for something they didn't want in the first place.
> My opinion is that most people only go to college these days because they perceive it as the only path to a good paying job, not because they are terribly interested in the material.
I know most people here are referring to the American system and universities, but my experience in Poland is that degrees, especially in computer science, mostly teach you useless, outdated stuff that you will never need to know. Most people here approach a CS degree as something you simply have to do to get a decent job in IT (though it's slowly changing toward not doing a degree at all), or simply because your family expects it. Most universities here pump out diplomas to get paid by the government and industry, so they can pump out more diplomas. Honestly, I haven't met a single employer or even coworker who ever cared whether I had a CS degree. Most of the stuff I learned there is "nice to know it exists" but nothing I've ever used in real-life work, and most professors have clearly never worked in the industry. The best things that came out of my BEng in CS are friends, and the realization that no one cares about me, so I needed to solve issues and learn on my own.
100%. People I know here at a T-20 Southern California school just cheated their way through classes, grinded leetcode through university, and now have high-paying FAANG jobs. It's just a matter of what you want from a university education and why you want a degree, tbh.
The original cheating, maybe. Cheating, doubling down by lying, and then cheating again on the get-out-of-cheating-free assignment seems like it’s only adapted to a future life of white-collar crime.
I could imagine someone seeing this assignment as a formal requirement. And googling "integrity assignment", just like one might google how to fill out some tax related forms. Or perhaps even how people google "birthday wishes".
As a student witnessing the amount of cheating going on, I was always surprised about the noise raised by teachers on it: I always felt that my score was my own, and didn't care about comparing against others.
Perhaps that's why I didn't care?
Another thing is that college is voluntary, and everyone takes the courses for some perceived gain. If it's just a diploma with high GPA, I let them be.
There are also plenty of ways to legitimately score a high grade without really engaging with a course (basically, silly ways of studying just to pass), which in the end is not much different from simply cheating: there was no real engagement with the material. The main difference is fairness, but that's a moral value beyond some random teacher's ability to teach adult students, so I don't see why bother.
The main question I have for the author is whether they would have offered the same get-out-of-trouble alternative syllabus if only 10% of the students had cheated. Basically, how influential was the proportion of students to be failed in their huge investment in reworking the course?
Obviously, they did a bad job with the original syllabus by promoting exactly the behaviour they didn't condone, but one should never discount the thrill humans experience in engaging in risky behaviour (like figuring out ways to cheat, which is sometimes more work than studying, but more thrilling; helping others along the way adds a nice cherry on top).
> I always felt that my score was my own, and didn't care about comparing against others.
Tough to do when you’re sharing a curve with a bunch of cheaters, and the grades matter for your future.
I know in the program I attended I was up against a fair few who were taking cognition enhancing drugs, others who had exam copies from prior years to help them prep, and a lot of people who copied each others’ homework. It was frustrating to be on a curve with them.
I had a few professors who didn’t use curves. It was wonderful.
I think curves are in general unethical due to cheating, and feel they’re a sign that a professor hasn’t done the hard work to really zero in on exactly what knowledge the student is expected to master.
For what it's worth, I feel similar to you, including for sports. Basically, life threatening things should be prohibited, but everything else is free-for-all.
This would eliminate a lot of cheating, and a lot of advantages for those in a good position (better access to new drugs and nutrition, better recovery programs, better training programs — aren't they all unfair at some level?)...
The ultimate goal is to get us to experience the top level combination of talent and effort, both in science/work and otherwise. Getting there is never going to be completely fair (hey, you scored better on it even though you prepared for 3 days and I took 30: tough luck for me, I guess, but the fact you are more talented for that exam is not something I can do anything about).
I've also seen non-cheating people who are excellent at exam taking (great scores) without ever taking anything from the actual material (zero learnings). I've never felt threatened by them either, though maybe I would have if I wanted to pursue an academic career.
There's a big grey area between the life threatening stuff, and the stuff that will slowly mess you up for life. Simple example, but the drugs people take for pain increase your odds of a heart attack if taken habitually. It feels deeply unethical to have "take drugs that will ruin you once your career is over" be the minimum requirement for a career in sports.
Sure, that big grey area also includes paracetamol, alcohol, smoking, even caffeine... all allowed for both students and athletes even though we know of the harm they can produce.
Many sports are life-ruinous by nature (check out those NFL head injury studies), yet we incentivise people to take part in them (by paying a lot for the games).
I always cringe when I hear from pro sportspeople how engaging in sports is promoting a healthy lifestyle: I mean, sure, unless overdone like all pro sports do.
For both academics and the Olympics, though, this becomes an arms race that ends up just hurting those who participate. You wouldn't want an all-natural, 90th-percentile athlete to feel like the only way they can get ahead (or, worse, even just stay where they are) is by taking drugs with dangerous side effects. Similarly, we surely don't want students who are already doing just fine to feel like they need an off-script bottle of adderall in order to not fall behind their peers.
Okay, then I'm going up against people who are willing to risk their health to gain an advantage. Should I be penalized because I'm trying to make sure my equipment's going to last as long as I need it?
I suppose we accept that in sports -- even without doping, if somebody's gonna sacrifice their body to make a play then that's their call -- but in academia, too?
A vitality curve is a performance management practice that calls for individuals to be ranked or rated against their coworkers. It is also called stack ranking, forced ranking, and rank and yank. Pioneered by GE's Jack Welch in the 1980s, it has remained controversial. Numerous companies practice it, but mostly covertly to avoid direct criticism.
It's my opinion that Jack Welch was extremely bad for America. He did everything Deming pointed out didn't work long term and focused on profit above all, and got extremely lucky in the financial sector. As soon as the economy soured, his vaunted techniques failed miserably. Worst of all, he trained hundreds of future leaders to follow that model.
I’d agree that a zero-sum weakness-focused approach is maladaptive.
Trying to consider the other hand reminds me of question I had at an all-hands last year: “If the only raises are annual review based, does the inflation rate mean that everyone else takes an effective pay cut?” The response hedged on HR doing market adjustments. Maybe Welch was just being realistic? Maybe encouraging folks to change employment until they are in a position in which they excel is the better option long term?
I don't know. There's a saying "you have to be cruel to be kind" and maybe I'm too soft to survive.
It's not only that, though: if you reward the top 10% of performers and fire the bottom 10%, say, but don't actually make sure that said performance is due to skill rather than a degree of randomness, you may not improve at all.
You also create an attitude of fear which is not conducive to a productive and adaptable environment in the long term. You can get away with it for a while, but it's not a good principle.
Get rid of the annual review entirely. Active management is better than passive management with guardrails like prodding annual reviews. I've never been motivated by an annual review, nor have I seen it successfully motivate others; have you?
The opposite is true in my experience. People fear it and become less productive as it nears, it takes time that would be better spent on other things and it's not personally rewarding for the manager or the worker. If done poorly, it also lowers team unity and especially doesn't work as a reward because people don't recognize the behavior that led to it. If you reward behavior right after it happens, people associate the behavior with the reward. If you wait six months, they don't. They can intellectually but the team impact is lowered. Not to mention if you're individually evaluating a team based on arbitrary statistics you miss the people who hold everything together. Nobody wants to help their teammate if it will cause their teammate to get a raise instead of them.
Finally, it causes people to game the system instead of improving their work because the work improvement has less impact on their remuneration.
All that to say I don't think the annual review is a good tool.
I agree with your opinion on curves (which aren't even a thing where I'm from), but cheating matters even in the absence of curves.
If GPA is a factor to achieve certain jobs, positions, grants, PhD programs, etc. (which it obviously is, to varying extents depending on countries, but AFAIK it always is) then someone who is inflating their GPA via cheating can basically "steal" your job/PhD/etc., curve or not.
> I think curves are in general unethical due to cheating, and feel they’re a sign that a professor hasn’t done the hard work to really zero in on exactly what knowledge the student is expected to master.
I disagree; there's no objective criteria for what students should be "expected to master" in a particular course. it's inherently relative to what the typical student at that institution is capable of. a class where everyone gets an A is probably a waste of time for everyone involved. it strongly implies that more material could have been covered.
if a whole institution is like this, it gets back to the original problem. when everyone else is graduating with a 4.0, a 3.8 looks a lot like a 2.0 from a more rigorous school.
ideally, the material itself would be designed to get a good distribution of As, Bs, and Cs with a few Ds and Fs for people who didn't try or understand at all. but it's pretty hard to get this exactly right. better to err on the side of making things a little too hard. then the occasional bright student will really shine, and you have enough signal to compress the range into the expected letters at the end.
> there's no objective criteria for what students should be "expected to master" in a particular course
Not exactly, but depending on the course you can get pretty close. In my engineering statics and solids classes, it mapped well with what you'd be expected to do when working as a stress engineer (which is what I worked in after school). In my heat transfer course, it mapped well with the responsibilities of a thermal engineer.
> ideally, the material itself would be designed to get a good distribution of As, Bs, and Cs with a few Ds and Fs for people who didn't try or understand at all. but it's pretty hard to get this exactly right. better to err on the side of making things a little too hard. then the occasional bright student will really shine, and you have enough signal to compress the range into the expected letters at the end.
And that's exactly what my professors who didn't curve managed to do. It was clear they worked very hard at prioritizing the important material, teaching it well, and testing it fairly. It was a breath of fresh air.
But when the average score on an exam in one of my other classes was 22% -- and they weren't looking for the next Einstein, it was just another upper division engineering class, presumably the geniuses would have revealed themselves by that point -- it was clear that the professor wasn't even trying. Throw a bunch of crap on the test and let the curve sort it out, so the professor could get back to what they really wanted to spend their time on: research.
> when everyone else is graduating with a 4.0, a 3.8 looks a lot like a 2.0 from a more rigorous school.
I've never heard of this interpretation before. It seems this is a difference in whether the GPA should represent a student's actual grades on assignments, or the student's overall achievement relative to their peers. It seems the curve exists for the latter ideology - you can't expect every FAANG recruiter to say "well, they got a 2.9 from Georgia Tech, that's better than this 3.4 from Duke", and if they did, you'd probably have pretty arbitrary hirings (although, if it became policy, I can see some Googler making an internal tool to 'normalize' school GPAs). That said, it seems MIT has a "no curves" policy and graduates still manage top-tier GPAs.
I can't speak to what goes on in FAANG recruiting, but I was involved in hiring for a smaller company that recruited heavily from regional schools. we absolutely knew which schools were harder, as most of the younger engineers had graduated recently from that same set of schools. obviously GPA doesn't tell the whole story, and we preferred to decide based on work experience. but for junior hires and especially interns, there's not always a lot of signal to decide on. all things being equal, we would prefer someone with a 3.0 (or even lower, with a good explanation) from the rigorous stem school over someone with a 4.0 from the well-known party school. which is too bad, I'm sure there were some very bright people who went to the "party school", but their grading policy made it very difficult to distinguish them from their peers who barely had a pulse.
Curves typically ignore outliers (because otherwise they would be useless, there’s always that one kid) so unless everyone except you is cheating you’re usually fine.
There are two cases, usually, where curves make sense.
* When the professor doesn’t actually know how hard the exam is because it’s a new test. And since people save tests that’s most classes.
* When the professor is actually trying to find that one kid. This is super common in theoretical maths. The exams are incredibly hard, with the expectation that you won’t finish, and graded on a curve or some other measure like “the test is out of 100 points but there are 200 possible.” But when someone gets a perfect score, you direct them to the PhD program.
> Curves typically ignore outliers (because otherwise they would be useless, there’s always that one kid) so unless everyone except you is cheating you’re usually fine
There's a big range between "that one kid" and "everyone". In some of my courses it'd be easy to believe 15% were cheating in some way. Another comment in this thread put the share at 50%. How's a curve going to deal with that?
> Tough to do when you’re sharing a curve with a bunch of cheaters
Are curves still that common these days? In my time at university, the only classes that got curved were a couple math classes that were curved in the students' favor.
Curves are extremely common in STEM classes today at Berkeley. I think I've only had a handful of upper division classes that were not curved. A lot of the internal discussion on curving vs not curving is based on the fact that building an exam which generates a good distribution is really hard, especially for classes where the understanding is very stratified due to differing backgrounds in the area.
Cheating is still a fairness issue even if the curve is "in your favor". The higher the scores of your classmates, the lower your post-curve score will be.
Sometimes a "curve" can be a test that isn't norm-referenced, but instead just curved up on a straight static curve. For example, 10*sqrt(n) on a very difficult final can provide a grade boost to lower grades. It might be easier to just raise grades than to modify the test, if you see students that you feel should have passed, fail.
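As a rough illustration (not from the comment itself), the 10*sqrt(n) curve mentioned above can be sketched in a couple of lines of Python; the formula and the 100-point scale are as stated, the function name is mine:

```python
import math

def sqrt_curve(raw: float) -> float:
    """Apply the classic 10 * sqrt(n) curve to a raw score out of 100.

    Low scores get a large boost, high scores a small one, and a
    perfect 100 stays at 100.
    """
    return 10 * math.sqrt(raw)

# A raw 36 becomes a 60, a raw 64 becomes an 80, and 100 stays 100.
```

Because 10*sqrt(100) = 100, the top of the scale is fixed while everything below it is pulled upward, which is why this shape is handy for rescuing students who "should have passed" a too-hard final without rewriting the test.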
over a decade ago, the only curved class I took was a calculus class entirely full of kids who went to college at 16, and the guy teaching it seemed to have no issue declaring that this class should contain the same proportion of C's as all his others, regardless of how well we learned calculus :)
In my experience it's to help more people pass. If the natural distribution has 70% as the average, no curve is applied. Above-average outliers (the occasional 100% scores) would be removed from the calculation.
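A pass-more-people curve like the one described, where above-average outliers (the occasional perfect score) are excluded before deciding how much to shift everyone, might look like this sketch; the 70% target comes from the comment, while the function name and exact outlier policy are my assumptions:

```python
def curve_scores(scores, target_mean=70.0, outlier_cutoff=100.0):
    """Shift all scores up until the class mean reaches target_mean.

    Scores at or above outlier_cutoff (e.g. the occasional 100%) are
    excluded when computing the mean, so one standout student doesn't
    suppress the curve for everyone else. If the natural mean already
    meets the target, no curve is applied.
    """
    core = [s for s in scores if s < outlier_cutoff]
    mean = sum(core) / len(core)
    shift = max(0.0, target_mean - mean)
    return [min(100.0, s + shift) for s in scores]
```

For example, with raw scores of 50, 60, 70 and 100, the 100 is ignored, the remaining mean is 60, and everyone is shifted up 10 points (capped at 100).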
How do you know if grades matter for your future? In my first uni year, I had no idea I'd get a job before my studies were over.
If there's an actual correspondence (eg you get next year scholarship for your studies only if your GPA remains above X), that's an incentive to cheat, so there is one issue.
And while curves do suck, it also sucks to be compared with someone with a photographic memory in most exams where that is a very useful skill (even though the exam is not attempting to favour photographic memory). Or some lazy bag who is more talented at something, so it took you 10x more effort to get the same understanding. Basically, you are stacked against so much that cheating is just a small part of all of it.
In short, it sucks being compared to people using everything they can to their advantage. But then again, that's what happens past university too, so it's just real life.
> As a student witnessing the amount of cheating going on, I was always surprised about the noise raised by teachers on it: I always felt that my score was my own, and didn't care about comparing against others.
Years ago a few students in my class were complaining about cheaters. They were frustrated, and one even accused me of missing "obvious" cheaters. It was embarrassing for me, and brought down morale in the class. I have policed exams more aggressively ever since.
In another class, I caught a cheater during an exam (Calculus 2 or 3), and one of his classmates e-mailed thanking me, noting the student cheated his way through the prerequisite class the prior semester.
Oh, it sure does matter. Like it matters to most every student (teenage or college) how they are perceived by their most popular peers. I.e. it's a human trait to care about things that should not really bother us.
You voluntarily agree to abide by the academic integrity rules. If you don't want to do that, you can voluntarily go to a different institution with different rules and standards. The goal of the place is learning not pointscoring and cheating undermines that.
Yea but the pointscoring is literally all that matters, if you change the incentives you’ll see the behavior change with it.
If you take away the aspect of college as “a place to get a credential” you’ll see the cheating stop. Instead, for those credentials, just hold exams like the AP, ACT, SAT, RHCSA, or the 7 actuarial exams. No college required. Whatever you do to pass them is fine.
Then make college totally ungraded except as a mechanism for student feedback. Have tracks for people that just want the credentials (just like the APs) that terminate at the exams. All other courses are just for people who are genuinely interested and confer no status or praise.
Now the incentives are aligned. Outside of the testing areas there is literally zero reason for anyone to cheat, and non-credential classes have to actually be interesting, engaging, and useful to students for anyone to take them.
>If you take away the aspect of college as “a place to get a credential” you’ll see the cheating stop.
No, you won't. People will cheat because it is perceived to be easier than doing the work generally, even if they don't have the external incentive of the score meaning something.
People cheat at casual games of "Call of Duty;" not even ranked.
If you are doing that, why do you care about others not doing it?
I went there for learning, and I never felt that was undermined by others' cheating.
How does cheating undermine learning for non-cheaters in college?
I can see loss of motivation or external pressures (family or scholarship demanding a particular GPA) when you are curve graded, but that means that one cares not only about learning — which is ok, we all care about ranking to some extent, but as long as you recognize that it's a flawed system, you can either focus on that or focus on learning imho. And accepting that someone else cares about grading more than you do (which pushes many into cheating as well).
Edit: Oh, and loss of motivation for the teacher, as brought up by the author in the article.
> If you are doing that, why do you care about others not doing it?
Because there is no way to measure whether the teaching and learning is effective if you just make stuff up. There is no way to do research if you just make stuff up. There is no way to advance human knowledge if you just make stuff up. It's not some convoluted thing, a lot of systems, probably most you encounter in adult life in an industrialized society, depend on essentially voluntary cooperation.
That's not to say the way universities work is somehow optimal but again, as you point out yourself, you don't have participate if you think their methods are too poor to bother with.
I am not sure what type of making stuff up are you referring to? How does that flow from my claim that cheating won't affect learning for those who don't cheat?
From a purely depressingly pragmatic perspective, yes you are correct. But for me it was an opportunity to be immersed in a world of abstract knowledge and the exchange of ideas - an experience that I would not trade for anything.
It is depressing when I hear from people otherwise, I can't imagine missing out on the joy of learning.
Some people simply have better things to do, or don't care about learning a specific thing. For example, I care about learning programming in my CS classes. No, I don't care about learning who died in 1938.
So I think the teacher missed the main point of his own essay:
"The argument was that chat groups have become indispensable tools for students taking courses online during the pandemic. The essay detailed all of the useful info passed around in chats. I totally agreed with this point....Their strategy was to leave the chat before every quiz and midterm so that they couldn’t be there for the cheating. Then they rejoined afterward."
So, in order to be competitive in a class where (whether explicitly curved or not) the difficulty will be adjusted up or down until the "right" portion of students are passing, even a student who wanted to not cheat needed to be in the chat. He made a course in which it required extraordinary efforts to find a way to be able to both pass and not cheat, and then acted surprised that so many students cheated.
Any teacher who can fire up R to process the chat group logs, could have figured out a better system for quizzes and tests, so that it wasn't this hard to be competitive without cheating. Also, if he hasn't ever taken a course on game theory, he should; if he has, he shouldn't have passed.
Not only was the course not "too hard", the author give plenty of indications that the course is actually pretty easy. The quizzes everyone was cheating on were open-book. At one point, a student in the group chat recommended that people look things up in the course textbook:
"The best advice was a student telling everyone they could just go to the website for the textbook, then control-F in the textbook and search for words in the question to find the answers. I mean, ya. It’s in the textbook."
This does not sound like a difficult course to me. I have taken courses where it's rather trivial to ace the online quizzes by looking things up in the textbook (I am taking one right now, in fact). Students who can't even do this and resort to sharing the answers in a group chat must have become incredibly jaded about their education. I fully agree with the author's decision to both sanction them for cheating and give them a second chance to engage with the course material. It was very satisfying to see that all of the effort paid off in the end.
Which is one of the most interesting takeaways for me. What this guy was dealing with was a cultural problem - a toxic culture had developed in the temporary group/space.
His goal is clearly stated: for people to honestly connect with the course material. That's hard if a group like this is poisoned with cynicism. My thought here is that these types of messaging groups/spaces have become an incredibly important aspect of the educational experience and beyond. I think about all the 'toxic' cultures in places I have been in, and how they usually revolve around people trading information and socializing.
"The argument was that chat groups have become indispensable tools for students taking courses online during the pandemic...I totally agreed with this point"
As much as the OP claims they take cheating seriously and see it as an opportunity to engage with students and have them learn, amongst other noble undertakings, I think they just really enjoyed catching people cheating and seeing how far it would go. Sort of sadistic, tbh.
I’d like to add, that supposed “superstar” student who read the chat to learn the other class info and left to not read the cheating… “sent no texts”.
As someone who recently graduated, this student is a freeloader. Comparable to companies who heavily use OSS but never contribute. I expect them to be a terrible coworker.
I had to stop in the middle of the article due to all the annoying animations.
But something that stood out to me is this:
> For consequences, I came up with a three strikes and you are out rule.
and then
> I wasn’t ready to inform them about what was going on until I had processed all of the facts, so I just pressed on with the lectures. My goal was to have all of the forms filled out and emailed before the next midterm. I tried as hard as I could. But, I couldn’t get it done. I had to give the next midterm, and I knew that probably meant a bunch more cheating.
So basically, this professor knew about "low-impact cheating" (cheating on quizzes, where "[t]he quizzes were low stakes"), but instead of saying anything, just kept pushing forward.
I wonder if anyone even told those students up front in clear terms that sharing answers on the quizzes was not allowed. In school we're often told to co-operate in assignment. Where is the line between an assignment and a quiz?
Just letting the whole group slide gradually into cheating territory is a lose-lose strategy.
Give me a break. The professor went above and beyond to be understanding. The mental gymnastics from many on this thread seeking to blame anyone but the students lack of integrity is pathetic. It's from the Boris Johnson school of 'I didn't know it was a party'.
This is a college course, right? Do you really not understand what it means to cheat by the time you get to college? A quiz especially--you should not be sharing the answers to questions with other students. It's just about the most basic understanding of cheating you can imagine.
You also shouldn't be sharing the answers to assignments unless the teacher/professor says you can do so.
And finally, read your school's handbook for a proper definition of cheating.
The professor sets up the expectation they are going to come down hard on everyone but ends up putting in an obscene amount of work to give almost everyone a chance to succeed.
After the halfway point they tilt strongly from justice to mercy and ethics education.
He was curious how far this would go, and enjoyed the undercover-cop thing a little too much.
A course should tell you what works and what doesn't early on, just like a good computer game. If people go down the wrong path early on, it means early on something failed.
Anyway, I don't see anything in the article about what the academic office said when he turned all those reports in. I'm really curious, these kinds of situations tend to make national news when some prof brings a large number of students up on academic honor code violations. (ex: https://www.military.com/daily-news/2021/08/20/least-100-nav...)
I'm firmly of the opinion that the negatives of group study outweigh the positives, and it generally should be banned in college. Banging your head against a problem for hours at a time ensures you understand all the wrong ways of getting to the right answer. This means you have a far more detailed understanding, and there is always the formal academic material/the prof/etc., and the possibility of asking about difficulties in class.
And finally, group chat is small potatoes; I'm fairly certain a fair number of third parties are basically doing assignments/taking tests for students these days. Primarily because I was "mentoring" a student in a subject and found myself in a situation that I had to extract myself from, because it crossed a line of mine.
> I'm firmly of the opinion that the negatives of group study outweigh the positives and generally should be banned in college
That's a... strong opinion.
I'm genuinely at a loss of how you can think that. If studying in groups was harmful enough to be banned by colleges, then shouldn't online classes have been tremendous boons to learning? On a personal level, I can barely straighten out two thoughts in my head unless I have someone I can talk it over with, I couldn't imagine collaborative study being outright banned.
> I'm fairly certain there is a fair bit of 3rd parties basically doing assignments/taking tests these days for students.
I can confirm this.
Personally, I have seen as high as 10%, but those numbers were heavily skewed towards students who could only speak Chinese. Not sure what was up with them.
I do not teach anymore, but I have heard that the situation became much worse in the last few years.
This thread, more than any I have read on Hacker News, shows the greatest variation of opinion, and the least ability for us to reach common agreement upon terms, normative behaviours, standards of judgement or goals. Sadly, education really is a mess. I think it reflects the schisms within wider society.
After 30 years in and out of "academia" I am exasperated and despondent at the state of affairs and what I consider the total bastardisation of the function of education.
The OP's story of rampant, shameless cheating is all too familiar and is simply grist for the mill. It's something to which most professors have grown thick skins. This is the ordinary background against which we have to teach, day in, day out.
Despite being an optimist in so many areas of life, I see little prospect of fixing this without extraordinary and radical changes in the governance, funding and mission of universities.
Kids cheating seems to be a universal behavior; it happens in very different educational systems (US, Europe, Eastern Europe).
I remember when I was a student, cheating didn't seem that much of a crime, to me, or to my peers. We mostly understood that if you cheat you might (or not) have problems later because you just basically wasted time in that course. So it was an assumed risk.
It feels to me that the fight against cheating is basically fighting against (kids) human nature.
Also, snitching on cheaters was unthinkable. And not because of retaliation, but it just felt like very scummy behavior. In a way, like how criminal law in most places doesn't require family members to snitch on other family members who committed a crime.
> It feels to me that the fight against cheating is basically fighting against (kids) human nature.
Yes, you're absolutely right. Wars on symptomatic abstractions like "drugs", "terrorism" and "poverty" are always failures, and become ad hominem: wars on addicts, wars of terror, and wars against the poor.
As surely as thieving loaves of bread is a response to starvation, cheating is just another symptom - as you say, a naturally human response - to unjust and impossible circumstances. That doesn't make it "right", but the context at least offers us some understanding.
People will stop seeing "selfishly gaming the system" as acceptable social behaviour once our systems resume serving the fullest interests of the people, instead of trying to control, manipulate and limit them to the benefit of the few.
I've personally never witnessed any cheating, heard any friends or students even allude to doing so, I never cheated myself and I can't even comprehend how you would cheat at most of the coursework I did in uni in Sweden.
Sure you could copy your friends code, but you still had to present it to a TA who would certainly get suspicious if you couldn't answer to what you had written. Ditto for exams and essays/reports.
Cheating, or elaborating on how to cheat, just wasn't efficient use of my time as a student, disregarding the morality of it. Frankly, it seems to me the way these students are being tested is not appropriate as it's easy to abuse and creates incentives for students to cheat.
> I've personally never witnessed any cheating, heard any friends or students even allude to doing so, I never cheated myself and I can't even comprehend how you would cheat at most of the coursework I did in uni in Sweden.
For most of my life I would've been frightened to even say so, or risk accusations of being a "swot", a "teacher's pet" or, as the Aussies call it, "a tall poppy".
But the attitude that it's "normal" to cheat is common within US/UK/AUS culture now. The truth is I always considered cheating beneath me and was simply _WAY_ ahead of every class I ever took - but you can't say that in "polite" US/UK culture - one cannot be too pious and clean-cut, one must seem a little bit like "everybody else" - even if everybody else is not really like that, if that makes sense.
So actually I've taught at universities in Sweden (Stockholm) and Finland (Helsinki) and noticed similar attitudes to those you describe. But those cultures aren't without dirty hands either. Here's the weird thing: Tall Poppy Syndrome is culturally closest to Jante Law [1], a Scandinavian ethos of "Don't think you're better than anyone else", which is precisely the corrosive culture that means people cheat because everybody else is cheating, and which causes a downward spiral or race to the bottom.
"When you've done your very best
When things turn out unpleasant
When the best of men take bribes
Isn't it the fool who doesn't?"
-- Human League, 1978
It wasn't until I became a professor that I even knew this dynamic existed, but now I see it everywhere, not just in school.
To clarify, I'm not saying I would never cheat. Rather that cheating meant more complications and risks than completing my coursework in the way it was intended.
If we were given take-home multiple choice forms to be judged by I'm certain people would cheat. But I've never seen that, possibly because it feels like a pretty lousy (and lazy) way of evaluating and educating students.
Thanks for asking. I am an international visiting professor of computer science (specialising in DSP, signals and systems). I'd submit that's given me a fairly wide experience of higher education, at least around the north-western hemisphere, over three decades. Enough that I write regular and quite popular features for the Times. Any sincere answer is surely the contents of at least one whole book, so my response here would disappoint you. Instead, please read my more positive commentary here [1] and here [2].
Cheating is a consequence of unfair values that push otherwise morally upright, hard-working people to transgress. Governance determines the values of a system. Therefore rampant cheating may be seen as a symptom of a failure of governance, so improving it would reduce cheating. Hope that helps you see the connection.
I taught two courses to undergrad CS students at Stanford. I think about half of students cheated on problem sets and tests. The answers were all way too uniform and done without correction.
In the first course, there was a student who definitely didn’t cheat and was objectively horrible at all tasks assigned, consuming a frustratingly large portion of my grading time. This student ended up with a massively impressive resume including all the best places, and became the founder of a billion dollar cryptocurrency.
In the second course, the worst student also clearly didn’t cheat, and sometimes I gave extra points for answers that were wrong but the only ones in the class that were invented rather than regurgitated. That student founded one of the most influential companies in modern computer science, and also a 10 billion-dollar cryptocurrency.
I’m not sure what the lesson is. I had honest and very good students that didn’t do much of note, but there were only two honest but awful students, and both had superlative success. Just for reference, I was honest and pushed hard to land somewhere in the low middle, and I’ve had my share of ups and downs, but definitely no competition for the lanterne rouges.
Interesting that they both founded cryptocurrencies. It’s not surprising that academic success is uncorrelated with being able to run a good Ponzi scheme. Bernie Madoff was extremely successful in his time, but he wasn’t a prodigy either.
If this is a Ponzi scheme then I have to give them credit; doing research at a university level in order to swipe a bundle of dollars from me is amazing. There's nothing about making themselves richer at the expense of new users either; for some reason it's all focused on developing a great blockchain that is sustainable and usable for dapps, which ultimately provides utility for end users.
I know what Cardano is because I've heard of it through the Haskell community. I bet most people have not. How do you sift through the thousands of ridiculous coins out there and say to yourself, "Oh, they probably have integrity"?
This continues to be a mystery to me as it seems no one can put themselves in the common man shoes. The common man doesn't care about tech. They don't care about the integrity behind it. They care about whether it can make them money.
List 5 coins I've never heard of and I'd probably say, "ponzi scheme" without hesitation. Do I actually know if it is? No. Does it matter? No. If the majority of shitcoins come off as a ponzi scheme, then why would anyone think something like Cardano isn't?
You don't need to know or trust any underlying technology though. The only thing you need is a basic grasp of Darwinian natural selection.
The "shittier" those "shitcoins" are, the quicker they'll die out. The ones that heavily incentivize early adopters might hang around longer, but they too will inevitably fizzle out if they don't offer any practical innovation.
But if something has a steadily growing user base year after year, maybe it's worth some attention after all?
Or you can just keep screeching "Shitcoins! Shitcoins!". I guess we can all settle for ourselves what form of discourse we consider more constructive and worthwhile.
GP said uncorrelated. I don’t think that’s an effective counterpoint, although GP seems to be indicating anti-correlation. Either way, this is about undergraduate students, and I also don’t think you can claim superlative undergraduate performance for that particular founder. Maybe crypto is where motivated underperformers found a place at the table, because the overperformers are settled in more civilized places. Also, these research papers in cryptocurrency are probably not comparable to serious contemporary work in mainstream journals of pure math and theoretical computer science, but I am personally fascinated by the way that old research is slowly finding its way into applications through the sometimes absurd financialization of these ideas, enough to get scammed and swindled a bit myself.
Guess we'll find out which it is in the next few months. Proof of stake is just as vulnerable to Ponzi, especially if promising any sort of outsized interest returns through middle-men services. The 'scientific, peer-reviewed philosophy' statements appear as marketing, with peer review appearing to be crypto tech conferences (not itself a poor measure), not the classical 'peer-review' format the statements would allude to.
Becoming successful requires a stubborn conviction that you can make your own ideas a reality. Doing well on tests requires the ability to re-enact the ideas of others.
They're both valuable skills, of course, but they should be recognized as being distinct from each other; and measuring "intelligence" as a single one-dimensional quantity is a spectacular failure to do so, IMO.
This idea is also to an extent analogous to the tradeoff between exploration and exploitation in reinforcement learning (AI).
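To make that analogy concrete, here is a minimal epsilon-greedy sketch of the exploration/exploitation tradeoff. Everything in it (the arm means, the epsilon value, the `epsilon_greedy_bandit` helper) is invented for illustration, not taken from any RL library: with probability epsilon the agent gambles on an option it hasn't proven out (exploration), and otherwise it re-enacts whatever has tested best so far (exploitation).

```python
import random

def epsilon_greedy_bandit(true_means, epsilon=0.1, steps=1000, seed=0):
    """Simulate an epsilon-greedy agent on a multi-armed bandit.

    With probability `epsilon` the agent explores (pulls a random arm);
    otherwise it exploits the arm with the highest estimated mean reward.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms          # how many times each arm was pulled
    estimates = [0.0] * n_arms     # running mean reward per arm
    total_reward = 0.0

    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                            # explore
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])   # exploit
        reward = rng.gauss(true_means[arm], 1.0)                   # noisy payoff
        counts[arm] += 1
        # incremental update of the running mean for this arm
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward

    return estimates, counts, total_reward

estimates, counts, total = epsilon_greedy_bandit([0.2, 0.5, 0.9])
```

Set epsilon too low and the agent may lock onto a mediocre arm forever; set it too high and it wastes pulls on arms it already knows are bad. That tension is the tradeoff in question.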
I’ve known a fair number of very successful people, and the strongest theme among them is ignorance of the challenges they had committed themselves to, perhaps even a stubborn inability to heed warnings that there is yet another layer of the onion after years of peeling. I really don’t think it’s some kind of dreamer attitude or deceptive nature; maybe just blissful ignorance. In most cases it starts with a simple idea that everybody had, e.g. a CS50 class project that everybody else did, but they all saw the technical and legal and market and competitive challenges, and most importantly and predictably failed to answer the biggest question, “why me?”. Somehow the person with the worst answer to that question is always the one who fails to ask it.

Despite this observation, none of it has changed the way I see things in my real life. I have read that the parasympathetic nervous system causes people to pay attention to details and become overwhelmed by risks; chronic pain or fear or lack of sleep or overtraining or infection can do that. I recently had a lidocaine injection into a painful joint deformity, and for the rest of that day I experienced astonishing positivity. So something tells me it’s not really all that complicated, but simple things like happiness and good health and relationships are far rarer than we would expect, at least among those with curiosity for difficult subjects.
Perhaps that shows that these students figured out they're not so great at computer science, but ended up knowing a lot of people who were - and how to use their skills to make a small fortune.
You can buy problem sets and answers online, and I suspect that's one way people cheat, especially on homework. A lot of schools reuse textbook problem sets or draw from a bank of questions for problem sets and exams, especially in larger classes, and people can buy all the answers online and find the ones they need. The shift online during the Covid pandemic really accelerated this, but even before that it was a grey area, since it could pass as just a form of studying.
It's sad, because a lot of those classes are curved, so people who don't cheat are at a disadvantage. Despite putting in more effort, and maybe even knowing more than other students, they can end up at the bottom of the curve when they might actually have fared better under different conditions.
As someone who studied pedagogy for years and quit due to an immense frustration with exactly this — how broken the system is — I would encourage you to entertain the thought that maybe you, as a person who is in almost all cases not a teacher, nor someone with any experience apart from once having been a student, do not have a good understanding of how exactly this system should be fixed, and that it’s not broken for fun but because there are some very difficult unresolved issues.
People love to rant about how bad tests are. “We just study for the tests” and so on. And yet this complaint seems to be international. Curious, isn’t it, how all these systems seem to fail in the same way?
In the case of testing it’s because you choose to focus on the obviously bad thing (current state of testing) rather than the very complex and difficult question behind it: HOW do you measure knowledge? And when you decide how, how do you scale it?
These are very hard questions, and it’s frustrating to read the phrase “we need to fix the system”, because yes, obviously we do. But agreeing that things are bad isn’t the hard part, and input from people who have never worked in the field is of pretty limited value in resolving the hard part; it will do little more than annoy teachers even further.
So what’s the solution then? Well, maybe we should start by rolling back this common conception that when it comes to schools, everyone’s opinion matters an equal amount, and then listen to the teachers and academics.
Cynically, this will never happen, because reforms to combat educational issues in any democratic society usually take more than five election cycles to show obvious results (and when the bad results start stacking up, the current leaders will take the flak regardless).