Hacker News new | past | comments | ask | show | jobs | submit login

This is an important problem. As part of interviewing processes, I have several times tried offering "take home work challenges". I had to stop after detecting plagiarism in about 30-40% of the cases across several runs of the experiment. The risk of plagiarism is real for any problem that's well known enough that the problem and solution can appear online, and detecting plagiarism is not always easy.



Wait, why'd you stop? Isn't this a great way to filter out those 30-40% of people?


I like this response. I think that one of the best things about a take-home challenge is that during the interview, you can then say, "The way you implemented this function was interesting. Let's talk about why you did it that way." If he knows what he's doing (or made a weird solution), you get a great glimpse into his thought process. If he plagiarized, then he's garbage.

English teachers do the same thing for detecting plagiarism. Bring the kid into your office and talk about the essay. If he's completely clueless, someone else wrote it for him.


It's a neat idea, but ultimately, worrying about whether someone has cheated feels like a distraction. It suggests a test that isn't repeatable. I'm only confident in my ability to detect obvious plagiarism. Subtle plagiarism can take varying forms, such as reading an analysis of the problem and its solutions. Some people who plagiarize will pass a thorough Q&A about their code, because they fully understood the explanation of the solution, but they're getting an unfair advantage over candidates who worked out a solution themselves from scratch.

Overall, I get more value from tests that are constructed so that I can learn positive things about the candidate - as many opportunities as possible for the candidate to distinguish themselves. If the only reason for a particular approach was to provide the opportunity for immoral candidates to weed themselves out by committing obvious plagiarism (negative data), then there's probably a better approach that tells me more about the candidate per unit time.

If I were going to continue, I would use a problem that is (1) more representative of the actual work being done by the team, less of a puzzle, and (2) custom-designed for the team or company, not a preexisting or well-known problem. (Even candidates who don't cheat can have an unfair advantage on a well-known problem if they have coincidentally encountered it before! Another reason to use unique questions.)


> Some people who plagiarize will pass a thorough Q&A about their code, because they fully understood the explanation of the solution, but are getting an unfair advantage over candidates who worked out a solution themselves from scratch.

This reminds me of the debate over whether performance-enhancing drugs should be allowed in mathematics. Why do you think it's so important for the candidate to personally invent every aspect of their solution? What if you just told people that it's ok to use external resources to solve the problem?

A class might give exams in any of these ways:

- exams only happen in class, where everyone can notionally be supervised

- exams are take-home, but you can't read the textbook while you're taking one

- exams are take-home, and you're free to read the textbook

There's cheating under all of those models, including the first one which takes the form that it does specifically to prevent cheating. The implicit goal (for the students) of model 1 is to make sure they've internalized whatever is being taught. The implicit goal of model 3 is to make sure that, even if they haven't internalized the material, they're capable of applying it. The implicit goal of model 2 is to make sure they'll comply with arbitrary, unenforceable demands (in this context, usually called "the Honor Code"). That might make sense if you're hiring a cashier -- but is it really your first priority?


At Princeton University, faculty members are not allowed to proctor in-class exams. (See the top of page 2 of https://registrar.princeton.edu/faculty-services/Conduct_of_... for a reference.)

Do you feel similarly that the implicit goal of this model is "to make sure they'll comply with arbitrary, unenforceable demands" and still not to test internalization of the material?


This is to my (1) as my (2) is to my (3). The applicable standard is even called "the Honor Code". I don't see why you think I'll see a difference. It's quite clear that making sure (or emphasizing that) the students are The Right Sort Of People is an explicit goal of the Princeton policy; see the final sentence of the relevant section of the document you linked. ("STUDENTS MUST WRITE AND SIGN THE HONOR PLEDGE IN FULL ON THE COMPLETED EXAMINATION PAPERS", caps original.)


It's less important than it sounds for a bunch of reasons having both to do with the actual "game" we are building and with our business model. It's not like we spit out a number for every player, and everyone with a number better than X gets a job offer.



