
I have yet to hear anyone propose a workable solution to the ultimate problem of remote assessment in CS: hiring someone to just do your work for you. Everyone keeps saying "oh, just make it project-based," like I didn't have a student last fall who hired someone to do his final project. I only detected him because the person he hired sucked and left breadcrumbs; how many students did I not detect because they hired competent people?


Some companies, like Pinterest, use systems like "lytmus" to prevent cheating. It's a monitored VM box where you do the entire assignment inside the GUI VM. I'm sure certain trust factor guarantees prevent you from outsourcing the assignment overseas.

All that said, it's an awful system and I ragequit the take-home midway through. It's buggy, slow, lacks useful hotkeys, and basically requires you to become accustomed to a wiped Linux box without any of the tools you would typically use. Sure, you can install them - but it's a timed assessment, for crying out loud.

Finally, I suppose nothing stops you from logging in with your credentials on your authorized machine and then physically handing that machine to a paid agent.


This is the same reason I offer candidates remote screen-share-based technical assessments. Part of what makes an effective engineer is mastery over tools: by putting you into Coderpad or HackerRank-type tools, I'm handicapping you.

Not everyone opts for that (maybe their local environment is messy, or they only have a locked down work device, etc), but I always make it an option.

I've learned many things over the years watching other experts in their local environments as an interviewer. New packages, new shortcuts, new helper apps, ...


Ha.

I'm a Spacemacs user and I had to do the remote interview for my current gig in Google Docs. That was a bit frustrating.


> I'm sure certain trust factor guarantees prevent you from outsourcing the assignment overseas.

Who said anything about overseas? Hire your classmate to sit down at your own PC to do it. That's the Hard Problem.


I heard some companies, possibly only in India, required a webcam to be on the whole time and for you to rotate it around the room beforehand.


Wilfrid Laurier University in Canada is trying to make the students do this for exams this year.


Yeah, and this stops me from pre-recording it how? I have a USB capture card that appears to the host machine just as a UVC webcam.


When I took my RedHat RHCSA exam, the proctor was a live human, and I needed to move the webcam in time with their instructions. It was a basic around-the-room pass, but it covered anywhere that I could have stashed materials with which to cheat. From there, even if I did have external materials, it would have been pretty obvious that I was using them because my eyes would be focused offscreen for an extended period of time.

Just the knowledge that another human being was proctoring the exam made the idea of attempting to cheat pretty far-fetched. My body language would have given it away, and it wouldn't be easy to transition smoothly from the interactive setup to some kind of pre-recorded loop without it being easily detected. The kind of person who could successfully pull that off... well, could also probably pass the test on their own merits.


I suspect people are making this more complex than it needs to be. For most of these tests, the difference between a good score and a poor one is just a few questions.

So, the first step is just to actually take the test. The cheating bit would be having someone monitor what you're doing via a reasonably hi-res hidden camera pointed at your screen. Then all you need is a channel for that person to communicate with you. Be that a tiny Bluetooth hearing aid in one ear, a phone/screen taped out of sight on your leg, or something on your existing screen. Basically, somewhere not visible to any cameras monitoring you.

So, when the person monitoring you detects you have answered something incorrectly, they send you something to the effect of "question #5's correct answer is Y because A, B, C". Then later you get a moment of insight and go back and fix the problem. In theory, unless you're completely clueless, the difference between a 2 and a 4 is just a couple of questions, so this would only have to happen a couple of times during the whole test.

Again, unless you're completely clueless, if the proctor notices you fixing an incorrect answer and asks "why did you just change your answer?", you give them something reasonable-sounding. Heck, for some of them it might not even have to be the right answer; it's completely possible for someone to get the right answer the wrong way in a multiple-choice section. AKA, "I guessed on that one the first time, and A & B seemed far-fetched so I just picked C over D, but then later I questioned my assumption and decided it was B & C that seemed far-fetched, so I picked the remaining answer, which was D rather than A".

Anyway, I'm not sure the screen is needed. For a test that is 100% multiple choice, a buzzer in your shoe would be enough if it went off whenever you selected a wrong answer, and then pulsed a code to indicate the right answer.
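
If you want that encoding concrete: a hypothetical sketch (all names invented, purely illustrative) of the kind of pulse code such a buzzer could use:

    # Hypothetical pulse code for the shoe-buzzer idea above: a long
    # buzz flags a wrong selection, then N short pulses index the
    # correct choice (A=1 ... E=5). Purely illustrative.
    ANSWER_PULSES = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}

    def encode_correction(correct_choice: str) -> list:
        """Buzz sequence: one LONG, then one SHORT per answer index."""
        return ["LONG"] + ["SHORT"] * ANSWER_PULSES[correct_choice]

    print(encode_correction("C"))  # ['LONG', 'SHORT', 'SHORT', 'SHORT']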


Did they ask for a photo of your laptop? Particularly behind the lid? That’s the one place your webcam can’t see so it seems like it would be the ideal spot to hide or mount a cheat sheet.


Ah, no but the exam was taken at a kiosk, not on my personal machine. I had only a few minutes in the room to get myself situated. The camera was USB, so I moved that around while the laptop remained stationary.


Could you have hidden a cable to another monitor/keyboard in another room?


Ask a few questions, including at least one that's unpredictable.


"Touch your left cheek. Now show me the ceiling. Smile at the camera. Blink three times quickly."

If you can pre-record stuff like that without knowing what the sequence of requested events will be, I'm impressed.


Oooh, it would be cool to deepfake oneself on this. On an intentionally downsampled video feed it could be quite convincing.


While I would agree that the environment is torture to use as a general-purpose screening tool, I’m sure there are jobs where the ability to get work done in a base Linux install is an asset :)


Wow, when did Pinterest implement that? I don't remember anything like that when I interviewed with them, but it was a couple years ago.


2017 through early 2018. I think (and hope) that they've changed the process.


Ah, okay, it was late 2018 when I interviewed there. I guess they got rid of it since.


When your students submit the project, have the landing page give them 10-15 minutes to make one tiny change and resubmit. Pick something that would be trivial to do in seconds for a student who did his own work, but would cause a lot of stress for someone who outsourced.

Save the 1:1 interviews for students whose original program was great but whose modification went down in flames and you'll likely be talking to your outsourcers, plus identifying students who maybe could use some extra help even if they did everything themselves.
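
For what it's worth, a minimal sketch of how that timed-change gate might work, assuming a submission portal you control; the change list and function name are invented for illustration:

    import random, time

    # Hypothetical sketch of the timed post-submission change described
    # above; TRIVIAL_CHANGES and the portal hooks are invented examples.
    TRIVIAL_CHANGES = [
        "Rename the output file to results_v2.txt",
        "Print the summary in reverse order",
        "Change the default port the server listens on",
    ]

    def issue_change_task(student_id: str, window_minutes: int = 15) -> dict:
        """Pick one trivial change and record the resubmission deadline."""
        return {
            "student": student_id,
            "change": random.choice(TRIVIAL_CHANGES),
            "deadline": time.time() + window_minutes * 60,
        }

    # A real portal would persist this and display it on the landing
    # page the moment the original upload completes.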


Are you prepared, on the basis of a student not successfully making the change, to fail that student and/or have them expelled from the university? Sure, you can invite them in for an interview, but what happens if the student just denies any accusations?

I am also skeptical that you could find appropriately trivial changes to request. Students, and particularly beginner students, may write very unconventional code that is not as easily modified as a well-thought-out solution.


If it's something like modifying a string literal, then I can't see the problem. If the program is too obtuse to permit that, then that's a major concern on its own.


If it's just modifying a string literal, I suspect most would just search for that string and have little trouble even if the work is outsourced. Perhaps within a certain set of requirements there are modifications you could ask for that are not trivial and could be done in fifteen minutes, but I think this is pretty hard in the general case and depends a lot on code quality.


I guess you could take those students and review their cases personally by talking to them.


This will work for one semester. The next semester everyone will know about it and keep their outsourcers on staff till after grades are given.


> I still have yet to hear anyone propose a workable solution for the ultimate problem of remote assessment in CS

What about 1-on-1 interviews? Ask people the questions face-to-face, ask them to talk you through their answers. That's how advanced degrees like PhDs are assessed.


This is also what my team does for professional interviews. We give them a take-home test with some simple problems, but we actually care much more about how they talk through their solution with us than the solution itself. We don't even care if they copy/pasted stuff from Stack Overflow.

What we care about is that they can demonstrate they understand the problem and solution. The code is just the starting point for that conversation.

We have hired people who submitted failing solutions because they were able to think through the problem on the spot when we told them it failed and asked why.


I'm 1-on-1 interviewing my 59 Algorithms students this semester. 20 minute interviews, and I have 2.5 TAs to help offset the workload. Starts on Wednesday, I'm absolutely dreading it. This would never scale to my 180 Intro students that I'll be getting this fall.


Recruit proctors from big companies.


What are the 59 algorithms?


59 students of algorithms, not students of 59 algorithms, I would guess.


I would definitely take a course entitled "59 Algorithms". What a fantastic idea for a second course in algorithms! Looking at 59 of the most important algorithms (in the opinion of the professor). It's even more intriguing since it's not a round number, so you know the prof didn't add a bunch of filler just to get a nice number.


"59 Algorithms" would be a fantastic book title. I'd buy it.


Yes, thank you for the clarification :)


In my experience, this is definitely the best way to evaluate how competent someone is in an area. If they can easily hold a casual conversation and express / defend some sensible opinions about an area, it's likely they know what they're talking about. Obviously, you should ask technical questions in the interview, but it's the little things in the conversation that say a lot. Unfortunately, this approach doesn't scale to the size of the AP Exams.

Dijkstra exclusively gave oral final exams on a board. Allegedly, he stopped one of his former students after the first question and told him something to the effect of "you clearly know what you are doing by how you answered that, so you'll get an A, but your handwriting is atrocious." He spent the rest of the exam time making the student practice penmanship.


Dijkstra - not a professor I would want to emulate.

I'm unconvinced that oral exams provide the most secure and consistent form of assessment. I'm not even sure how to evaluate such a claim, though, so it may have to stand as an opinion.


There are entertainers streaming with appearance-modifier and voice-modifier filters. You can't know you are interviewing the person you think you are.

I don't think there is an off-the-shelf product for cheaters, but it's only a question of time. Especially now that I mentioned it.


I think ideally you should know your students well enough to be able to judge whether they're giving you answers that correspond to their ability and their own take on the topic. If you've been tutoring them up to this point in the year, you should know them pretty well.


I don't know if you have been to college recently, but I had a handful of intro classes with 200+ students. I never spoke to those professors once.


Those aren't classes. They are class theater.

They probably failed even by simple arithmetic: the time required to make a minimally good-faith effort at applying a grading rubric is greater than the time a teaching assistant is paid to "assist" in the grading.

Your teaching assistants almost certainly scaled back their time grading your assignments to fit their allotted hours. If any of those intro classes fulfilled a writing credit you can probably start with available undergrad writing credit requirements, figure out how many hours your TA got paid, and look back at the number of students for a given TA to figure out just how much money you and/or your parents lost on the deal.


> figure out just how much money you and/or your parents lost on the deal.

If you consider college tuition to be the money you pay to be educated, you're definitely right. Personally, I think of college as something more like a license or certification.

I learned twice as much at my part-time programming job as I did in class (but classes were instrumental in getting me through technical interviews). But once I had my degree, I instantly had multiple 100k+/yr offers in a moderate-COL area. For me, that is a good deal (but still a huge hassle).


> If you consider college tuition to be the money you pay to be educated, you're definitely right.

If I think of class theater in an education that led me to 100k+/yr, that is a good deal.

If I think of class theater in a certification that led me to 100k+/yr, that is a good deal.

That it is a good deal doesn't change the fact that class theater is a shady practice.

It's also detrimental. But since we don't have the tech to do speed runs against the versions of ourselves who took non-shady intro classes it can be easy to shrug.


Sadly, certification is more or less our job now in higher education. It wasn't how the system was designed, and it isn't what most faculty think about. But it's becoming an accurate description of what we do.

And I can't do that without proper assessments!


Yes, you would hope so in smaller, more specialised classes. However, in a ~200+ person class, you just won’t have that kind of relationship with each student. Additionally, this completely breaks the anonymous marking that many institutions strive for.


1on1's take too long and don't InternetScale^tm


This is how all exams at the university level used to be conducted. Not really practical, however, when you have large class sizes.


It's definitely a problem, even on CollegeBoard's current platform. Normal AP exams/SATs require photo ID validation at the exam site, and are run by school admins who will notice if someone is out of place. I work as a tutor on Wyzant and have been asked to "tutor" during the exam as well as to just do CS Principles students' portfolio assignments.

While some might argue my approach is too aggressive, I report every account that messages me asking to cheat to Wyzant as well as the appropriate organization. My goal is to make it simply not worth the attempt to cheat (obviously only if they explicitly ask; I don't report when it's a possible miscommunication). Just about all of them have had their accounts deactivated.

If technology access could be assumed, I don't think it would be unreasonable to ask students to upload a picture of themselves with their photo ID, as well as some sort of validity check, like a unique code only provided to the device they're taking the exam on. Obviously, that can't be implemented because of equity issues. Ideally, a secured assessment would be done on something similar to Pearson Vue, ProctorU, or the multitude of other online proctoring services. Unfortunately, that's not an option due to both scale and technology access (not everyone has a compatible device, I'm sure many students are taking the exam on their phones).
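
For the validity-check part, here's a hypothetical sketch of how a per-device code could be derived server-side; the secret, the id formats, and the function name are all invented:

    import hashlib, hmac

    # Hypothetical sketch of the per-device validity code floated above:
    # derive a short code from a server secret plus student and device
    # identifiers, so a code issued to one device is useless elsewhere.
    # SERVER_SECRET and the id formats are invented.
    SERVER_SECRET = b"rotate-this-per-exam"

    def device_code(student_id: str, device_id: str) -> str:
        msg = f"{student_id}:{device_id}".encode()
        digest = hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()
        return digest[:8].upper()  # short enough to read off the screen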


You could make them give a video presentation about their project to show that they actually know how it works; not foolproof, but it helps. I think it is pretty much impossible to completely remove that possibility, though. I've had lots of courses where the majority of my mark is assignments and projects, and I'm sure some students pay others to do it for them; the professors just accept that as a possibility. I don't think it is worth providing a worse experience to the majority of honest students to prevent a small minority of dishonest students from cheating.


Do you have them upload the source code of their projects? If so, after the upload could you display the project’s file tree along with a last minute change their hypothetical client has requested and give them a minute or two to select the file(s) that would need a pull request in order to make the change? They don’t even need to make the change, just identify where the change would happen.

Someone who wrote the code themselves will know right away and someone who purchased a project won’t have enough time to sift through the code to figure out the answer.
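
A rough sketch of how that prompt could be generated, assuming submissions land in a directory the grader controls; everything here is invented for illustration:

    import pathlib

    # Hypothetical sketch of the "which file would the change touch?"
    # prompt described above: render the submitted project's tree and
    # pose the client request. The portal and the timer are assumed.
    def render_tree(root: str) -> str:
        base = pathlib.Path(root)
        lines = []
        for path in sorted(base.rglob("*")):
            depth = len(path.relative_to(base).parts) - 1
            lines.append("  " * depth + path.name)
        return "\n".join(lines)

    # Example flow: show render_tree("submission/"), then "Client
    # request: errors should also be logged to a file. Which file(s)
    # would your pull request touch? You have two minutes."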


Well, it's not an easy suggestion, but certainly an interesting one. I'm going to think more about this one. Thanks for providing a new thought on a hard problem!


The problem reminds me of that old story about the group of kids who missed their final exam with a flat tire. If you can figure out the modern CS equivalent(s) of asking your students “which tire” about their project there’s probably an easier implementation. Good luck!


An interesting way to assess your students, if you have time and a reasonable amount of students, is to do what the IB organisation required my Spanish teacher to do: a one-on-one oral exam with presentation and follow-up questions from the teacher, all recorded and sent to the IBO grading board.


I'm 1-on-1 interviewing my 59 Algorithms students this semester. 20 minute interviews, and I have 2.5 TAs to help offset the workload. Starts on Wednesday, I'm absolutely dreading it. This would never scale to my 180 Intro students that I'll be getting this fall.


But doesn't the whole CS 101 model really fail to scale in general anyway? I remember code submissions being run through some anti-plagiarism system back in the '90s.
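
Tools like Moss work on k-gram fingerprints; a rough sketch of the core idea (not any particular tool's actual implementation, which would normalise identifiers and use winnowing):

    import hashlib

    # Rough sketch of k-gram fingerprinting for similarity detection:
    # hash overlapping k-character windows of the normalised source and
    # compare the fingerprint sets with Jaccard overlap.
    def fingerprints(source: str, k: int = 5) -> set:
        text = "".join(source.split())  # crude whitespace normalisation
        return {
            hashlib.md5(text[i:i + k].encode()).hexdigest()
            for i in range(max(0, len(text) - k + 1))
        }

    def similarity(a: str, b: str) -> float:
        fa, fb = fingerprints(a), fingerprints(b)
        return len(fa & fb) / max(1, len(fa | fb))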


I like to think the curriculum I'd been developing for the past few years was scaling pretty well. Most of the technology is reaching enough stability that I can give a ton of automated feedback without killing myself, I had a pretty low DFW rate, most students could complete my final programming problems in a proctored exam setting, and evaluations came back high. I wasn't happy with my project rubric scores, but I think I was on the way to progress with that, and it was partially just too-high expectations (I just want students to test and decompose more). I'm interested in how we can scale introductory computing experiences, and I think I was on a good path before they yanked my proctored exams out from under me :)


The only way is in-person student code review or in-class projects. You could go over the code with each student and ask pointed questions that can only be answered if the student wrote the code themselves. But I'm not sure how feasible that, or the in-class project, is for most teachers.

Also, anyone remember the story of the programmer who subcontracted his work to some offshore freelancer for a fraction of his salary? He just sat at his cubicle goofing off all day while the freelancer did all the work.


Could something like pair programming work to reduce the risk of hiring out the whole project?


Reduces the risk, but doesn't prevent it.



