- Data lab formally verifies all of the submissions and gives you an exact failing test case if the submission isn't correct.
- Malloc lab is run competitively. You don't need to compete to do well, but you do if you want to do great. And the competition drives students to great feats. I know people who rewrote their inner loops in assembly to get better perf. Another built splay trees to get lower fragmentation. They both had comfortable A's in the class at the time.
- The automated grading of assignments meant that the class could scale to many hundreds of students while the TAs could dedicate their time to office hours. Turnaround time for exams was sub-one-day.
- The instructors are wonderful (and well dressed!) people and they ran the class expertly.
What's the current status on accurately grading mistakes? From the exams I remember (Fall 2015), the autograder handled perfect answers well, but gave nearly correct (yet still wrong) answers the same marks as complete gibberish. I felt that unless I had the answers exactly right, it was hard to demonstrate my knowledge of the topic.
In retrospect, I think it was crucially important for students to learn the price of their mistakes. Most of my previous classmates are reputable engineers now.
I was a TA for 1911 (I think?) around 2008-2009 and don't recall any automated grading system. It would have been so nice too, since labs ended up with students struggling to complete the lab, get it graded, and ask questions about lecture material they hadn't yet understood.
The grading would inevitably take up almost all the lab...
We didn't have automated grading for the labs then either, but we did for all the assignments. Still manual marking on style though, which to this day I have complex feelings about.
With style I always felt it was easy to say whether it was good or bad, but sometimes hard to justify; 'good style' often just means 'I can follow this easily as I read it', and that becomes subjective depending on how you like to read code.
Even so, it was always obvious when a student had spent time on their program, and gone back over the code to rewrite or clean it up - those tended to get the good style marks.
>- The automated grading of assignments meant that the class could scale to many hundreds of students
When you have things like this, why are we paying so much?
With simple technology and social organization, you can imagine ways to scale some version of the mythical social thing to much larger numbers of students, available to anyone who will use it sincerely — not gated by a lottery of acceptance and by whether (or how) you can manage the huge tuition, possibly taking on crippling debt for the rest of your life.
I look forward to my degrees becoming officially worthless, with everyone having the same or better education easily available to them. We're already getting there, but we need to keep going in this direction, and to dispense with pretense. Things like better scalable feedback help.
Lots of degrees have already become worthless, while tuition fees keep going up...
One student asked about security - and we got into a great discussion about permissions and what havoc he could and couldn't pull off with the restricted user account. I encouraged him to figure out how to write a fork bomb - which he got working after the test. He was nervous about it because he didn't want to get in trouble. With some reassurances he dove into API documentation and got it working. My computer totally died and needed a reboot. It was a great little teaching opportunity that I'm glad I didn't pass up.
(in my defense you probably should have ulimits set on a server where students are going to be logging in and working...)
Apparently some student set up a mmorpg server and then made some enemies...
If my laptop were fully protected from that attack, it wouldn't have been as much fun for my student to wreck my machine in front of everyone. And so he might not have done it, and my class would have missed out on a beautiful demonstration and discussion of practical security engineering.
I made the mistake of getting the Global edition, because it costs considerably less and I couldn't afford the North American one. It was only afterwards that I checked out the book's site, where the authors mention that the Global edition is chock full of errors.
I don't blame the authors, nor even the people responsible for 'the generation of a different set of practice and homework problems'. I can understand printing the book in B&W, reducing paper quality, and publishing it as a paperback to cut costs, but it's baffling that the publisher compromises on the quality of the content itself.
Amazon is full of similar 'PSAs' about not buying the Global edition.
If you want some exercises to help you learn the foundations of computer systems, I honestly cannot think of a better resource than this!
Also worth noting the plagiarism detector actually did its job, at least in some of the most egregious cases that I heard of.
Dockerfiles might be helpful and easy to keep updated. Alpine Linux or just busybox are probably sufficient?
The instructor setup could extend FROM the assignment image and run a few tests with e.g. testinfra (pytest).
You can also test code written in C with gtest.
I haven't read through all of the materials: are there suggested (automated) fuzzing tools? Would OSS-Fuzz cover this?
Are there references to CWE and/or the SEI CERT C Coding Standard rules?
"How could we have changed our development process to catch these bugs/vulns before release?"
"If we have 100% [...] test coverage, would that mean we've prevented these vulns?"
What about 200%?
All thanks to quantum computing :)
(Even code with 100% branch coverage may have common weaknesses like those that these (great) labs have students exploit)
For the bomb one, are there protections for taking the binary and running it in an isolated environment?
Running it through a debugger is easy enough that this would be sorta unnecessary.
213, man. It's a killer.