
Launch HN: Peergrade (YC S17) – Student to Student Feedback - utdiscant
I am David of Peergrade (https://www.peergrade.io). Peergrade is a tool for educators to run peer feedback and peer assessment sessions with their students. Peer feedback trains skills like critical thinking, constructing arguments, and giving and receiving feedback - all skills we believe will be even more important in the job market of the future.

I teach a university course in data science at the Technical University of Denmark. Two years ago my course suddenly grew from 20 to 150 students (I changed the title to include "big data"). To handle the sudden explosion in students, I started working on Peergrade. The idea was that I could save time on grading papers, the students could write better feedback than me since they had more time per paper, they would learn from reading and assessing each other's work, and I could reallocate the time I saved to more impactful things like mentoring students.

One of the things we heard from teachers who had tried peer feedback before was that it was challenging to motivate students to write helpful feedback. One way we try to solve this is by letting students assess the feedback they receive, giving a clear incentive for writing helpful feedback. To combat gaming and unfair assessments, students give feedback anonymously, and they can flag feedback they disagree with for moderation by the teacher. Since students are able to flag and counter-argue the feedback they receive, they also spend more time reading and thinking about the feedback.

Today Peergrade is used in institutions around the world, from 4th grade to higher education, across all subjects, with class sizes from just 5 students to thousands.

We look forward to answering any questions you might have about our product, tech stack or vision for the future :).
======
huac
Cool work! I think peer review has a ton of potential and is certainly
something that grows better with scale.

My favorite course in college used a similar approach to scaling grading,
described (very) briefly here:
[http://commons.library.upenn.edu/sites/default/files/picture...](http://commons.library.upenn.edu/sites/default/files/pictures/events/2015fader.pdf)

~~~
utdiscant
Thanks for the comment! I looked at WHOOPPEE a while ago and it is great work.
We have thought a lot about doing rankings instead of grading/assessing each
piece of work individually with evaluation criteria.

On the one hand, I believe rankings make it easier to get numerically
accurate grades. Humans are bad at rating things on an absolute scale, but
binary comparisons are doable.

The challenge with rankings for peer review is two-fold. The first problem is
that a ranking does not directly tell you where you are lacking or how to
improve. The second problem is that quality is generally not one-dimensional
(as the ranking would be).

These problems can be solved by letting students rank papers according to
multiple criteria (generating multiple rankings) and then combining those
rankings. The challenge with this solution is that doing multiple rankings
often becomes quite a time-consuming and overwhelming task. Also, from the
research I have looked at, students generally find it hard to pick one paper
over another and often request an "I don't really know" option, making the
problem somewhat less trivial.

------
jbj
Congrats David! I was in the course last year when we were ~150 students. I
really enjoyed both giving and reading the peer feedback from others. I was
also surprised how there would be this dynamic of exchanging tips and tricks
on what could have been done better between anonymized peers. All the best
with the future of peergrade! /Jakob

~~~
malthejorgensen
Hi Jakob – thank you – we're thrilled to hear that!

We at Peergrade strongly believe that there's a lot to be gained when students
share more of their experience (knowledge gained, questions, struggles) while
engaging with the class material. By sharing those learning experiences,
students learn faster, and our aspiration is for Peergrade to facilitate
exactly that sharing and improved learning.

------
lelandgaunt
Great work! I like the idea of not requiring students to have an account to
log in, but instead using a code. I also feel this is a great way to
introduce young people to the concept of "code review", which I believe is
critical to secure coding and their future.

~~~
utdiscant
Thanks! We originally built Peergrade with just email/password login because
we started with a focus only on higher education. When we got our first users
from schools where students didn't have email addresses, we had to rethink the
login flow. Using a code has a lot of advantages since it makes it much easier
to get started - but students often realize later (for example, a week later)
that they would have loved to save their feedback, which is now impossible.

I totally agree about code-review! I actually wrote a post on Medium about
teaching code review with peer feedback:
[https://medium.com/@davidkofoedwind/teaching-code-review-in-...](https://medium.com/@davidkofoedwind/teaching-code-review-in-university-courses-using-peer-feedback-5625fe039f2a)

------
the_wheel
This is great, congrats! How did you get your first few customers? How do you
plan on monetizing? I realize Peergrade is free, but, as an educator, do you
have any advice with respect to selling into K-12 vs higher-ed?

~~~
malthejorgensen
Thank you!

Our first customers came through our personal network; we then quickly
transitioned to cold emails to universities in our region.

We are already making money, primarily through sales to institutions in
higher ed. The longer-term plan is to continue down that path, with
institutional sales making up the majority of revenue, but with individual
teacher plans available as well. The freemium model is to get teachers hooked,
and then have their school, department, or university purchase as they see
increased use and a need for integration with their existing school system
(LMS). This is a path we have seen for a number of our current customers.
Integrations especially seem to drive institutions to buy.

K-12 vs higher ed is a complex beast, but an analogy can be drawn to consumer
vs B2B. K-12 means high-volume, low-price, low-touch deals and shares some
characteristics with the consumer market (virality + simple products). Higher
ed means lower volume and larger deal sizes, but more hands-on sales.
Currently our sales and marketing efforts are focused on higher ed because it
seems to have the best ROI.

------
p_g_p_g
What quality-control features do you have in place to ensure that peers take
the time to provide quality feedback? I have often found that incentives
alone do not suffice.

~~~
utdiscant
Great question! There are a few different things we do. First of all, students
can flag parts of the feedback for moderation by a teacher. Secondly, we
calculate the inter-rater agreement between reviewers on a submission - that
way teachers can look at submissions with high disagreement.
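As a rough illustration of the disagreement check (a toy sketch with made-up numbers and threshold, not our actual implementation), you can surface submissions whose reviewer scores have a large spread:

```python
# Toy sketch: flag submissions whose reviewers disagree the most,
# using the standard deviation of the scores each reviewer gave.
# Not Peergrade's actual implementation - the threshold is made up.

from statistics import stdev

def flag_disagreements(reviews, threshold=1.5):
    """reviews maps submission id -> list of reviewer scores.
    Returns the ids whose score spread exceeds the threshold."""
    flagged = []
    for submission, scores in reviews.items():
        if len(scores) >= 2 and stdev(scores) > threshold:
            flagged.append(submission)
    return flagged

reviews = {
    "essay-1": [8, 9, 8],   # reviewers roughly agree
    "essay-2": [3, 9, 10],  # strong disagreement -> show to the teacher
}
print(flag_disagreements(reviews))  # ['essay-2']
```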

On the slightly more complex side, we have some algorithms that can detect
biased students (similar to this:
[https://arxiv.org/abs/0711.3964](https://arxiv.org/abs/0711.3964)). Another
simple trick: if one reviewer says your work is good, another says it is bad,
and you don't flag the negative feedback, then the positive review is probably
too nice.

The final thing we are working on is making sure that students are honest
when they assess feedback quality. To do this, we use some natural language
processing to estimate the quality of feedback, so we can check whether our
model agrees with the student who reviewed the feedback.

Basically, we do a lot of automated outlier detection and highlight outliers
to teachers for manual moderation. Not all of this is in production yet (in
the end, students are actually quite honest), but all of it is working in non-
production environments on actual data.
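To give a flavor of the outlier detection (again a toy sketch with assumed data and cutoff, not what runs in production), one simple version flags reviewers whose average score is far from everyone else's:

```python
# Toy sketch: flag reviewers whose average score is a z-score outlier
# relative to the other reviewers - e.g. a systematically harsh grader.
# Illustrative only; the cutoff and the data are made up.

from statistics import mean, stdev

def biased_reviewers(scores_by_reviewer, z_cutoff=1.5):
    """scores_by_reviewer maps reviewer id -> list of scores they gave."""
    means = {r: mean(s) for r, s in scores_by_reviewer.items()}
    overall = mean(means.values())
    spread = stdev(means.values())
    return [r for r, m in means.items()
            if spread > 0 and abs(m - overall) / spread > z_cutoff]

scores = {
    "alice": [7, 8, 7],
    "bob":   [8, 8, 9],
    "carol": [7, 7, 8],
    "dave":  [8, 9, 8],
    "eve":   [2, 3, 2],  # much harsher than everyone else
}
print(biased_reviewers(scores))  # ['eve']
```

The real systems are more involved than this, but the principle is the same: compute a statistic per student, then highlight the outliers for the teacher.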

~~~
lurker-
I used it for one of my courses a little over a year ago; I don't recall
seeing any flag, but maybe that's a newish feature.

My primary problem was that the professor often created a question or two that
9/10 groups hadn't addressed, and failed to add questions addressing the value
of their work. This meant a group may have created a product worthy of a
95+/100 score, but still lost points for failing to address some poorly made
question by the professor. Although you could argue that flaw lies with the
professor rather than your product :)

Another problem was that it was very easy to figure out the owner of the
assignment, which basically meant that many of the students would give top
score to their friends regardless of the quality of their work.

Finally, I hated the psychology that came into play when grading peers,
especially because I couldn't help but think that the professor would be able
to associate the comments/scores with me (resulting in me giving overly
positive scores to terrible assignments).

That being said, I think what you're building is really great, and if I were a
professor then I'd likely want to use it myself.

~~~
utdiscant
Thanks for your comment! We love feedback (part of our DNA) ;).

One of our biggest challenges is making a product that is flexible for
teachers but hard to misuse. Bad assessment rubrics give bad student
experiences, and unfortunately we can't make the rubrics for the teachers. We
are in the process of writing a small booklet for teachers on how to make
better rubrics.

Anonymity is especially a challenge in smaller classes. We make sure to strip
metadata from the submissions, so if people don't put their names in the
documents, the only ways to determine the author are to know the content or
recognize the writing style. It sounds like the challenge in your course was
that people were working on different projects and you knew who was working
on what?

The psychology part is the trickiest. We want a setup where the incentives are
right, but where people aren't scared of giving feedback to each other. We
primarily do this through feedback-on-feedback, where recipients are asked to
review the quality of the feedback they received; we then let teachers
moderate this and students flag problems. I don't think we are completely
there yet, but I can definitely see that we are making progress on solving
this :).

~~~
lurker-
Yeah, it was a group project with a lot of presentations, so you could
instantly recognize who a project belonged to just by looking at the group
number or easily recognizable pictures. Not sure if there was other
identifiable information, but you say Peergrade doesn't display this, so I
guess here it's again the fault of the professor for letting students upload
entire documents rather than letting them copy in parts of the text.

That being said, I do seem to recall that the uploading process was somewhat
simplified, meaning we had no choice but to upload everything together rather
than splitting the upload into different sections. If the assignment was a
research paper, then instead of uploading it all together (usually as a PDF),
it could be divided into sections, e.g. introduction, related works, etc. With
only text it would be much harder to recognize the author, and if person A
reviewed the introduction of person B and the related works of person C, it
would be even more difficult to identify anyone. This could also go a long way
towards solving the poorly made questions by the professor, since it would be
guaranteed that all the questions were relevant and completed (not sure if
that makes sense, and I wouldn't be surprised if I remember wrong and all of
this is already possible ^_^)

About feedback on feedback: it sounds good in theory, but I recall many
wouldn't bother looking at it (especially because most would just leave "ok",
"good", "I agree", etc. as their grading comment), and those who did would
just skim it without taking any action to correct poor feedback where it was
clear the grader hadn't bothered to read the content in detail. But I of
course have no idea whether our behavior was the norm rather than the
exception :)

In fact, one of my biggest issues with grading in general (second only to
being graded based on a stupid test/presentation at the end of the semester
rather than the assignments completed throughout the course) is that teachers
will often be biased, and I think you have a real shot at fixing this by
ensuring that even the teachers don't know the author of the assignment they
grade.

~~~
utdiscant
You are right - at this point we only allow students to submit one piece of
work. The reason is to limit confusion on the reviewer's side and to ensure
that proper context is kept - it is not always possible to review the
conclusion without the introduction. That being said, we are looking at
alternative solutions.

In relation to the feedback on feedback, it is again a question of how the
teacher implements it. In my own course, the quality of the feedback a student
provides counts for 30% of their final grade, which makes the incentives very
clear.

I totally agree with teacher bias being a huge problem. We have a lot of
teachers telling us that they are surprised when Peergrade highlights their
own personal biases! We are building a way for teachers to also grade people
anonymously to further this :).

------
s1mpl3
Congrats! I remember watching your YC experience through Medium Series. The
journey seems super exciting :).

~~~
utdiscant
Thank you! Happy to hear someone was actually reading that. The experience of
using Medium Series as an author was not very pleasant, to say the least.

------
malmsteen
I've heard of a few other guys doing that... or maybe it was you already :-p.

Why are you better than the others?

~~~
utdiscant
Hope it was us! I think we are better for a few reasons.

1) Feature-wise, we are the most complete peer feedback solution on the
market. This matters because very few classes and courses are the same, and
teachers like the flexibility to fit the product to their teaching.

2) We spend a lot more time thinking about the user experience for teachers
and students in our product. Getting accurate grades and having a lot of smart
features is of course important, but making the product easy to use and set up
is more important.

3) Most other peer feedback solutions seem to be built with a narrow mindset
of just scaling grading. We are building Peergrade with a focus on feedback
over grades, and we are always looking to improve the way students interact
with and learn from each other.

Hope that answers your question :)

