Launch HN: Prolific (YC S19) – Quickly find high-quality survey participants
48 points by psb31 73 days ago | 23 comments
Hey HN,

We’re Katia and Phelim, cofounders of Prolific (https://www.prolific.co). We help psychological and behavioral researchers quickly find participants they can trust.

We built Prolific because Katia had a hard time finding participants for her psychology studies during her PhD. She briefly used Amazon's Mechanical Turk (MTurk), but didn't like the user experience and couldn't get the data she wanted (UK participants). The fundamental problem we're hoping to help with is better access to psychological and behavioral data. This is challenging in many ways: you have to balance the growth of a multi-sided platform, achieve high data quality, align incentives for all stakeholders (researchers, participants, ourselves, society), and diversify the participant pool, to name a few. We're first-time founders and we've been bootstrapping our startup for the past 5 years during our PhDs.

Researchers build their surveys using Google Forms, Qualtrics, SurveyMonkey, Typeform, or another tool; all you need is a survey URL to get started. We verify and monitor participants so you can get data fast (most surveys are completed in under 2 hours). Studies range from one-to-one interviews to surveys of thousands of people, and you can retarget participants anonymously for follow-up studies. You only pay for data you approve. Our business model is to charge a service fee (typically around 25-35%) on top of the rewards researchers pay participants.
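
To make the pricing concrete, here is a rough back-of-the-envelope sketch; the fee, survey length, and sample size are illustrative assumptions, not a quote:

    # Illustrative cost estimate for a study (all numbers are assumptions).
    survey_minutes = 10
    hourly_reward = 6.50        # the minimum hourly reward we mandate (see below)
    service_fee = 0.33          # within the ~25-35% range quoted above
    participants = 200

    reward_per_response = hourly_reward * survey_minutes / 60
    total_cost = participants * reward_per_response * (1 + service_fee)
    print(f"~${reward_per_response:.2f} per response, ~${total_cost:.2f} total")
    # -> ~$1.08 per response, ~$288.17 total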

We have 70,000+ survey takers in Europe and North America (for distributions of demographic variables see https://www.prolific.co/demographics) and 100s of demographic filters (try our audience checker via https://app.prolific.co/audience-checker). This means we can find many target demographics for you. For example, you can filter for Democrats vs. Republicans, old vs. young people, students vs. professionals, different ethnicities, people with health problems, Brexit voters, and even collect nationally representative samples!

Anyone can sign up as a participant and start earning a little extra cash.

It's possible to do research using existing platforms like MTurk. Actually, over 50% of behavioral research is now run online, mostly on MTurk. But there are problems with the quality of data you get from existing platforms, and worse, problems with how the people who participate get treated [1]. Our approach addresses these issues. We think the key differences are:

- Data you can trust: We mandate a minimum hourly reward of $6.50, and often rewards are even higher than that. As a result, participants feel respected and treated like valuable contributors, and provide high quality data. We comply with data protection regulation and have a range of technical and behavioral checks in place to ensure high quality data [2].

- Flexible, free demographic prescreening: You can easily invite participants for follow-up studies at no extra cost. You can get niche or even nationally representative samples on-demand.

- Built by researchers for researchers: We try to distribute studies as evenly as possible across our participant pool through rate limiting, so we have less of a problem with "professional survey takers" than MTurk.

Our bigger vision is to build tech infrastructure that empowers behavioral research on the internet. The market opportunity is significant because individuals, businesses, and governments would all benefit from better access to rigorous behavioral data when making decisions. For example, what could we do to best curb climate change? What's the best way to change unhealthy habits? How can we reduce hate crime and political polarization? The stakes are high, and behavioral research can help us find better answers to these kinds of questions.

Moreover, although we built Prolific primarily to help academics, we've noticed that businesses have been using the platform for things like market research and idea validation. This is a new market for us that we're excited to explore. We’d love to hear about any ideas, experiences, and feedback you might have. Thank you!

[1] https://news.ycombinator.com/item?id=19719197

[2] https://blog.prolific.co/bots-and-data-quality-on-crowdsourc...




I'm locked in to my current vendor, but I just wanted to share some of my thoughts about some pain points I saw in your sales pitch.

My experience is that most academics pay less than $6.50 per hour for an initial point of contact. For instance, I am currently fielding a survey (N > 5,000 per week, cross-sectional rather than panel) and we pay about $2 all-in, including the provider's charge, for a survey that's about 20 minutes. We'd fall afoul of your compensation rate pretty substantially. If we wanted to do some panel work and we needed re-contact, we'd definitely ramp up our payment quickly to help avoid attrition, but for the first contact, no.

If we wanted to pay your rate, we would almost certainly have to get an additional sponsor partner to piggyback some consumer research on top of our actual treatment. We don't want to do this; it's hard enough dealing with our primary funders. This makes me believe you are mostly targeting commercial / market behavior researchers. That's fine, but the pitch suggests you want academics. For transparency's sake, what is your balance of private and university clients as of today?

Second, even working with large sample providers, their pools are often fairly small. We requested 5,000 unique respondents a week for a year and found our sample provider could only guarantee a 1-2 month lockout. Obviously the effective pool you need to guarantee 5,000 * 52 is enormous and so we were expecting to have to negotiate on lockout, but all of this dances around the fact that sample providers are not transparent about the size of their pool and researchers like us are constantly worried about fraud both by sample providers and by respondents. How large is your pool?

Third, this kind of quota sampling relies on our ability to weight the sample to the population. Weighting is totally permissible, but responsible weighting is going to cap the weight at the high end -- no one wants the one black Republican to skew the entire poll because they have a 150 weight on their observation (this isn't me getting needlessly political, this happened with the USC tracking poll last election cycle). In my experience, the hardest thing about quota sampling as opposed to the old RDD phone samples 20 years ago is that it's very difficult to get high education / high SES / high income respondents. High income respondents should be 10% of the population and they simply are nowhere near 10% of the pools that you get from standard recruitment methods. Can you speak a bit about a) how you recruit high income people into your pool; and b) what percentage of your pool would be high income (say HHI > $125k a year or so).
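
For anyone less familiar with survey weighting, here is a minimal sketch of the capping idea; the group shares and the cap are made-up numbers, purely for illustration:

    # Minimal sketch of capped post-stratification weights (made-up numbers).
    population_share = {"high_income": 0.10, "other": 0.90}  # assumed population targets
    sample_share = {"high_income": 0.01, "other": 0.99}      # assumed realized sample
    MAX_WEIGHT = 5.0                                         # the cap is a judgment call

    weights = {
        group: min(population_share[group] / sample_share[group], MAX_WEIGHT)
        for group in population_share
    }
    print(weights)  # {'high_income': 5.0, 'other': 0.909...} -- uncapped would be 10.0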

Finally, what is your pool attrition rate? If someone takes their first Prolific survey today, what is the probability they will still be taking a survey a year from now? It's nice that you have re-contact as part of your system, but the problem in my experience is not figuring out how to recontact, it's getting people to stay engaged for a long time.

Hope you have good answers to these questions, and that if you do, answering them here will help you get positive exposure from other readers.


> In my experience, the hardest thing about quota sampling as opposed to the old RDD phone samples 20 years ago is that it's very difficult to get high education / high SES / high income respondents. High income respondents should be 10% of the population and they simply are nowhere near 10% of the pools that you get from standard recruitment methods. Can you speak a bit about a) how you recruit high income people into your pool; and b) what percentage of your pool would be high income (say HHI > $125k a year or so).

A friend of mine runs into this periodically, with her response pools over-sampling on lower income and older individuals, with a lot of geographic skewing (she does local/regional research, so appropriate sampling at a census block and/or zip code level is needed).

I've worked with her several times to fill that gap by leveraging Facebook ads for recruitment to balance out the deficiencies in her response pool. Ad cost + incentive tends to work out to the ~$10 range per qualified response. That's too expensive as a recruitment method for her entire pool, but she's found it fantastic for reaching the demographic gaps she'd otherwise have. Different demographics respond better to certain incentive structures and ad copy than others, but in general it's been effective for capturing those hard-to-get groups.

Leveraging the same avenue for re-contact is also handy if your response pool is large enough to create a custom audience to target with follow-up ads. Even out-of-date contact info can be useful for this. Although your IRB (if relevant) may shut that down if the language in your initial contact didn't account for it, since doing this involves disclosing the subject's PII to a third party (Facebook) to create the custom audience.


> what is your balance of private and university clients as of today?

Our clients are about 80-90% academic, and we find that there's a strong move within academia towards fair payments. In the long term we think this will show up in data quality and sample reliability, such that paying fairly ends up delivering the best value. Regarding cost, we're at about $2 total for a 15-minute survey right now.

> How large is your pool?

Lack of transparency about pool size is one of the most frustrating things about online panels. This is one of the reasons why we don't report our total pool size, but only our 'active and accessible' pool (participants who have been active within the past 90 days). As a result, you can expect more than 50% of the eligible participants we report to actually take part within the next 24-48 hours. We have ~70,000 active participants (20,000+ in the US). I expect we would be able to get a sample of 5,000 people in under 48 hours, but would only be able to repeat this for about 10 weeks with unique participants. Our 'pool', as traditional panel providers would measure it, is approaching 500,000, but we don't think this is a useful metric because the majority no longer respond to invites.
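
As a rough sanity check on those numbers (the response rate here is an assumption for illustration):

    # Back-of-the-envelope check on the estimate above.
    active_pool = 70_000        # participants active within the past 90 days
    weekly_sample = 5_000
    response_rate = 0.7         # assumed share of active participants who take part

    weeks_of_unique_samples = active_pool * response_rate / weekly_sample
    print(round(weeks_of_unique_samples))  # ~10 weeks before unique participants run out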

> a) how you recruit high income people into your pool;

To be honest, we haven't cracked this nut yet, and we expect our pool is currently unrepresentative of high-income people too. Ideas we have to help address this problem are to 1) introduce charitable donations for those who aren't motivated by cash incentives, 2) improve non-financial incentives (e.g. feedback on the impact your data is having), and 3) send highly targeted invites: we hypothesize high earners would be willing to help out if we need someone with their demographics in particular, and if this is communicated well. If you have suggestions, we're all ears!

> and b) what percentage of your pool would be high income (say HHI > $125k a year or so).

According to self report, we have ~2,000 participants with a household income above $100k/year (the closest threshold on our screener), though it's possible this is slightly inflated and we don't (yet) have a way to verify income levels.

> Finally, what is your pool attrition rate?

Our annual retention rate is ~40% (i.e. if a participant takes part in a study, they're about 40% likely to do so again 12 months later). There's a balance between 'refreshing' our pool and keeping engagement high, and we're working on keeping "naivety" high while allowing for studies over a long period!


> send highly targeted invites: we hypothesize high earners would be willing to help out if we need someone with their demographics in particular, and if this is communicated well. If you have suggestions, we're all ears!

I have had some involvement in running charity events that target high net worth people. You attract them via their ego. Put their name on everything, use their name frequently when speaking with them.

And the kicker - never actually ask them to donate or take the survey or whatever. Just describe what you’re doing and why you’re doing it. They already know you’re talking to them because you want something from them, but if you ask them directly, they’re trained to say no because they have to do it all day. You need to let them make the decision to participate on their own.


That's super interesting, thanks for sharing. "The psychology of high net worth people"... lots to explore there!


This is definitely a space that needs innovation! How do you plan to handle the case of survey takers being real people but just skimming through the survey?

In my experience running research studies, this was the main problem with MTurk. Things like bots and blatantly junk answers were relatively rare and easy enough to detect and filter. What was much harder to deal with was the fairly high volume of users who just want to finish the survey as fast as possible. It's not so bad for short surveys, but anything over 5 minutes starts to have issues with low quality responses.

We had to introduce several control questions to check for consistency in answers and measure time to find outliers. But, it was not an ideal setup and there were still many survey takers we suspected were not paying much attention. The breakdown for our studies was something like 60% good quality answers, 35% low quality answers that are hard to distinguish from high quality, and 5% total junk answers. We ended up doing an in-person study where we got much cleaner results, presumably because people pay better attention when they feel someone is watching them.
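
The timing check amounted to something like the sketch below; the data and threshold are made up for illustration, not from our actual study:

    # Sketch of flagging implausibly fast completion times (illustrative data).
    from statistics import median

    completion_seconds = [1150, 980, 1230, 1400, 310, 1050, 290, 1190]
    cutoff = 0.4 * median(completion_seconds)   # e.g. flag anything under 40% of the median

    speeders = [t for t in completion_seconds if t < cutoff]
    print(cutoff, speeders)   # 440.0 [310, 290]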

Wages don't seem to do much for this. We were paying a relatively generous $4 per HIT, estimating 30 minutes per HIT when in reality the average time was under 20.

So I'm wondering, are you able to share with us how exactly the "unusual data patterns" and "technical and behavioral checks" can help ensure quality in an ecosystem where users are generally motivated to 1) finish surveys as fast as possible, and 2) appear as if they are giving high quality responses when they are not?


It's a great question and a tough problem. We have written a little bit about the problem of 'slackers' (participants with low attentiveness and engagement) [1,2]. The things we're doing right now to address this problem are to 1) test for attentiveness and engagement before participants take part in real studies, 2) distribute surveys more broadly to reduce the % of "professional survey takers" in each study, and 3) educate researchers about ways to use good attention and engagement tests [3] in their studies. We can then feed this data back into our system so we can iteratively improve data quality.

I can’t go into too much detail about the attention and engagement checks we have built in, but if you sign up as a participant and pay attention you might spot some. In the long run, we think good feedback systems and high trust will be key so that our pool iteratively gets better and participants don’t feel as incentivized to cheat. It’s key to make sure that participants feel that their high effort responses get fairly rewarded (both financially and with social appreciation).

[1] https://blog.prolific.co/how-to-improve-your-data-quality/

[2] https://blog.prolific.co/bots-and-data-quality-on-crowdsourc...

[3] https://researcher-help.prolific.co/hc/en-gb/categories/3600...


This is really interesting. I've run into the issue of finding qualified participants for user research studies many times. The companies focused on that space tend to be very business-oriented. Will take a look next time I have a survey!

A tiny thing I noticed on the site. If you change the estimated time on your "Study cost calculator" and hit tab, you are bounced down to the next section on the screen so you cannot see the results. I believe adding "tabindex=0" to the HTML on the result would solve this. I kept getting confused as to where the result went. :-)

Good luck on your project!

Edit: typo


> A tiny thing I noticed on the site. If you change the estimated time on your "Study cost calculator" and hit tab, you are bounced down to the next section on the screen so you cannot see the results. I believe adding "tabindex=0" to the HTML on the result would solve this. I kept getting confused as to where the result went. :-)

Hi, Leo here. I'm a frontend developer at Prolific. Thanks for the suggestion! I've just raised a pull request to fix this :)


Awesome, nice to see quick iteration!


It's very cool to see this idea, most notably because I had it myself! I thought about it between 2014 and 2015. For me, your post is a case study in why I didn't go through with it and you did.

What I'm noticing is the following:

My issues:

1. Not being able to find a good business model for the markets I identified

2. Not being able to identify all markets

3. Not testing it out or simply starting it

4. Uncertain whether I'd be able to amass a pool

My successes:

1. Identifying the same pain point for researchers

2. The ability to create the web app (not that I did this, but I was confident back then that I could and in retrospect, I think I was right)

It's funny, as I'm currently prepping to become a strategy consultant (first round at an MBB firm; never got a reply from FAANG in Amsterdam) and I'm already much better at the conceptual thinking regarding my issues -- though strategy consultants only scratch the surface when it comes to identifying markets, so I have a lot to learn there. And since this is all just interview prep, I've got a long way to go in learning business skills anyway.

An unusual tip if you're 'suffering' the same fate as me: look on YouTube for how to do new-market-entry consulting cases and you'll get an idea of how to think about it from a high level.

I still wonder what good resources there are for learning marketing, though.


Thanks for sharing your experience! At Prolific, we found that one good resource for learning about marketing/growth is the book Traction by Gabriel Weinberg and Justin Mares. It explains 19 different growth channels and walks you through best practices for experimenting with them.

On a very basic level, the best way to validate your market & marketing ideas is talking to the right target audience. Often you can do this in your personal circles, but sometimes you need a less biased sample (in which case Prolific might be able to help). Good luck!


Very cool!

I signed up and will be looking out for surveys. Looking at the demographics, I noticed the survey panel is similar to that of Hacker News: white, English-speaking 20-30 year olds, many of whom are in school. I'd love to hear about your efforts to build a more nationally/globally representative sample.


Hi, Katia here. Thanks for your question!

Right now we're available to participants in OECD countries, and it's probably going to be a while before we can expand globally. Within the 36 countries that we're in, we're currently trying to work out what incentives might work best: How can we encourage the average citizen to sign up and take part in online research? What about very hard-to-reach demographics like professionals?

We feel that cash generally works well as an incentive, but it's not everything. A lot of the time people want to contribute to projects they personally care about. So on Prolific's end, it will come down to matching projects with the right participants. Another type of incentive could be offering participants the option to have Prolific donate their earnings to charities on their behalf, perhaps with Prolific matching the donation.

We’ve recently launched quota-based representative samples for the US and UK [1], where we stratify based on sex, ethnicity, and age. A by-product of this feature is that it specifically invites niche demographics to participate, helping fill gaps. We hope that having more surveys available for demographics that are in the minority (e.g. certain ethnicities and age groups) will improve our ability to recruit these participants (although there’s a bit of a chicken-and-egg problem here).
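
At a high level, the quota allocation works something like the sketch below; the strata and shares are simplified placeholders rather than our actual census targets:

    # Simplified illustration of quota-based sample allocation (placeholder shares).
    target_n = 1_000
    census_shares = {                        # assumed population proportions per stratum
        ("female", "18-37"): 0.17, ("female", "38-57"): 0.17, ("female", "58+"): 0.17,
        ("male", "18-37"): 0.17, ("male", "38-57"): 0.16, ("male", "58+"): 0.16,
    }

    quotas = {stratum: round(target_n * share) for stratum, share in census_shares.items()}
    print(quotas)   # e.g. {('female', '18-37'): 170, ..., ('male', '58+'): 160}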

Another thing we’d like to do is launch a mobile app for participants, so anyone can take a quick survey on the go (while waiting in line, chilling at home, commuting). Making our surveys more accessible to participants through different channels should help represent more people in society.

And then there’s user experience. We’re working on making our site as self-explanatory as possible, so anybody can sign up and start participating, even if you’re someone who doesn’t spend much time on the internet.

What do you think -- what might be other ways to diversify our participant pool? Very curious!

[1] https://researcher-help.prolific.co/hc/en-gb/articles/360019...


Congrats on creating a side hustle while completing grad school. This UX is far superior to MTurk, but as others have suggested, the price point is definitely higher. A few questions:

1) "You can easily invite participants for follow-up studies at no extra cost." So, do I re-contact them through your platform, or do I gain access to their contact info?

2) What % of customers are repeat buyers (multiple surveys)?

3) What if I want to recruit childless individuals? The current audience-checker doesn't seem to offer zero as an option.

4) How automated is this process for you? What % of customers require a touchpoint or multiple touchpoints?


Thanks! We're actually cheaper than MTurk for the same participant reward. We take a 33% fee, while MTurk takes 40%.

1) You can run follow-up studies through our platform using anonymous identifiers we provide.

2) ~40%

3) We have many more filters when you create an account. The "Number of children" filter is actually a follow-up to "Do you have children?". If you select "No", you'll see we have ~38,755 active participants who are childless.

4) Good question, I don't have good stats on this right now. The process of setting up a study is mostly touchless for us, especially for 'expert' users who have run online research elsewhere. Support mostly consists of billing issues, custom filtering, and complex studies (we sell to universities, so billing can be complex). We're working on improving onboarding and making the whole process as self-serve as possible.


How are you selling to universities? Do you mean individual labs, or departments, or enterprise level university accounts? My experience has been that individuals are easy, departments are hard, and universities are next to impossible. Since your product is ad hoc, perhaps it's easier. You might want to think about how you could offer this as a SaaS.


Started using Prolific a few days ago. Love the fact that it has a clean and simple interface and that it pays in British pounds, which then translates to a higher CAD/USD amount. The only thing is that there are not enough surveys, and the site sometimes signs me out after showing a 500 error. Would also love to know if you guys plan to have a mobile app / mobile-friendly site at some point. I started using the Qmee app on my phone and love its interface and level of engagement so far. I understand that Prolific and Qmee might be serving different markets, but it would be nice to see Prolific implement some of it. Would love to give additional feedback if you'd like.


We use Survey Monkey. This seems similar/identical. How do you compare to that? Is there a reason to pick you over them? They charge about $1/response.


The short answer is that with Prolific you can use any survey/experimental software you'd like. Behavioural researchers (our primary customers) don't tend to use SurveyMonkey, but instead use Qualtrics, Gorilla, and custom experimental software which have many features important for behavioural research (e.g. randomisation and reaction time recording).

In addition, SurveyMonkey Audience uses a survey exchange provider, Cint [1], so you don't have any transparency into where exactly the participants are coming from, how they're vetted, or the user experience they're having, and you can't communicate directly with participants or retarget them for longitudinal studies. All of this is important for research, and these are features that Prolific provides.

[1] https://www.cint.com/press-release/surveymonkey-audience-exp...


Could I send someone a link to my iOS app and have them record themselves using it for like 10 min? And get a little feedback from them? How much would that cost?


Yes, you could use Prolific for something like this. You would need to use our "opt-in" screeners and avoid collecting personal data as part of the study.

It would probably cost about $5-15 (it depends on how much you would pay the participant; I would recommend being generous for a study like this).

If you'd like to tell us a little more about your goals, we can get back to you with some more specific information: https://prolific2.typeform.com/to/aczPoI


Nice work, checking it out now as I've been looking for something like this :)



