Pioneer could do a much better job of presenting the reality: experts are involved in picking the winners, and the points earned in the tournament are also taken into account.
Another point I'd like to bring up: the home page says winners get $7000, but the offer page mentions that it's $1000 cash and $6000 worth of shitcoi...sorry, Stellar lumens. Maybe there's a pattern here? Misleading marketing material of the kind we typically expect from tone-deaf big corps does not inspire confidence in the team behind Pioneer.
Another point: for some reason they see the need to display each participant's age and country. If you know anything about hackers, it's that they greatly value privacy. That's pretty much the only thing that kept me from participating when it launched, and now I have zero interest even if personal details were not displayed publicly.
$5,000 of the $6,000 in lumens are also locked up for 2 years. The foundation held back ~80% of all lumens for marketing giveaways, which I assume is where they're coming from.
> $6,000 in Stellar lumens, including $1,000 completely unlocked and $5,000 locked up for two years.
Pioneer founder here. As noted by almost all commenters in this thread, we do rely on expert voting. It's in our FAQ. We take pride in that! I believe a mixture of moderation _and_ quantitative metrics is the correct path toward accomplishing our goal.
We've all seen what happens when you let the algorithms run free.
We've got much to fix in terms of making the game more _fun_. Point systems are best when you _understand_ the underlying mechanism. There's a tension between that and moderation. Losing should feel educational, not frustrating. We've got some ideas on how to fix this, and we're burning the midnight oil implementing them.
(BTW, we're very open to any product suggestions here! In case you're wondering what the ruckus is about: the Pioneer idea is to make an online game that rewards productive, creative behavior. Which raises the question: how do you quantify productivity in different domains?)
When one component of the system has infinite power over the other (moderation over quantitative metrics here), you can't call that a mixture IMO.
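To illustrate with invented numbers: if the expert adjustment is unbounded, the "mixture" reduces to whatever the experts decide, regardless of how the peer points fall. A minimal sketch, not Pioneer's actual formula:

```python
# Hedged illustration, not Pioneer's real scoring: when one term can grow
# without bound, it can always override the other.
def leaderboard_score(peer_points: float, expert_adjustment: float) -> float:
    return peer_points + expert_adjustment  # adjustment is unbounded

# Experts can reorder any two players regardless of peer votes:
print(leaderboard_score(1000, 0))    # top peer-voted player -> 1000
print(leaderboard_score(400, 700))   # expert favorite -> 1100, now #1
```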
I would suggest these three things:
(1) Separate the concepts of "experts" and "organizers/developers". When I hear "experts", I assume "independent experts".
(2) Change the deceptive language on the landing page. Quoting: "All you need to do is convince other participants that your project is worth doing." and "Every week other participants will give you feedback and points. The more progress you make, the higher your score will be." No mention of experts, let alone organizers, at all.
(3) Make the points transparent. That includes breaking down peer voting vs. expert voting vs. whatever else, and disclosing the details of the scoring and matchmaking systems. It doesn't have to be the exact spec/formula, obviously, just the general details.
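For illustration, here's one hypothetical shape such a breakdown could take. All field names and numbers are invented; Pioneer publishes nothing like this today.

```python
# Hypothetical transparent score report; every field and value is invented.
from dataclasses import dataclass

@dataclass
class ScoreBreakdown:
    peer_votes: int     # points from weekly peer voting
    expert_votes: int   # points added by expert reviewers
    progress: int       # points from weekly progress reports

    @property
    def total(self) -> int:
        return self.peer_votes + self.expert_votes + self.progress

report = ScoreBreakdown(peer_votes=740, expert_votes=150, progress=210)
print(f"total={report.total} (peers={report.peer_votes}, "
      f"experts={report.expert_votes}, progress={report.progress})")
# -> total=1100 (peers=740, experts=150, progress=210)
```

Even a coarse split like this would let players distinguish "experts liked someone else better" from silent re-ranking.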
Well, I wrote to you (the Pioneer team) multiple times over the past week. I actually sent all the materials for this article yesterday.
Edit: re-read your comment more carefully. To answer: because the exchange was not productive. I only saw general comments equating to "you don't understand our opaque system" without even acknowledging the arguments I was making.
To add to my previous comment: what separates "expert voting" from "leaderboard manipulation" most, in my opinion, is access to the leaderboard state. If experts vote in isolation, based solely on the project info and reports, it's a "mixture of signals". If they have unlimited power and change the scores to shape the leaderboard into a desirable state, it's manipulation.
I agree that Pioneer is super sketchy about its expert scoring system. I think in practice it's more like peers float your app to the top of the applicant pool and then the experts don't bother with the ones ranked lower, so it's not as if the peers are useless. At least that's what I figured when I applied last year, and gave up when I realized that the experts were not really selecting "outsiders".
Protip: never try to hide your shortcomings. Embrace criticism and scrutiny. If you do it right, these instances should only help you become a better team and build better products. How you respond to scrutiny pretty much shows how you handle yourself when the going gets tough, and comments such as these just don't inspire confidence.
Do experts take previous performance into account and adjust current scores to ensure previous top players are pushed out of the top 10?
The data seems to suggest this, which goes to extreme lengths beyond "expert votes".
And is more akin to changing the rules at the last minute to avoid living up to the promises put out by the platform.
If this is not the case then perhaps you can release more data to illuminate the oddities highlighted by the article?
I started entering a Pioneer project because I liked the idea that many projects start as "small ideas" and some of those ideas may have more potential if the creators receive the right type of positive feedback from others early in the process.
Also, it could help, in theory, to be forced to provide progress reports on a project, as Pioneer encourages. And Pioneer got a lot of things right with their UI for participants.
However, it seemed like most successful projects on Pioneer are pre-existing projects shimmied into the Pioneer format: often they're projects with a year of work behind them and significant existing funding, dressed up as a one-month project. Or they are projects that are completely impossible to evaluate within the Pioneer model (such as a new car suspension system, which an online reviewer can't possibly give feedback on without an in-person meeting to observe the device). And yeah, the algorithm of the point system is completely impenetrable to the participants.
I think it would be great if someone created something like Pioneer but with a more tractable ethos: supporting only projects that truly are a month old since inception, and only projects whose progress can be directly observed online through community evaluations.
I'd also like to take a moment to chime in here. I competed in Pioneer for the first three monthly tournaments, ranking high (and being selected as a finalist) but never being selected as a winner.
From the beginning, it has been apparent that expert voting plays a significant role. I'm not sure if they're still describing the Tournament like this, but I remember an early descriptor of Pioneer as a system to find great talent, a way for people to get visibility. If that's the case, it serves its goal.
I have definitely seen problems with letting the winners be determined solely by score. If Pioneers were simply selected by the algorithm, it wouldn't result in a good experience, since what applicants find important about other applicants can differ from what the tournament organizers and experts find important.
When I participated, though, I certainly had some of the same concerns the OP presents about transparency. When we stopped participating in the tournament, I reached out to the Pioneer team via email to share some of those concerns, and they were very receptive. They took the time to reply to everything I wrote, point by point, and acknowledged their shortcomings. What Daniel (the founder) posted in this thread is true: they are working hard to continuously improve the system.
I've gone back and forth on my feelings about Pioneer, but I have come to truly believe that they have a really excellent team that cares a lot about their mission and their participants.
Disclaimer: I was a winner in the August ‘18 batch.
Pioneer has been changing the rules and game mechanics each month, but since the beginning, the advisors have always had an influential role in determining the winners. The month I was in the winning batch, I was never in the top 10, but was pushed over the edge in the final evaluation period.
My understanding is that the winners are decided by a combination of perceived progress, points, and weighted evaluations by past winners and advisors.
The reason for this is that not all projects can be compared equally. Someone with a funded startup and a team of five shouldn't be compared on equal terms with a single founder starting a project in a developing country.
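To make the mechanism concrete, here's a hedged sketch of what such a combined score could look like. The weights and scales are pure assumptions for illustration; Pioneer hasn't disclosed its actual formula.

```python
# Illustrative only: invented weights combining the signals described above.
def final_score(points: float, progress: float,
                advisor_eval: float, winner_eval: float) -> float:
    """Combine tournament points (0-100), perceived progress (0-100),
    and hypothetical 0-10 ratings from advisors and past winners."""
    return (0.4 * points
            + 0.2 * progress
            + 0.3 * advisor_eval * 10   # scale 0-10 rating up to 0-100
            + 0.1 * winner_eval * 10)

# A lower-ranked player with strong advisor backing can overtake a
# higher-ranked one, matching the "pushed over the edge" experience:
print(final_score(points=55, progress=70, advisor_eval=9, winner_eval=8))  # 71.0
print(final_score(points=80, progress=60, advisor_eval=3, winner_eval=4))  # 57.0
```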
Overall, from the winners I have seen, there does seem to be an emphasis on projects that are socially oriented or geographically diverse. One goal the Pioneer founders have expressed multiple times is to create a global campus. Rather than allowing campus-driven knowledge and social interactions to be limited to people with access to coastal cities or urban megacities, Pioneer provides an opportunity for the same social lift without the limitations of location.
EDIT: Changed "Ivy League" campus to global campus.
Ok so then the purpose of Pioneer funding isn't to make a product that will actually do something, but just to have cute marketing and be located somewhere cool?
If that's their goal, then just go and make an ivy league global campus. Why run this circus?
>If that's their goal, then just go and make an ivy league global campus. Why run this circus?
The cynical answer: this way they get more participants, which makes them look more impressive as a platform. Of course those participants have no chance of winning, but it still makes for nicer numbers.
This reminds me of Product Hunt in its heyday, which turned a blind eye to voting manipulation, especially by VCs who were helping to push/promote a submitted product without any disclosure.
Of course, that's why it got VC funding in the first place.
It didn't seem like they were turning a blind eye; they seemed to be very much biasing towards VCs/certain makers, in particular by controlling who could submit things. They ended up with a leaderboard similar to Digg's in its heyday, where a few submitters controlled the top entries. Unlike Digg, since this came from the top down, I think it actually helped Product Hunt rather than hurt it. Their curation seemed well done, at least to someone like me on the outside.
The final step in the tournament has our experts providing a final review on the top applications, based on leaderboard rankings. After their votes have been applied to the leaderboard, we select the top-scoring players as Pioneers. We're experimenting with the number of winners per-week, so cohorts will vary in size.
So, yes, the leaderboard is manually updated following the "experts'" recommendations. It's in the FAQ.
I don't see anything like that. In that last image, in the same column "not in top 10" there's the project ranked #3 on the image above, and the project ranked #17.
In this image, two projects mention being upvoted by an "expert", and the two are in the top 10 indeed.
Overall, I find that the evidence fails to make the point. I may be wrong and the case may be strong, but right now I don't find it convincing that anything shady is happening.
I'm not surprised that the ultimate choice of giving out money is closely guarded, because the very small amount of voting happening makes it very susceptible to fraud.
(When a photo of an egg can get 20 million likes on Twitter, basing a contest on handfuls of votes would seem a bit... naive.)
That is not necessarily a bad thing. If the experts were highly correlated with what the rankings already were, there would be no point in using them. You want as many independent inputs as possible; the more correlated they are, the more redundant they are.
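A quick numerical illustration of that redundancy point (toy numbers, nothing Pioneer-specific): averaging two raters with independent errors halves the variance, while averaging two highly correlated raters barely helps.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two raters with independent unit-variance errors.
indep = rng.normal(0.0, 1.0, size=(n, 2))

# Two raters sharing 90% of their error variance (correlation ~0.9).
shared = rng.normal(0.0, 1.0, size=(n, 1))
own = rng.normal(0.0, 1.0, size=(n, 2))
corr = np.sqrt(0.9) * shared + np.sqrt(0.1) * own

print(indep.mean(axis=1).var())  # ~0.50: averaging halves the variance
print(corr.mean(axis=1).var())   # ~0.95: barely better than one rater
```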
What I tried to say is: you can't explain the points manipulation with "experts", because the "experts" changed little in that regard, as can be seen in that screenshot.
Disclaimer: I'm the founder of one of the winning projects you posted on your screenshot.
We've played every tournament since the second one, so I'm in a good position to speak: there have been times in previous tournaments when we were placing in the top 10 and, after a round of peer voting (that is, no expert voting), dropped to around 40th.
Also, there were times when we got more expert votes than this time and didn't win. This time we got fewer expert votes but placed first.
Maybe it's just one example and not statistically significant, but I just want to say that crowdsourced voting can make the scoreboard volatile without any form of manipulation.
I also have to say that every time they bashed us, we took it as constructive feedback and acted on it in a really positive way. Now we have a startup with better product-market fit.
Same feelings here. I've only been doing it since it changed to being indefinite, not month to month. I've been making steady climbs, though. I was up to 42 on the leaderboard and after voting dropped down to 95. It's only motivated me to do better.
They do mention that part of their algorithm is that if you're matched against someone ranked higher and get upvoted over them, you'll receive more points. So if you're the higher-ranked one and get downvoted against someone lower, you probably lose more points.
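Pioneer hasn't published the formula, but the behavior described above sounds Elo-like. A minimal sketch under that assumption (the constants and function names are mine, not Pioneer's):

```python
# Hedged sketch of an Elo-style update: beating a higher-ranked opponent
# earns more points, and a favorite loses more by losing to an underdog.
def expected_win(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return new ratings for A and B after one matchup."""
    e_a = expected_win(rating_a, rating_b)
    s_a = 1.0 if a_won else 0.0
    return rating_a + k * (s_a - e_a), rating_b + k * (e_a - s_a)

# An underdog (1200) upsetting a favorite (1600) gains ~29 points,
# and the favorite loses the same ~29:
print(update(1200, 1600, a_won=True))  # (~1229.1, ~1570.9)
```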
On top of that, they are giving away money. Why shouldn't they get to choose who they give it to? Sure they could be a little more transparent if they are really manipulating that much.
You should treat them more as marketing, and that works if you already have the time and resources to go on this marketing circuit.
Doing these to get the initial seed is a waste of time.
The biggest shock of coming to Silicon Valley for me was the industry around Silicon Valley: poor programmers, idea guys with no network, money, or talent (just like everywhere else), hackathons where PowerPoint presentations with mockups win, event hosts making a killing, all operating in parallel to the real action, which is still only accessible through your network.
Hmmm... they also use pretty much the same logo as the company I currently work for (https://www.enterpay.fi/in-english/; the English website is not very pretty, but you can see the logo there). We have a trademark in the EU, and I once sent an email notifying them about this potential conflict but never got any reply. Not that it really matters, but this company really fails to make a good impression.
I'm currently ranked number 24 on the leaderboard. I was in the top 10 until they started picking winners. You can see from the chart in the article that I (DaveJ) have the largest negative "rank change since start" with -21. My feedback emails on my progress have been very positive.
It's all good though. I understand that they are looking for pioneering ideas (hard sciences, research projects, AI, etc.). Perhaps B2B SaaS is a little less pioneering. I'm definitely going to stick with Pioneer though. As a bootstrapped solo-founder, having a weekly progress report to submit to my peers helps keep me focused and motivated. If I win a prize then that would be amazing, and if not then my startup will still benefit from my consistent progress :-).
edit: By the way, I'm working on a self-service tool for converting web apps to desktop apps: https://www.todesktop.com/
Thanks, I'll check out todesktop. I've recently been trying to get away from Chrome, and the lack of profile support in Safari is a blocker. The existing WebKit encapsulators are awful in different ways.
VCs aren't going to invest in a product because some top 10 list on a website says so. They probably consider all the projects individually and choose the ones that are the most likely to succeed.
Did you sign a contract with any guarantees? Read the legalese? I bet it's all in there.
Well, the point is that what they claim on their website is very deceptive, especially for people outside of SV. I believed them, and I bet other people did too. The post aims to explain how it really works.
This seems like a reasonable way to run a tournament/contest like this. I don't think you'd get good results if it was based solely on voting or something else that could be easily gamed. Almost all your winners would just be teams good at gaming the system!
But I say all that without knowing how the tournament was presented to the participants. I'd be pretty mad if they asked me to do one thing, I worked really hard to achieve it, and it turned out they wanted something else all along.
I won in December, and while it's true that there is score "manipulation" (we'd placed pretty high and didn't win; we won at our lowest ranking), I don't think this is a bad thing.
Pure voting cannot (right now; that will probably change in the future, IMO) account for all the factors that make a good project.
The Pioneer team and expert voters should have higher than normal voting power and I believe that that is what you are seeing.
> The Pioneer team and expert voters should have higher than normal voting power
If that's the argument, then votes from members with higher influence should be public, because from the user's perspective it's indistinguishable from manipulation.
Yeah, I think someone should develop a replacement for Pioneer that mostly does away with the prizes and VC enticements & instead just focuses on peer feedback and the progress update system.
Unfortunately I already have too much work remaining on my existing projects to take this on, but maybe in a year or two if no one else has done it by then.
I've noticed a lot of articles linking to medium.com lately. Is there a special way that you are all reading these (through the paywall) or do you all have subscriptions?
Can I please complain a little? Is Hacker News the small local news source for the Bay Area bubble? I don't know what Pioneer is, which scores are being manipulated, or what happens to winners (or losers). I assumed the electronics company known for its amplifiers etc. was up to some dodgy shenanigans. Zero context from the title. I don't mind clicking stuff to get more context, but I think some of it should be there without assuming everyone is in the loop at all times.
Based on your previous title of "Does Silicon Valley VC tournament manipulate the scores to pick winners?", I searched Google for "pioneer vc"[0]. The top links were for https://www.pioneerfund.vc, which is the wrong URL. It was confusing because that landing page also mentions YC.