

How I Used Amazon’s Mechanical Turk to Validate my Startup Idea - ma2rten
http://harperlindsey.com/2010/09/01/how-i-used-amazons-mechanical-turk-to-validate-my-startup-idea/

======
fezzl
Validation is not validation unless money and pricing are brought up. I don't
think she mentioned pricing anywhere in her surveys. I would "use" anything.
Paying for something, on the other hand, would have put me in a completely
different state of mind.

~~~
evgeny0
I'd go further: many people will happily agree to buy something for $x when
they think it's all hypothetical, but would never actually buy it. Hence the
oft-repeated advice "don't ask people if they would buy, ask them to buy".

~~~
fezzl
The only real validation is money in the bank... from 10 different people at
least (since we're validating a business after all).

------
espadagroup
I use mturk a lot at the startup I work for and while I think it is an
amazingly powerful tool, one thing to keep in mind for experiments like yours
is the number of actual unique worker Ids.

For example, for those 200 responses, I hope you really mean you looked at the
actual number of unique workers who accepted HITs, and thus you actually
published a (large) multiple of those 200.

In my experience, if you send out 1000 HITs, about 70 people will do all of
them.

There's no easy way to filter so that a worker can only do one HIT without
some API/iframe trickery.

Just some food for thought.

~~~
StavrosK
What about the "Number of assignments per HIT" field? It says "How many unique
workers do you want to work on each HIT?", which sounds like exactly what is
needed here...
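(A minimal sketch of how this field could be used programmatically, assuming boto3's MTurk client; the title, reward, and timing values below are made-up placeholders. The UI's "Number of assignments per HIT" corresponds to the `MaxAssignments` parameter, and since MTurk lets each worker complete at most one assignment of a given HIT, a single HIT with `MaxAssignments=200` collects responses from 200 distinct workers, avoiding the duplicate-worker problem of publishing 200 separate HITs.)

```python
def survey_hit_params(question_xml, respondents):
    """Build keyword arguments for MTurk's create_hit() so that
    `respondents` unique workers each answer the survey once."""
    return {
        "Title": "Short product survey",              # placeholder
        "Description": "Five questions about a web service idea",
        "Reward": "0.05",                             # USD, as a string
        "MaxAssignments": respondents,                # one per unique worker
        "LifetimeInSeconds": 3 * 24 * 3600,           # HIT visible for 3 days
        "AssignmentDurationInSeconds": 600,           # 10 min per worker
        "Question": question_xml,                     # QuestionForm XML
    }

# Usage (requires AWS credentials; not run here):
# import boto3
# mturk = boto3.client("mturk", region_name="us-east-1")
# hit = mturk.create_hit(**survey_hit_params(question_xml, 200))
```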

~~~
StavrosK
I just tried it and the workers were, indeed, unique. However, the feedback
seems to have been of somewhat low quality: for the "would you use it?"
question, almost everyone (understandably) said yes (why not, really?). Those
who answered no cited privacy concerns, but nothing fundamental.

For the "why would you use it?" question, they mostly repeated the use case I
stated above. This was only 10 people, as I wanted to test it out first, but
there's my data point for you.

------
ashish_0x90
FYI, It has been posted before - <http://news.ycombinator.com/item?id=1668588>

------
aufreak3
How biased is the survey likely to be? It is hard to take a sample of a few
hundred and be unbiased by any measure. Given that, I wonder what specific
kind of bias posting a survey on mturk throws up, since the survey takers are
actually "working".

------
edge17
I'm not sure what the writer meant by segmentation, but for what it's worth,
based on what I've seen mturk workers aren't really equally distributed across
the world. From my own observation, a large majority of the workers are in
India.

------
mmastrac
This is way better than polling based on false pretenses, as suggested in
previous submissions. It's a great way to solicit feedback. Thanks for the
submission.

~~~
atomical
Could you explain what you mean by this? The writer of the blog didn't publish
her questionnaire or responses so there's a question in my mind of how
effective it was.

~~~
StavrosK
She _did_ publish her questionnaire, it's right on the post.

~~~
atomical
Where? All I see is this: "gave a brief description of what the
website/service offering would be then." It seems like a key part of the
survey is missing along with detailed responses.

~~~
mmastrac
The survey was (according to the post):

* Gender

* Age

* Would they use the service as described above

* Give me 3 examples of how they would use the service.

* General feedback or ideas on the service, why they would or would not use it.

