1) Years ago I started a small side business with a few colleagues. It was a web-based service for a very specific group of clients. Best of all: the idea had already been validated. We had our first paying customer before we even built the product. And we quickly found a few other customers. But after a few sign-ups things were not looking so good anymore. Even though our early adopters were well known in the industry, other companies would not follow. Everyone we talked to loved the product, but, as we had to learn the hard way, nobody was willing to pay for it. Some even went so far as to implement possibly illegal solutions instead of paying us a few bucks.
Lesson learned: even if you talk to a few possible customers, question hard whether they really are representative of your target audience. Also, liking a product/idea is not the same as being willing (or able) to pay for it.
2) A while back I wanted to find out if there would be any commercial interest in an app I had already built. So I set up a landing page and collected email addresses. Since I had already built the app I could easily hand out demo accounts for every interested party. I quickly collected around 60 addresses and set them all up with a demo account. Only a single person ever logged in at all! I sent out reminder emails but did not get a single response. To this day I do not fully understand what went wrong.
Lesson learned: newsletter or beta signups from your landing page are NOT the same as validation for your idea.
I was part of a startup a few years ago with some really cool technology and good market validation. Somehow we got connected to a very well-known media and technology company and were brought in to sell our product to them. The founders, starstruck, pulled out all the stops on our presentation: flying people cross-country, buying up demo data, the works. They saw the potential install base times our license cost (even at a discount) and were thinking immediate exit.
During the demo there were oohs and ahhs at all the appropriate places especially around certain features we really wanted to show off. Questions were constant and detailed and follow ups were planned before we even left the room. High fives were given in the parking lot.
Over the following weeks calls were vague: "hard to coordinate all these people," "he's on travel," "she's changed departments."
Weeks turned into months, calls just stopped being returned.
The deal was put into back burner status with lots of finger pointing.
One day, a year or so later, one of our employees came back from a tradeshow pointing at a flyer they had picked up from that company's booth. It described a new feature in their technology offering that was an exact copy of some of what we had shown in our demo. The copy even used a term we had coined for the demo. The screenshot looked more or less like that piece of our product, reskinned for their technology.
Lawyers were consulted.
The end result was that they hadn't done anything wrong, just copied an "idea" they had seen. But as a result we were effectively locked out of their market segment. They had simply done the math: develop it in house or pay a license fee per use, and it had been cheaper to do it all in house. By buying relevant data and making a demo so complete you could make actual decisions off of it, we had even shown them what they should aim for, what worked, and what didn't. We helped them optimize their development process and made their decision to do it all in house even easier.
In both these cases, I would have taken them as signs that I had some unarticulated, untested hypothesis in my head, and that I'd have to do user interviews to figure out what the issue was. People are weird, and business is extra-weird. Debugging a value chain is often way harder than debugging software.
1) You are not looking for somebody to "like" your idea. You're looking for something to build a machine around. That usually means "I'm ready to write the check"
2) You are not validating the market, you are validating the machine. As you point out, it's possible to run into a few folks that have checkbooks. Doesn't mean you can find any more of them. The "machine" part is "I go out on the street corner every day and run into 5 new people that will give me money" This is something that can scale. To build that, of course, you have to talk to those first 5 people.
For what it's worth, the web is a sucky place to try to get anything done. Everybody is on and they're all trying to automate everything. While the landing page/email thing is great, probably much more useful to physically interact with as many people as possible.
It's counterintuitive to some extent, but consider a two-phase signup process: a landing page plus an email validation of some kind, or perhaps a two-step signup process on the landing page. There will be a noticeable drop-off between Step 1 and Step 2, and that's good. Those are probably people who would have washed out anyway, and you no longer have to waste time catering to them as false positives. 
I'm not saying this was necessarily your issue, but a lot of people run into problems with hypothesis testing by conflating it with marketing or growth hacking. It's not. You're not trying to optimize a hypothesis test for pure signup volume. You're trying to use a hypothesis test as a filter: for the right audience, for your problem/solution statement, and for the value proposition you've chosen to test.
You don't want signup to be a total pain in the ass, but at the same time, you don't want it to be so easy that it loses meaning as a signal of intent.
 Caveat being that you still want to probe, and perhaps ask people who've bounced between 1 and 2 why they did so. Sometimes it really is just a UX issue. But in my experience, if someone really has the problem you're addressing, he or she will stick through a two-step signup or validation process.
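One way to read that drop-off concretely is to treat only step-2 completers as qualified signal and the step-1 drop-offs as the likely false positives. A tiny sketch, with the function and field names invented for illustration:

```python
def funnel_stats(step1_started, step2_completed):
    """Summarize a two-step signup funnel.

    Only step-2 completers count as qualified leads; the people who
    bounce between step 1 and step 2 are the probable false positives.
    """
    if step2_completed > step1_started:
        raise ValueError("step-2 completions cannot exceed step-1 starts")
    drop_off = 1 - step2_completed / step1_started if step1_started else 0.0
    return {"qualified_leads": step2_completed, "drop_off_rate": drop_off}

# 100 landing-page signups, 30 of whom finished email validation:
print(funnel_stats(100, 30))  # 30 qualified leads; 70% dropped off
```

The noticeable drop-off rate is expected and, per the argument above, not something to optimize away.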
A great thing to do when you're validating products is to ask the prospect how much they'll pay. Then say, "OK, so if I put a contract in front of you that says you'll pay $x if I deliver this product, you will sign it?"
You'll find out very quickly whether someone is just being nice or whether they are serious.
Depending on context, you can also ask for a credit card number and then not charge it. People are very careful about giving out credit card info, so them giving it to you is a great sign that they're really serious.
As soon as someone starts trying to maneuver me to take money out of my wallet, I'm gone. It's not my problem that you don't know if your product is good or not. There is no way on earth I'm signing anything, giving you a credit card number, or what have you. I'm utterly shocked that this is considered a good way to gauge interest in a product.
Even if I really am interested, so what? I have a budget, my appetite exceeds my grasp, just because your product might solve a pain point doesn't mean I will buy it. I need to consider the needs of my entire business, weigh pros and cons, and so on. If you are pulling out papers to sign after a conversation, I'm so gone.
I'm reminded of some 3rd party home alarm installers that came around to my house a few months ago. We were thinking of re-activating the alarm in the house, so we talked to them. Didn't take long for the papers to come out, etc., and it didn't take much longer for us to tell them to pound sand.
I don't give my email out to landing pages (because on average the spamming becomes relentless), and now I'm supposed to give you a CC number just so you can do marketing research?
The point of asking people for money is exactly to separate the people who really are early adopters from those who are just enthusiastic.
A great example is Kickstarter. You may be unwilling to buy a product based on little more than a dream, but plenty of people are willing to take a gamble on something risky when they care enough about it. The rest of us wait until the product has come out, has been reviewed, and our friends are using it.
Not trying to be snarky or anything, I know yours is a common opinion so I'm just wondering what would be a less offensive way to test the waters?
Perhaps - do the conversation, leave some contact details, give them the papers but just say, "look, no pressure, I'll leave this with you and let you think about it." Then see how many people get back to you?
They showed a video of what the card would do and how it would work. I actually have no idea if it will work, or does work, but it looks like it works in the video. It looks like they have something awesome. And it won't be $2000 to buy one.
They said it should be ready within a year, and you can reserve one now for 50% of the sale price (regular price $100).
So I bought one, even though I am aware that the product might never come to fruition and that my $50 might disappear. A lot of other people bought, too. I have gotten updates and I have no reason to believe I won't get my product and I'm excited that I got in on it.
What did I buy, really?
I saw a guy on a video using Coin, showing how it would work, selecting different options on his card, answering all my potential questions. Man, it sure looks sweet and I can't wait to have one...
Did Coin really exist, or did they just do a video to show how it could exist? Does it matter? I have pretty good video equipment and coding skills...and I've even hired actors for videos and done pretty well...what would stop me from hiring an actor, shooting a commercial for something like Coin, and then collecting thousands of dollars to fund the actual creation of the product?
Nothing. I just didn't think of it and didn't act on it, and they did. I'm not saying that's what they did at all, I'm just saying that could be done.
That's market validation. They sold thousands of Coin based on their idea. I have no idea if they even had working code or product, or if they were just validating whether it would sell.
But that's evidence that you're probably not a profitable customer to chase (which, again, is fine).
If you're not taking out your wallet, you're a waste of the salesman's time; the earlier they can figure this out, the better.
I'm certainly unreliable that way. There are a number of Kickstarters where I've said, "OMG I'd pay for that!" But if I come back to it a couple of days later, I'll never actually click on the signup button. I'm even worse with things like this in person. I don't like disappointing people, so I'll try hard to find the most positive thing to say about their product.
When interviewing, I might ask people a question like, "What's a fair price for X?" But that for me would be more about the follow-up question: "How did you pick that number?" An insight into what they see as equivalent products, related value propositions, or personal value metrics would definitely help me think about pricing.
As for pricing, I've found that asking people "What do you think a fair and reasonable price for X would be?", followed by asking why, is the best way to approach it (in a qualitative setting, at least).
I think Blank is entirely right. But you have to be tricky.
At my last company we did user tests every Tuesday afternoon. When seeking subjects, we looked like a research firm, not a product company. When they arrived, there was no company name on the door. The interviewer would start out asking about a variety of products to get a baseline; ours would just be one of several discussed. Eventually, because we spent more time on our stuff, maybe half the people figured out that one product was ours, but by then we generally had enough honest opinion.
The point in this context is that, while it may still be unreliable, it would be more reliable than simply asking if they like your product. And it is the best you can do, since no one is going to actually pay for a product that doesn't exist yet.
And, as mentioned elsewhere, people can think they are about to pay for a nonexistent product. Once you have proven willingness, there are a variety of ways you can finesse the issue of not actually having a product.
Make them put their money where their mouth is.
With #1 you found a problem, but failed to find a market. It's hard to say without knowing the product and/or situation, but it sounds like your product was in the unfortunate position of being an enterprise B2B product that was not solving a problem in the SMB space. The typical strategy for a B2B product is to sell it to SMBs early on, then use that revenue to fund the massive feature set that large enterprises usually need. This is one of the things that happens in startups -- sometimes you don't know your market is a dud until after you have a product. It's not a sure thing.
With #2, you just didn't have a large enough sample set. Response rates for e-mails collected via landing pages like this are around 1-2% in my experience. It's a bit of a chicken and egg problem to be honest, and it's hard enough that many startups will throw a marketing budget into it.
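To put numbers on that: with 60 addresses and a 1-2% response rate, near-silence is roughly the expected outcome. A quick back-of-the-envelope sketch under a simple binomial model of independent responses (the 1-2% rates come from the comment above; everything else is illustrative):

```python
def expected_responses(n_addresses, response_rate):
    """Expected number of replies, and the chance of getting zero,
    assuming each address responds independently with the given rate."""
    expected = n_addresses * response_rate
    prob_zero = (1 - response_rate) ** n_addresses
    return expected, prob_zero

for rate in (0.01, 0.02):
    exp, p0 = expected_responses(60, rate)
    print(f"{rate:.0%}: expect {exp:.1f} replies, "
          f"{p0:.0%} chance of total silence")
```

At a 1% rate you'd expect well under one reply from 60 addresses, and total silence is more likely than not, so a single login is entirely consistent with the base rate rather than evidence of a dead idea.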
Sure, Beta and Newsletter signups are not full validation for your ideas. But the reality is that you'll never get full validation.
For those who are looking to take the plunge and quit their job, or are deciding which project to work on, a simple test like this can be used as a proxy for demand.
Just because people click sign up on your landing page doesn't mean your product will succeed. But it is infinitely more indicative than just guessing.
If you're going to do a startup, you're going to shoot and miss several times before you succeed. I've been there before. But having Beta signups at least shows you're aiming in the right direction.
The importance of this comment lies with the false reality of believing at face value that someone loves your product and says they'd use it. Until their money makes it into your bank account, take everything they say with a grain of salt.
It's also worth noting that changing the habits of individuals is a massive feat of social engineering. I've seen products that save time and money and yet the market will stick with what they've always done.
Building a product is an art. There are way too many things one should consider based on one's experience and intelligence; not everything can be quantified and calculated (well, it can be, but still with assumptions, and once you assume...). As one of my favorite photographers says, "There are no rules for good photographs, there are just good photographs." The same can be said about building a product.
Suppose that you make something that you're convinced is the best product in the world. You work for years, you spend millions. And then you launch. It turns out that nobody cares; you go bankrupt. Someone might ask: "Was it really a great product?" We could argue the answer to that, but I think it's the wrong question.
Businesses need to be sustainable. Some ideas work, some don't. I think the best thing we can do is to a) maximize our chances of success, and b) minimize the cost of our failures. I believe the only way to do that is to continuously and aggressively test our hypotheses. Otherwise, investment just ends up being the way we fool ourselves a little longer.
Products aren't art. They're commerce. If you want to do art, just do art.
Sure, validation is not a silver bullet. However, lots of startups make it or break it due to luck, and no one (yet) can eliminate randomness in a stochastic (debatably) system like the technology entrepreneurship ecosystem. Validation helps decrease the chance of failing, though. The famous quote by George Box reads, "Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful."
Also, validation is not just listening to what people tell you; it's also observing the actions of users when they interact with the MVP.
Yeah, I think he's misinterpreting what his photographer friend means. There are TONS of rules for good photographs, everything from technical ("sufficient lighting") all the way up to art theory ("rule of thirds"). The guy's point was probably that any subject, at any time, can possibly be photographed in a way that conforms to the requirements of an artistically pleasing image. Trying to fit that notion to the idea that entrepreneurs should just leap for the stars without checking to see if their figurative lens cap is off is not really very sound.
I tend to agree with you to some extent, even about the quote in the comment. But startups as an example fail for me because, as we all know, most startups fail anyway.
p.s. there are rules for good photographs, but following the rules can only put you on par with other people. Learn the rules so that you can break them. Learn to validate so that you know when NOT to validate.
With that in mind, one should break the rules only if they have a high degree of confidence in their ability to discover a model that is better than the standard model, or an understanding that the entire modelling approach is tainted. However, the Dunning-Kruger effect hypothesis states that the less competent are irrationally sure of themselves, and vice versa. Are we in a situation where most actors on the market are trying to break the rules to their own detriment?
There are also lots and lots (and more lots) of bad photographs. There might not be specific rules for taking really stand-out pictures, but there sure are good rules for what not to do.
Some people instinctively understand them. Some people have spent years painstakingly learning them, and now choose to break them from time to time.
But there are plenty of rules, and if you're not already a genius or an expert, learning and following those rules is a reliable way to improve your photography.
-> It still wasn't easy; people were used to candles at that time. Edison did not study the candle-making process or people's problems with candles. His genius was in putting together the solution.
Apart from this, the article is nice indeed.
^This, and this http://www.helpscout.net/blog/why-steve-jobs-never-listened-...
I run a football pool at my office, and I was manually creating each week's list of games. So I scratched my own itch and automated this as much as possible (some manual work remains). I've made the results available on footballpickempool.net. It simply offers weekly pool sheets for the full season, and results will also be posted. I know many people could find this useful, but not necessarily a paying audience for such a simple offering.
The validation idea I'd seen before was to offer a link to a "feature" and see how many people click on it. So I have a "Start a League" link. The page lets them know that this feature is not available yet. And I also have a Google form on the page asking if anyone would pay a small amount for this feature.
Hopefully I see some results as the upcoming season starts.
Even if they did validate early on, what made them successful often changed drastically from what was known when the company was started.
In other words, what percentage of the time does this pay off?
1) Don't delude yourself into thinking you're "changing" the world (for some ridiculous definition of changing). It's most likely that you aren't. Focus on the business fundamentals.
There's an awful lot of startups making very ho-hum products -- small incremental improvements to existing things or dead-end paths that nobody would ever possibly want. They've convinced themselves and their VCs that if only the entire world starts using their chat app or appointment reminder, world peace would break out and we'd live in a Bill and Ted future where rock music unites us all. They then build a business model that actually requires some percentage of humanity to get on board with their app to become profitable.
They also forget that the VCs don't care about changing the world; to them, the company is the product, and in the end the company will be sold for parts to the highest bidder. The business needs to optimize for this case, because this is what the end game looks like. Not a neon-colored future where everybody is playing air guitar in peace and tranquility because they used your app.
2) If you're bringing lots of technology into the startup, it's not necessarily an advantage. Technological debt can absolutely cripple you. And by the time you've realized it, you're completely out of money and left with old technology that nobody wants.
I know of more than a few startups that get kicked off with a million lines of existing software. They think it's a matter of some source cleanup and some new paint on the front end and it'll be easy street. Inevitably, they spend most of their VC investment bringing this legacy to market and getting customer #1. Feedback from early customers is positive, but with one caveat: "this software needs to do x," where x requires a fundamental rewrite of the entire product. By the time a decision is made to start a ground-up rewrite to support x, the company is a couple of years in and has lost all of the momentum a startup needs when coming out of stealth mode.
rewrite = reboot of the company.
You accomplish the rewrite, pitching it as the next version, but shifting entire technology stacks doesn't feel like a version bump to the customers (it's a huge investment for them as well). All the sales activity you've done over the last year is also junk, unless you feel like supporting the old and new versions simultaneously, and you probably don't have the resources to do that well.
By the end, you're out of VC, coasting on what your sales team is pulling in, but not moving revenue in enough of a positive direction to exit or raise another round. You try to sell it on the cheap and recoup the VC, but buyers are wary of the now very old technological debt you've accumulated and the small customer base you're bringing to the deal. Along the way you pick up some very onerous, non-friendly software licenses for some of your components -- % of gross revenue, or something even more onerous like GPLv3, which makes your entire technological investment worthless. Buyers doing due diligence decide not to buy. Your company is a zombie at this point.
A layman's approach will almost certainly be evaluated on a subjective/emotional basis. i.e. "We are doing something great!" "Everyone is happy." "I have so many customers" "Funding is in!" "I got a great review on tech crunch three months ago"
A more scientific approach is to first define the question in such a way that it can be measured.
So let's say a business that does well does three things:
* Does not lose money,
* Has happy customers (low support calls/emails, high reviews, the product is used),
* Has happy employees (low turnover, quality work gets done (measurable by issue tracking)).
Now you can ask: does my business meet the requirements for a successful business? If not, write down those numbers. That is now your control and baseline.
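Those three criteria can be written down as a concrete, measurable check. A minimal sketch in Python, where every metric name and threshold is a hypothetical placeholder you'd replace with your own baseline numbers:

```python
# Hypothetical health check: metric names and thresholds are
# illustrative placeholders, not from any real dashboard.
def business_is_healthy(metrics):
    """Return True only if all three success criteria are met."""
    checks = [
        metrics["monthly_profit"] >= 0,                # does not lose money
        metrics["support_tickets_per_100_users"] <= 5
            and metrics["avg_review"] >= 4.0,          # happy customers
        metrics["annual_turnover_rate"] <= 0.10,       # happy employees
    ]
    return all(checks)

baseline = {
    "monthly_profit": -2000,
    "support_tickets_per_100_users": 12,
    "avg_review": 3.1,
    "annual_turnover_rate": 0.05,
}
print(business_is_healthy(baseline))  # this baseline fails two of the checks
```

Writing the baseline down this way is the point: whatever numbers come out become the control you compare each experiment against.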
It is time to experiment. It's best not to change more than one thing at a time in science so you know exactly what the effect is vs the control.
Let's say you realize you're losing money and your customers aren't happy. You do some research and come to believe (not know) that it's because the login screen is incredibly awkward. Google Analytics shows that the bounce rate for the front page is huge, so there's some data to back this up. It's time to form your first hypothesis.
My business isn't doing well because the login page is so incredibly awkward. If I fix the login page the business will do better.
Remember that both "awkward" and "do better," while subjective declarations, have actual metrics to back them up: the bounce rate in analytics and the previously defined metrics for a good business.
Now you do the experiment defined earlier: fix the login page.
Now you collect data. If your bounce rate goes down and your customers are happier and start paying the monthly fee, then you have solved your problem, and you know with a very high degree of certainty how you solved it.
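To decide whether the post-fix bounce rate is genuinely lower rather than noise, a standard two-proportion z-test works. A rough sketch using only the standard library; the visit and bounce counts are made up for illustration:

```python
import math

def bounce_rate_improved(bounces_before, visits_before,
                         bounces_after, visits_after):
    """One-sided two-proportion z-test: did the bounce rate go down?

    Returns (z, p_value); a small p_value means the observed drop is
    unlikely to be random noise under the null hypothesis of no change.
    """
    p1 = bounces_before / visits_before
    p2 = bounces_after / visits_after
    pooled = (bounces_before + bounces_after) / (visits_before + visits_after)
    se = math.sqrt(pooled * (1 - pooled)
                   * (1 / visits_before + 1 / visits_after))
    z = (p1 - p2) / se
    # P(Z >= z), using the normal CDF expressed via math.erf
    p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return z, p_value

# Made-up numbers: 60% bounce rate before the login fix, 50% after.
z, p = bounce_rate_improved(600, 1000, 500, 1000)
print(z, p)
```

With samples of that size a ten-point drop is far outside what chance would produce, which is exactly the "very high degree of certainty" the comment describes.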
Now let's say you looked at the data at the end of this experiment and noticed a decrease from the control you set a while back. Well, obviously that was not the problem (or at least not 100% of the problem; part of science is knowing how certain to be about things). Maybe next you can define the problem as: the site is buggy and issues aren't resolved quickly. If I observe my employees, I can see that they spend a lot of time staring at the 98-inch projector screen I set up in my 'cool office' instead of coding. Now you can do a trial run of turning off the projector on some days vs. others and seeing if the work output goes up, stays the same, or goes down. Or you can discuss the problem with your employees and see if they have input. Then define the experiment using that; just don't forget to actually measure something objective at the end of the day.
This way of thinking is invaluable. You can drill down exactly to the core of the problem. Waste fewer resources. Argue less. Most importantly, you can actually sit down at the end of the day and prove that you have solved a problem with actual data that cannot be easily disputed, and if you've really done it right, the experiment can be repeated with the same result each time.
Is it the way of thinking or the things you assumed the business already had that were invaluable? You assumed
* The existence of a large enough steady stream of paying customers to quickly gather statistics on. "Steady" is especially atypical in many environments.
* Enough runway that you can afford to tackle issues one-by-one (i.e. using a control) rather than scrambling and guessing at what is going to work. If users come in "bursts" due to marketing campaigns or whatever then this issue is compounded because each burst can cost a significant fraction of your runway.
* The business is at steady state (e.g. you aren't planning a "grab the market" phase that runs at a loss and then a "make money" phase that leverages the brand recognition, network effects, etc that you've built)
The article linked above already goes into some detail as to how to use this to simply evaluate an idea. I was just expounding on another possible use just to show how invaluable this is. Heck you can even use it in relationships. You can tell if someone is lying. You can use it to tell if people like you. You can even use it to lose weight. I'm using it right now to see if I need to drink more water daily or if something else is wrong. It's basically a super power.