I think the second biggest lie is the opposite of the first one: that people don't care. My side projects are in game dev, and sometimes it feels like people are actively trying to ignore your stuff. The truth is, there are so many amazing, next-level games out there that being merely great isn't enough. I used to price my games at $1 because I was desperate to be heard. But that isn't the real cost for most people. For most people, the real cost is opportunity cost. Yeah, I could play your game, but I'd rather spend my time playing the game that's 10x better.
A blog? A "sneeze page"? What's the point if you don't have anything interesting to write?
A/B testing a landing page and newsletter? Is it that difficult to get honest feedback from people who care?
The rest is just "industry hacking".
Individual feedback doesn't replace large-scale A/B testing. If you're getting feedback from people who care (which may imply they know you personally), they may also give you biased or unrepresentative feedback.
They just shouldn't be too confident they've made the correct choice.
You will just be acting under reduced certainty, but if you have to act, any information is better than no information.
(I'd be very interested to hear your statistical explanation).
I'm not a statistician, but I've run a lot of A/B tests.
If you are starting from a neutral position, considering two possible alternatives with neither presumed to be more favourable than the other, then any statistical test based on using one outcome as null and the other as alternative hypothesis is fundamentally inappropriate. Any such test inherently favours one outcome over the other, rather than starting from a neutral position.
As closed is trying to explain, if you really do start from neutral then even a tiny number of data points is still better than no data at all. You shouldn't have too much confidence in whether you're really making the right decision, but if you have to make a decision, you are still more likely to make the right one if you go with what the data tells you, even if it's only telling you by a very small margin.
The way I see it, you need to prove that A is better than B by a sufficient margin to be distinguishable from pure noise.
So, imagine you put up a landing page with two variants. Each one gets 500 visitors. You get a conversion on one, but not the other. Is it your suggestion here that there is some significance to that single conversion?
I think the problem is, you have no idea if that user would've converted had she landed on the opposite variant. That is, you can't disprove the idea that your test makes no impact at all.
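To put a number on this scenario (my own sketch, not something from the thread): with 500 visitors per variant and a single conversion in total, a one-sided Fisher exact test gives a p-value of 0.5. In other words, under the null hypothesis that the variants are identical, a result at least this "extreme" happens half the time, which is exactly coin-flip territory.

```python
from math import comb

def fisher_one_sided(conv_a, n_a, conv_b, n_b):
    """One-sided Fisher exact test: probability of seeing at least
    conv_a conversions in variant A, given the totals are fixed
    (hypergeometric tail probability)."""
    total_conv = conv_a + conv_b
    total_n = n_a + n_b
    p = 0.0
    for k in range(conv_a, min(total_conv, n_a) + 1):
        p += comb(total_conv, k) * comb(total_n - total_conv, n_a - k) / comb(total_n, n_a)
    return p

# 1 conversion out of 500 on variant A, 0 out of 500 on variant B
print(fisher_one_sided(1, 500, 0, 500))  # 0.5 -- no significance at all
```

The intuition matches the code: with one conversion total, the only question is which group that one visitor was randomly assigned to, and each group held exactly half the visitors.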
And in that situation, yes, if you run both versions with randomised visitors and you observe a small but non-zero sample where one converted and the other did not, that is evidence that one version may be better than the other. It's not particularly strong evidence, but it is a non-zero amount of evidence in one direction over the other, and that's better than having nothing at all to separate the two cases, which is where you started.
Therefore, if you must choose whether to adopt one version or the other at that stage, then in the absence of any better evidence, the version that has converted is more likely to perform better than the version that has not, and logically you should adopt the one that converted.
Of course in reality you would probably prefer to collect stronger evidence before making a decision if that is possible. But if it's not then, as closed wrote before, any information is better than no information at all.
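A rough way to quantify "weak but non-zero evidence" (my own sketch, assuming uniform Beta(1, 1) priors on each variant's conversion rate, which is one conventional neutral starting point): with 1/500 conversions on A and 0/500 on B, the posterior probability that A's true rate is higher comes out around 0.75. Better than a coin flip, but far from certain.

```python
import random

random.seed(42)

def prob_a_beats_b(conv_a, n_a, conv_b, n_b, draws=100_000):
    """Monte Carlo estimate of P(rate_A > rate_B). With a uniform prior,
    the posterior for a conversion rate is Beta(1 + conversions,
    1 + non-conversions)."""
    wins = 0
    for _ in range(draws):
        p_a = random.betavariate(1 + conv_a, 1 + n_a - conv_a)
        p_b = random.betavariate(1 + conv_b, 1 + n_b - conv_b)
        if p_a > p_b:
            wins += 1
    return wins / draws

# 1/500 on A vs 0/500 on B: roughly 0.75, i.e. weak evidence favouring A
print(prob_a_beats_b(1, 500, 0, 500))
```

Note that this is exactly the "act under reduced certainty" position: you'd pick A, while being aware there's still roughly a 1-in-4 chance B is actually better.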
I think you and the other guy want that single conversion to be evidence, but in reality, it's statistical noise.
A coin flip assigned that user to that variant. If they were going to convert anyway, you will be deriving meaning from pure coin flip chance, and you have no way of knowing with a single conversion whether this is true.
Again, it's not about going in with an assumption of which is better, it's about realizing that in split testing the biggest challenge is disproving the null hypothesis.
It is evidence, just like any other properly collected data point. It's just very weak evidence, is what we're saying.
Of course in real world situations there may be a lot of variance and the correct answer may well turn out to be the other one. But in the absence of additional information, that is true for literally any number of samples that is less than whatever proportion of the population would give you absolute proof that your chosen answer is correct. If you have 50%-1 samples and every single one went with option A, you're still wrong if the other 50%+1 would have gone for option B.
What you're calling "noise" is an ill-defined concept. Qualitatively, in a two-way test there is no difference between a result based on a single sample and one based on 50%-1 of the population: either way, you still don't know for sure which answer is the right one. However, you're going to be much more confident about having the right answer in the latter case, which is what I think closed was trying to explain to you.
But if you're running a test with null and alternative hypotheses, you are going in with an a priori preference for one outcome over the other. You are literally saying that if the result is close enough, you will prefer not to reject the null hypothesis, and therefore whichever variation you have arbitrarily chosen to be your null hypothesis will be the answer.
That is self-evidently not a neutral assessment of option A vs. option B, and therefore there will be some cases where your test is more likely than not to make the wrong decision. In short, you are using an inappropriate test for the situation that closed was describing.
>> You are literally saying that if the result is close enough, you will prefer not to reject the null hypothesis, and therefore whichever variation you have arbitrarily chosen to be your null hypothesis will be the answer.
This is a misunderstanding. The null hypothesis is that your two variants have no statistical impact on conversion and any edge you see is just random. That is the hurdle you have to overcome to gain any useful direction from A/B testing.
In any case, we seem to be talking at cross-purposes here, so perhaps we'll have to agree to disagree on this one.
The key to understanding this situation statistically is to reframe the way you think about tests: away from an all-or-nothing NHST, and toward either confidence intervals or Bayesian estimation.
That is, some kind of measure of (loosely) uncertainty around a parameter (or entire model) of interest.
Is the available data more useful than a coin flip, which would be the alternative method of making a decision?
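One concrete way to express that uncertainty (a sketch I'm adding, using the standard Wilson score interval rather than anything specific from this thread): compute a 95% interval for each variant's conversion rate and look at the overlap. For 1/500 vs 0/500, the intervals overlap almost entirely, which is the interval-based way of saying the data barely beats a coin flip.

```python
from math import sqrt

def wilson_interval(conversions, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p_hat = conversions / n
    denom = 1 + z * z / n
    center = (p_hat + z * z / (2 * n)) / denom
    half = z * sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n)) / denom
    return max(0.0, center - half), center + half

a = wilson_interval(1, 500)  # variant with one conversion
b = wilson_interval(0, 500)  # variant with none
print(a, b)  # the two intervals overlap almost entirely
```

The Wilson interval is used here instead of the naive normal approximation because it behaves sensibly at very low counts, including zero conversions.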
On the other hand, a coin-flip is probably the better tool. If you can't generate enough data for a statistical sample, then you're probably wasting your time creating an alternative version and setting up an A/B test.
Half of this is common sense, 'make an about page, make a contact page', that is basic...
Then there's a ton of stuff here that is hypothetically cool to do but practically speaking will not be productive.
As a marketer I can tell you that there is some Pareto optimization that can be done here. It's likely, I think, that if you did this whole checklist you'd find 20% of your efforts ended up bringing in 80% of your conversions. The trick is finding what that chunk of extremely lucrative marketing activities is for your product/industry.
There's definitely some common sense stuff here, but for devs who are marketing their first side project, it might be helpful to have more rather than less.
Finally, I welcome PRs on the project! It's open source and I'm looking for collaborators to help improve it. I'm a dev, not a marketer, so I'd love for an industry pro to improve it.
But akin to what others have said, it's not a checklist! I think I'd need 2 additional full-time staff on my side project working on nothing but this to implement it.
To make it more specific to side projects I think it could use a preliminary strategizing paragraph. E.g., if you only have 2 hours / week to devote to marketing, here's how to use this list...
That said, there is still value in seeing many potential channels laid out in one place, so that you can consider all of them, weigh them against each other, and potentially try out a bunch before zeroing in on the best ones.
The tools and implementation choices are less important IMO; for some things you could even start with a Google Drive doc if that makes you move forward faster.
Here's a playlist from Google on how to implement AARRR in Firebase.
* Use the results of your research to make sure you are building something that customers will actually want/need enough to pay for.
Something like "Attend meetups or conferences for your target market" can be extremely dangerous because this is a great way to spend a lot of time accomplishing nothing just to tick something off the list. Whereas something like "cold calling 20 customers" is so important it should be in bold.
I would recommend this to be a list of marketing ideas depending on your phase, rather than an actual checklist of things to do.
I mean, for example, if your project is some plugin that makes pretty graphs, what you can do is email strangers screenshots comparing your plugin's graphs with the graphs they're using now. No pressure to buy, just a cold email suggesting it could help them.
That is just going to be way more effective than putting up some ads on Instagram.
I guess what I was trying to say is that if faced with something in checklist format, you tend to do the easiest things first, instead of the scary ones with the most impact.
Also cold outreach doesn't have to be over the phone. Email, Twitter, FB Messenger, LinkedIn mail, etc. could work better.
I have launched hundreds of products for myself and clients, and followed a formula for each one, and I believe it had about as good a chance of a successful launch as having a monkey throw darts at a wall. You always end up analyzing what you did when your launch succeeded, and your resulting formula is just the Texas Sharpshooter Fallacy with checkboxes.
The only real thing that truly helps a launch is having an audience already. It might not make the product succeed, but it will at least aid in getting the initial signups and feedback.
I have also launched "free" products with no audience of my own, and instead found one in forums. After posting about the product, people would typically sign up in droves. And once I had a critical mass of signups, I then had an audience to market to. That would then lead me to selling a bigger version of what we were giving away for free. I call this piggybacking.
There are a lot of "hacks" you can do to help your launch along, but I don't believe any of them are better than having a baked-in audience already.
A lot of these types of tips are almost cargo cult type checklists.
OP is thorough though and fairly comprehensive - it is interesting to read your thoughts on this too as someone who has used these techniques.
But as I have found in consulting with many "side-project" developers, they want a condensed list of at most 5 things they can do to guarantee success.
Since on top of building the product themselves, they are also the ones having to build the email list, build the marketing campaign, etc. Like grandparent, they are looking for a faster list to check off.
So my fast list is:
[ ] Piggyback on an existing audience.
Making the list was easy. Prioritizing it and executing are what makes or breaks your project.
Lol, really? I don't see the benefit of doing that. It's way faster to read text than to watch a video.
It's easy to get excited about our side project and want to make it real. But that's a long way ahead, and pursuing it as a side project is impractical. Not only does it require more time than we have, but we also have to spend more time on things like marketing than on the fun stuff that led us to start the project in the first place.
I'd go so far as to make profit an explicit non-goal when starting a side project. You will have more fun, and there will be a wider range of ideas you can explore.
I'd also reduce the font sizes. This is not a presentation page but an information dump. The more that fits on a single page, the better.
One can do perfectly well without 90% of this checklist.
At the very least say something useful if you're going to do that. This is worse than those bots that spam comments on blog posts with "great content" and a link back to their spam site.
edit: I see you copied and pasted your post from the same article elsewhere. You're basically a spammer.
I think if you start off just making something for yourself, it may be hard to monetize, because it's only optimized for how you do things. On the other hand, if you do something only for monetization, it's super easy to get demoralized and just stop. Furthermore, monetization isn't an easy thing to optimize for; it's very hard to see where the sweet spot is.
But if you can strike the right balance between creating something you find deeply interesting and something a lot of other people would value, you can end up with both personal satisfaction and something sustainable.