Side Project Marketing Checklist (sideprojectchecklist.com)
449 points by Zweihander on Aug 13, 2017 | 68 comments

I'm always unsure about marketing. I do think the balance between the thing you're creating and the marketing of it is a clean 50/50 split. Creating something great is extremely important, but telling people about it is equally so. One of the biggest lies is "if you build it, they will come." You actually have to grab people by the neck and show them what you've built.

I think the second biggest lie is the opposite of the first one - that people don't care. My side projects are in game dev, and sometimes it feels like people are actively trying to ignore your stuff. The truth is, there are so many amazing, next-level games out there that being merely great isn't enough. I used to price my games at $1, because I was desperate to be heard. But the fact is, that isn't the real cost for most people. The real cost is opportunity cost. Yeah, I could play your game, but I'd rather spend my time playing the game that's 10x better.

Maybe not 100% related to the original story, but have you seen Ryan Clark's article about marketing your indie game? It's by far the best one I've come across, with the best info as far as I know.


Can you point a newbie indie gamer in the right direction to finding a good selection of quality indie games?

That list explains perfectly why I don't want to work on "marketing stuff" anymore. So many opportunities to waste time and money on trivial things.

A blog? A "sneeze page"? What's the point if you don't have anything interesting to write?

A/B testing a landing page and newsletter? Is it that difficult to get honest feedback from people who care?

I feel this list is intended for full-on, VC-yearning startups. My admittedly limited experience is that if you're going for more of a small business product, you could eliminate about 75% of this list.

The rest is just "industry hacking".

What's the 25% left?

Depends on your product, I guess? For me, it's usually: make a splash page, make sure your assets (app icon, screenshots, trailers) are eye-catching and professional, write about different aspects of your product on your blog, condense your pitch down to a paragraph at most, e-mail every publication you can think of (and follow up), post in lots of topical forums and comment sections (esp. HN and subreddits), submit your product to publications that give out awards, try as much as possible to catch the eye of Apple's promo team if you're in the app business... but, caveat emptor, I'm also not great at it yet. It's a hard job.

> A/B testing a landing page and newsletter? Is it that difficult to get honest feedback from people who care?

Individual feedback doesn't replace large-scale A/B testing. If you're getting feedback from people who care (possibly implying they know you personally), it's also possible that they could deliver biased or unrepresentative feedback.

For large-scale A/B testing you first need a certain number of visitors to reach the necessary statistical significance; otherwise its feedback is unrepresentative as well. A lot of side projects probably don't have enough visitors for that.
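To get a rough sense of scale, the standard two-proportion power calculation shows how much traffic a test needs before it can detect a realistic lift. A minimal sketch (function name and defaults are mine, Python standard library only):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect a lift from p1 to p2
    with a two-sided two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Detecting a lift from a 5% to a 6% conversion rate needs thousands of
# visitors per variant, which many side projects never see:
print(sample_size_per_arm(0.05, 0.06))  # ~8,000 per arm
```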

I disagree: if a person has to make a decision under uncertainty, and a priori favors neither variant A nor B, then they might as well use any visitor information available to them to guide their choice.

They just shouldn't be too confident they've made the correct choice.

You are just using noise then. It's not a matter of opinion, it's statistics.

If you are waiting for N observations so that an NHST will have some level of power, and you assume each observation is drawn from the same distribution (as your test likely does), then you do not see each observation as noise.

You will just be acting under reduced certainty, but if you have to act, any information is better than no information.

(I'd be very interested to hear your statistical explanation).

The trouble is disproving the null hypothesis. In your test, if one variant beats the other, you take that as a weak signal that it may be better. The data doesn't support this: without applying a significance standard to your p-value, you cannot reject the null hypothesis that your variants perform no better or worse than each other.

I'm not a statistician, but I've run a lot of A/B tests.
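For context, the test usually applied here is a two-proportion z-test against the null hypothesis that both variants convert at the same rate. A minimal sketch (the normal approximation is crude at counts this small, where an exact test would be more appropriate, but it illustrates how far a single conversion is from significance):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value under H0: both variants share one conversion rate."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_a / n_a - conv_b / n_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# One conversion in 500 visitors vs. zero in 500: nowhere near p < 0.05.
print(two_proportion_p_value(1, 500, 0, 500))  # ~0.32
```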

You're ignoring closed's point that "a priori favors neither group A or B".

If you are starting from a neutral position, considering two possible alternatives with neither presumed to be more favourable than the other, then any statistical test based on using one outcome as null and the other as alternative hypothesis is fundamentally inappropriate. Any such test inherently favours one outcome over the other, rather than starting from a neutral position.

As closed is trying to explain, if you really do start from neutral then even a tiny number of data points is still better than no data at all. You shouldn't have too much confidence in whether you're really making the right decision, but if you have to make a decision, you are still more likely to make the right one if you go with what the data tells you, even if it's only telling you by a very small margin.

Ok, so walk me through this in practice.

The way I see it, you need to prove that A is better than B by a sufficient margin to be distinguishable from pure noise.

So, imagine you put up a landing page with 2 variants and each one gets 500 visitors. You get a conversion on one, but not the other. Are you suggesting there is some significance to that single conversion?

I think the problem is, you have no idea whether that user would have converted had she landed on the other variant. That is, you can't disprove the idea that your test makes no impact at all.

You're still thinking in terms of one version being the default and the other an alternative that must be positively proven to be better. If you are in a situation where you have cases A and B and no particular reason to believe a priori that either is more likely to be better than the other, that's a fundamentally different situation.

And in that situation, yes, if you run both versions with randomised visitors and you observe a small but non-zero sample where one converted and the other did not, that is evidence that one version may be better than the other. It's not particularly strong evidence, but it is a non-zero amount of evidence in one direction over the other, and that's better than the nothing at all that you had to separate the cases to start with.

Therefore, if you must make a choice about whether to adopt one version or the other at that stage, then in the absence of any better evidence, it is more likely that the version that has converted performs better than the version that has not and logically you should adopt the one that converted.

Of course in reality you would probably prefer to collect stronger evidence before making a decision if that is possible. But if it's not then, as closed wrote before, any information is better than no information at all.
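The "weak but non-zero evidence" claim can be put into numbers. A small Monte Carlo sketch, assuming independent uniform Beta priors on each variant's conversion rate (function name and prior choice are mine):

```python
import random

random.seed(0)  # fixed seed so the estimate is reproducible

def prob_a_beats_b(conv_a, n_a, conv_b, n_b, draws=100_000):
    """Posterior probability that variant A's true conversion rate exceeds
    variant B's, using independent Beta(1 + conversions, 1 + misses) posteriors."""
    wins = 0
    for _ in range(draws):
        rate_a = random.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = random.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins += rate_a > rate_b
    return wins / draws

# One conversion in 500 visitors vs. zero in 500:
print(prob_a_beats_b(1, 500, 0, 500))  # ~0.75
```

Roughly a 75% chance that the converting variant really is better: more informative than a coin flip, but nothing like certainty.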

Have you ever watched a test against a lot of traffic? With variants of 50k test and 50k control each day, you can see wild swings from one day to the next until you reach statistical significance.

I think you and the other guy want that single conversion to be evidence, but in reality, it's statistical noise.

A coin flip assigned that user to that variant. If they were going to convert anyway, you will be deriving meaning from pure coin flip chance, and you have no way of knowing with a single conversion whether this is true.

Again, it's not about going in with an assumption of which is better, it's about realizing that in split testing the biggest challenge is disproving the null hypothesis.

> I think you and the other guy want that single conversion to be evidence, but in reality, it's statistical noise.

It is evidence, just like any other properly collected data point. It's just very weak evidence, is what we're saying.

Of course in real world situations there may be a lot of variance and the correct answer may well turn out to be the other one. But in the absence of additional information, that is true for literally any number of samples that is less than whatever proportion of the population would give you absolute proof that your chosen answer is correct. If you have 50%-1 samples and every single one went with option A, you're still wrong if the other 50%+1 would have gone for option B.

What you're calling "noise" is an ill-defined concept. Qualitatively there is no difference for a result in a two-way test between a single sample and 50%-1. You still don't know for sure which answer is the right one. However, you're going to be much more confident about having the right answer in the latter case, which is what I think closed was trying to explain to you.

> Again, it's not about going in with an assumption of which is better, it's about realizing that in split testing the biggest challenge is disproving the null hypothesis.

But if you're running a test with null and alternative hypotheses, you are going in with an a priori preference for one outcome over the other. You are literally saying that if the result is close enough, you will prefer not to reject the null hypothesis, and therefore whichever variation you have arbitrarily chosen to be your null hypothesis will be the answer.

That is self-evidently not a neutral assessment of option A vs. option B, and therefore there will be some cases where your test is more likely than not to make the wrong decision. In short, you are using an inappropriate test for the situation that closed was describing.

Alright, last comment from my side, just to clarify:

>> You are literally saying that if the result is close enough, you will prefer not to reject the null hypothesis, and therefore whichever variation you have arbitrarily chosen to be your null hypothesis will be the answer.

This is a misunderstanding. The null hypothesis is that your two variants have no statistical impact on conversion and any edge you see is just random. That is the hurdle you have to overcome to gain any useful direction from A/B testing.


Fair enough, my phrasing before was a little casual, but the underlying point is sound. A hypothesis test might tell you that there is no significant impact on conversion at your chosen level. However, you still have to make a choice between option A and option B. If you have no a priori reason to favour one as the default and no additional data to consider -- which, again, is a crucial detail in the situation closed was talking about -- you should logically still choose whichever option was most successful during your experiment. This is simply because if your conclusion was correct and there is no impact on conversion, then which you pick doesn't matter; but if your result was a false negative, then it is more likely that the more successful option during the experiment is the better choice. Given that you're going to pick that one anyway, your hypothesis test hasn't actually provided any useful information to help inform your decision in this scenario.

In any case, we seem to be talking at cross-purposes here, so perhaps we'll have to agree to disagree on this one.

If one variant beats another, even with very few observations, the data DOES support that one is better. It's just that you might not be very confident that one is better.

The key to understanding this situation statistically is by reframing the way you think about tests away from an all-or-nothing NHST, and toward either confidence intervals, or bayesian estimation.

That is, some kind of measure of (loosely) uncertainty around a parameter (or entire model) of interest.

The question then is:

Is the available data more useful than a coin flip, which would be the alternative method of making a decision?
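One concrete way to express that uncertainty is a confidence interval on each variant's conversion rate; the Wilson score interval behaves well at small counts. A sketch (standard library only, function name is mine):

```python
from math import sqrt
from statistics import NormalDist

def wilson_interval(conversions, visitors, confidence=0.95):
    """Wilson score confidence interval for a binomial conversion rate."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    p = conversions / visitors
    denom = 1 + z * z / visitors
    center = (p + z * z / (2 * visitors)) / denom
    margin = (z / denom) * sqrt(p * (1 - p) / visitors + z * z / (4 * visitors ** 2))
    return center - margin, center + margin

# One conversion in 500 visitors: the plausible rate spans more than an
# order of magnitude, so the data constrain very little yet.
print(wilson_interval(1, 500))  # ~(0.0004, 0.011)
```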

On the other hand, a coin-flip is probably the better tool. If you can't generate enough data for a statistical sample, then you're probably wasting your time creating an alternative version and setting up an A/B test.

I think people understand the theory, but do most people have enough traffic on a side project to have viable A/B tests?

Well, you need a landing page so users can actually buy your product. Maybe not A/B test but you should use something like hotjar or userTrack, at least in the beginning to see if your users are having difficulties actually converting.

I dislike this list. It's too much, and it doesn't distinguish tasks by effectiveness. It reads as if they are all equally weighted tasks, which is absolutely not the case.

Half of this is common sense, 'make an about page, make a contact page', that is basic...

Then there's a ton of stuff here that is hypothetically cool to do but practically speaking will not be productive.

As a marketer I can tell you that there is some Pareto optimization that can be done here. It's likely, I think, that if you did this whole checklist you'd find 20% of your efforts ended up bringing in 80% of your conversions. The trick is finding what that chunk of extremely lucrative marketing activities is for your product/industry.

It is intentionally too much. My goal with the list was to make something exhaustive. The hard part - the reason companies have whole marketing departments - is to prioritize and execute on it.

There's definitely some common sense stuff here, but for devs who are marketing their first side project, it might be helpful to have more rather than less.

Finally, I welcome PRs on the project! It's open source and I'm looking for collaborators to help improve it. I'm a dev, not a marketer, so I'd love for an industry pro to weigh in.

It's a great list. And I love the little inspirational quotes above each section. I think the "common sense" items should stay, since what is common sense to one is not always common sense to another.

But akin to what others have said, it's not a checklist! I think I'd need 2 additional full-time staff on my side project working on nothing but this to implement it.

To make it more specific to side projects I think it could use a preliminary strategizing paragraph. E.g., if you only have 2 hours / week to devote to marketing, here's how to use this list...

I really liked the tools listed for each task. Usually I encounter a nice logo, newsletter, landing page, etc. and wonder how such things can be made without being distracted from the main product.

I like it. I'm in the process of putting together a marketing strategy for my SaaS product (https://usebx.com) right now, and this gives me some really good ideas to pick and choose from. Thank you :)

This is basically the advice of Traction[1], which is the best book I've come across on this topic. They recommend experimenting with acquisition channels until you find one or two that produce great results, milking them for all they're worth, then continuing to experiment in order to find the new channel(s) that will get you to the next stage.

That said, there is still value in seeing many potential channels laid out in one place, so that you can consider all of them, weigh them against each other, and potentially try out a bunch before zeroing in on the best ones.

1: https://www.amazon.com/Traction-Startup-Achieve-Explosive-Cu...

I don't view it as a 'do everything on the list' checklist. Instead, I see it as a list of options: you check those that apply to your project, and from that you generate your own tailored checklist.

If nobody knows which 20% of tactics gets you 80% of the results for any given product, then it's good that the list covers all the possible things. By definition of that rule, you can't generally weight any of the items above the others. If I'm misreading, and you are saying you know the best 20% of tactics in general, or the quickest meta-strategy for finding the 20% for a given product, please share. That would be the hardest part, especially for the target audience of this: non-marketers/new marketers.

AARRR[1] is a good framework for your side project too; it makes you start thinking about the basic questions you should know how to answer.

The tools and implementation choices are less important IMO; for some things you could even start with a Google Drive doc if that makes you move forward faster.

[edit] Here's a playlist from Google on how to implement AARRR in Firebase[2]

[1] https://www.slideshare.net/dmc500hats/startup-metrics-for-pi...

[2] https://www.youtube.com/playlist?list=PLl-K7zZEsYLnslvfInomP...

Missed a big step at the end of customer research:

* Use the results of your research to make sure you are building something that customers will actually want/need enough to pay for.

"6 days ago"... I'm not sure about HN rules, but isn't this submission considered a repost?

Reposts aren't automatically considered bad; sometimes a story didn't get the traction it deserved because of the wrong time of day, or competing stories crowding it out.

okay, but in this case the original got ~600 points, which surely counts as "the traction it deserves"

It starts off nice, but past the blog part, it seems to just be mixing some low quality things with the high quality stuff.

Something like "Attend meetups or conferences for your target market" can be extremely dangerous because this is a great way to spend a lot of time accomplishing nothing just to tick something off the list. Whereas something like "cold calling 20 customers" is so important it should be in bold.

I would recommend this to be a list of marketing ideas depending on your phase, rather than an actual checklist of things to do.

It depends on the market. Cold calling can be effective in the US, but it should be approached with a lot of care in Germany, for example, or in Europe in general. Here we are all much more skeptical of companies or individuals who call to offer something. Most of the time, even if the call is perfectly crafted and polite, you will be perceived as an obnoxious spammer. I don't mean to diminish your comment, because I agree completely, but I think it's important to point that out for readers who might not operate in the States.

Yeah, you will get shot down anywhere but sometimes you just have to do the least comfortable thing.

I mean, for example, if your project is some plugin that makes pretty graphs, what you can do is email strangers screenshots comparing your plugin's graphs with the graphs they're using now. No pressure to buy, just a cold email suggesting it could help them.

That is just going to be way more effective than putting up some ads on Instagram.

I guess what I was trying to say is that if faced with something in checklist format, you tend to do the easiest things first, instead of the scary ones with the most impact.

I don't think it's really a US thing. I think it really depends on who your target customer is and how they're likely to feel about a cold phone call.

Also cold outreach doesn't have to be over the phone. Email, Twitter, FB Messenger, LinkedIn mail, etc. could work better.

I don't think "customer" in the list for cold calling was consumers. It was probably businesses which are a lot more receptive to cold calls/emails.

Agreed, it's not common in the UK/Ireland, so if someone does it, I reckon 90% of people just politely try to hang up quickly.

And the remaining 10% will give a tirade of verbal abuse before hanging up - cold calling in the UK is not welcome.

cold calling is not welcome anywhere.

Hello, this list is too comprehensive and exhausting. Could somebody filter or sort it by the importance of each task? I.e., you absolutely must do these tasks, but if you skip these other ones, no big deal.

The problem is that filtering the list down to only what is most important or necessary is impossible: none of the items are actually required.

I have launched hundreds of products for myself and clients, and followed a formula for each one, and I believe it had about as good a chance of a successful launch as having a monkey throw darts at a wall. You always end up analyzing what you did when your launch succeeded, and your resulting formula is just the Texas Sharpshooter Fallacy with checkboxes.

The only real thing that truly helps a launch is having an audience already. It might not make the product succeed, but it will at least aid in getting the initial signups and feedback.

I have also launched "free" products with no audience of my own, and instead found one in forums. After posting about the product, people typically signed up in droves. And once I had a critical mass of signups, I had an audience to market to. That would then lead me to selling a bigger version of what we were giving away for free [1]. I call this piggybacking.

There are a lot of "hacks" you can do to help your launch along, but I don't believe any of them are better than having a baked-in audience already [2].

1: http://jeremyaboyd.com/my-first-product-launches/

2: http://jeremyaboyd.com/tricks-to-monetize-your-side-projects...

I suspect you are right; there is a large component of luck, timing, and built-in audience.

A lot of these types of tips are almost cargo cult type checklists.

OP is thorough though and fairly comprehensive - it is interesting to read your thoughts on this too as someone who has used these techniques.

I agree, it is very thorough.

But as I have found in consulting for many "side-project" developers, they want a condensed list of at most 5 things they can do to guarantee success.

Since, on top of building the product themselves, they are also the ones having to build the email list, build the marketing campaign, etc. Like the grandparent, they are looking for a faster list to check off.

So my fast list is:

[ ] Piggyback on an existing audience.

My goal with the list was to make something exhaustive. Your job as the project owner is to cull it down to the most important items based on your target market and goals.

Making the list was easy. Prioritizing it and executing are what makes or breaks your project.

> [ ] Record/post video of you reading the post on YouTube.

Lol, really? I don't see the benefit of doing that. It's way faster to read text than to watch a video.

You're assuming that speed of consuming information is the goal. Plenty of people prefer videos over text because it communicates emotions and feelings better. Others remember more from people talking to them versus reading text on a page.

Sometimes reading is inconvenient; for example, I listen to YouTube videos during my commute while driving.

It does help with SEO, if you give a link back to your site :)

And it wastes your time, since you could be doing something more productive instead of rehashing the same thing.

Wouldn't this list be completely different for b2b vs consumer sites?

Marketing is a must for an idea to become a successful business. However most side projects are much more about learning and exploration.

It's easy to get excited about our side project and want to make it real. But that's a long way off, and pursuing it as a side project is impractical. Not only does it require more time than we have, but we also end up spending more time on things like marketing than on the fun stuff that led us to start the project in the first place.

I'd go so far as to make it an explicit non-goal to make a profit when starting a side project. You will have more fun, and there will be a wider range of ideas you can explore.

I would like to see a side project finance checklist.

Perhaps replace both subscription forms with a single link in the header? They take up too much space and are distracting.

I'd also reduce font sizes. This is not a presentation page, but an information dump. The more there fits on a single page, the better.

Love the list, quite comprehensive. To any beginner or even pro: my advice would be to cut 80% of the items from the checklist and nail the remaining 20%. Your job is to figure out which 20% will give you 80% of the bang for your buck.

Things like this may scare you out of starting anything at all.

One can do perfectly well without 90% of this checklist.

The marketing part of this "side" project alone feels like more than a full project to me.


As a "online marketing veteran", you don't have anything critical of this list to add, or anything constructive? Did you even read the list, or are you just trying to drum up customers to your services and get some SEO action?

At the very least say something useful if you're going to do that. This is worse than those bots that spam comments on blog posts with "great content" and a link back to their spam site.

edit: I see you copied and pasted your post from the same article elsewhere. You're basically a spammer.


Monetizing, check, your, check, passion, check, kills it, period.

Agree and disagree. Monetization is on one axis, passion is on another.

I think if you start off just making something for yourself, it may be hard to monetize, because it's only optimized for how you do things. On the other hand, if you do something only for monetization, it's super easy to get demoralized and just stop. Furthermore, monetization isn't an easy thing to optimize for: it's very hard to see where the sweet spot is.

But if you can strike the right balance between creating something you find a lot of interest in and something a lot of other people would value, you can get both personal satisfaction and something sustainable.

I just find that whenever I optimize for money, I start making weird decisions: decisions meant to please others who may not share the passion. I'm all for people paying me to create and share the stuff I already make, but then it needs to come without conditions; otherwise we're back to square one.

