It’s worth spending weeks on research before wasting years on a hopeless project (ottofeller.com)
248 points by gvidon 73 days ago | 87 comments

I barely get a day or two to do research on a new piece of work - e.g. scope the tooling, build a mental model, check how well the third-party components integrate, etc. - before I need to provide an "update" at the status-check meeting on "where we are". If you say that you are researching more than once, you are told "if you are facing difficulties, maybe xyz can have a look at it". It makes you feel that you are being labelled as slow/not quick enough, and it comes across almost like a threat that your work will be assigned to somebody else who doesn't need to research as much.

A lot of this comes from hard lessons learned at the business level. If you're ultimately responsible for the success of a project, you very quickly learn that any significant level of uncertainty means things are probably not going to work out well. This becomes a knee-jerk fear response whenever a report says that things are not going according to plan, and the immediate impulse is to give that work item to someone who promises to make it go according to plan.

This very quickly leads to a race to the bottom where executives settle on tried-and-true strategies for driving revenue: give Google & Facebook lots of money for paid leads, spend a lot on salespeople to convert potential customers, nickel & dime those customers for everything, work your engineers to the bone to deliver the features that salespeople promise, and flip the company before the accumulated technical, management, and oftentimes financial debt kills it. Ultimately, if you want durable success, you need to take some strategic risks and invest in projects where you don't know the outcome before starting. But that goes against nearly every emotional circuit in how we're wired, so few executives can do it.

I agree financial debt can kill a company. I'm not familiar with what management debt is.

But I don't know of any companies that were killed by technical debt.

Management debt is when you create social or organizational policies that solve the immediate problem but will create morale, personnel, or legal issues later. The concept comes from an a16z essay:


Other examples might include tolerating an aggressive status-based "brogrammer" culture, institutionalizing long working hours, or instituting gatekeepers that can block launches to avoid specific issues that have bitten you in the past.

I've worked at two startups that were killed by technical debt, and founded one. The way it usually manifests is that the startup fails to get to product/market fit before the complexity of its codebase prevents any further progress. These failures usually happen early in the startup's lifecycle (before many people have heard about them) and are often chalked up to failure to find a market, but the reality is often that if they could've iterated in days instead of months they would've found a winning product. Frequently a competitor started a few years later comes in with a simpler approach, leveraging building blocks that have already gotten mainstream vetting, and takes the market. (For more prominent examples, there are projects like General Magic, Apple Copland, Windows Longhorn, Chandler, and Friendster.)

That’s because when technical debt kills a company, they attribute it to something irrelevant. Imagine Wikipedia, for example, and the great “Wikipedia has Cancer” post[1]. You might conclude, if Wikipedia fails, that it couldn’t secure enough funding, even if a significant amount of time and money was spent on dealing with operational issues that shouldn’t exist. In my experience with Mediawiki and scaling up a wiki, I feel that it suffers intolerably from technical debt, even being state of the art as far as wikis go.

Here’s a good example. Many modern websites use task queues to perform background work, and so does Mediawiki. You might assume it uses RabbitMQ or another queueing system, but nope: it stores the queue in MySQL (acceptable) and, by default, it runs queued jobs at the end of web requests (less acceptable). As your wiki grows, these jobs take longer and longer and fire more and more often. Suddenly you are wondering why pages hang and why you always run out of PHP-FPM workers.

Then you look at the stack traces and it makes more sense... so you go to configure the job runner. Except there’s not really a great fix. You can tune the parameters so that the queue effectively never runs during requests, but then you have to run the jobs manually - which is fine, except there’s no task-running daemon or anything. Actual cronjobs work but are catastrophic for responsiveness. So... one of the recommended solutions is a shell script that loops and runs the runJobs.php file repeatedly. This works, but from an operations standpoint it’s not a wonderful solution. You probably want at least some visibility into what’s going on, and bash scripting is hardly the best platform for that kind of thing.
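For concreteness, the usual workaround looks something like this - a sketch only; $wgJobRunRate and maintenance/runJobs.php are real MediaWiki pieces, but the install path, flags, and batch sizes here are illustrative and vary by version:

```shell
# In LocalSettings.php, stop running queued jobs at the end of web requests:
#   $wgJobRunRate = 0;

# Then drain the queue out-of-band with a looping shell script
# (the "recommended solution" described above):
while true; do
    # runJobs.php processes queued jobs; --maxjobs bounds each batch
    php /var/www/mediawiki/maintenance/runJobs.php --maxjobs=100
    sleep 5
done
```

It works, but exactly as noted: no metrics, no backoff, no supervision beyond whatever you bolt on yourself.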

Of course, this is just one microcosm of Mediawiki; you could run into one technical-debt-related issue per day and not run out for years.

[1]: https://en.m.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost...

Companies with high technical debt don't die suddenly; they are prevented from adding new features. What kills the company is a change in the world that the system cannot adapt to because of the debt.

Well if the debt prevents you from acquiring new customers since you can't add more features then that would certainly cause most businesses to die.

> I agree financial debt can kill a company. I'm not familiar with what management debt is.

> But I don't know of any companies that were killed by technical debt.

I would imagine the failures of those companies are attributed to other causes. Is it hard to imagine a company with a low bus factor being in an unrecoverable state if a key person leaves? The time it takes to get a new person up to speed could cause the business to cede ground to a competitor, just as a for instance.

Technical debt doesn't (usually) cause "quick death" to companies.

I think a good analogy is that technical debt is like having a poor diet - if you don't fix it, you end up with diabetes and other issues. Sure, they're treatable, but it becomes harder and harder to continue innovating.

When all is said and done, you don't know for sure whether that poor diet was what led to a company's demise - but there's no way it helped.

It's not the prima facie cause, but a lot of issues can be traced back to technical debt. Being outmaneuvered by a competitor and losing customers can happen because it was too difficult to produce new features due to technical debt.

Take a look at Knight Capital for a case study -- tl;dr, they deployed an update that accidentally triggered a blind spot in their trading system which had been inactive for almost a decade. By the time their engineers were able to stop it, Knight was on the hook for several billion dollars' worth of trades and had to close their positions for a loss of almost half a billion dollars. From my understanding they ended up merging with another firm to cover their losses so while they weren't entirely "killed", they were still severely kneecapped by tech debt.

A lot of this is work culture. You can push back by trying to formalize the research phase. Saying "I'm working on the architecture/design document, we have some known unknowns regarding [foo] I'm hammering out" shows other people in the meeting that you are working towards their problem in a meaningful and structured way.

Ultimately, if the people in the org want to push the work to xyz instead, I would let them. Maybe xyz is as good as they think, but more likely everyone in that room is about to enter a slog of a project/feature.

When someone makes a threat your default behavior shouldn't be to back down.

Architecture decision records (ADR) are an excellent way to show managers that work is proceeding, and that research is necessary, structured, timeboxed, and transparent.
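For reference, a minimal ADR (in the widely used Nygard format: title, status, context, decision, consequences) is just a short text file per significant decision, checked into the repo; the concrete decision below is purely illustrative:

    # 7. Use the existing database as the job queue
    Status: Accepted
    Context: We need background processing, but ops has no
    experience running a dedicated message broker.
    Decision: Store the queue in the existing database and
    run a separate worker process.
    Consequences: Simpler operations; throughput is bounded
    by the database. Revisit if queue depth grows.

This gives managers a visible artifact of the research, and gives future engineers the "why" behind each choice.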


This is a good answer.

What managers dislike is a lack of transparency. "Research" is very vague. How can anyone tell what you were researching, or what the result of it was?

A very clear spec doc is a useful tool - almost like writing an essay - to demonstrate what your research uncovered.

Agreed on the structure part: structuring research is incredibly helpful for guiding the research itself, knowing what’s left to do, and the nice bonus of also making it easier to share the results, both incremental and final.

Research is in many ways naturally more unstructured than other dev tasks and people don’t get too much practice at doing or delegating it, so it can be fertile ground for organizational misunderstandings. Structure agreed upon up front can help all that.

I think the key word in my post wasn't "structured" but "shows". Even if the occasion calls for 100% unstructured research (let's spend some time in the sandbox to see what we can do with some new software) it's a critical skill to be able to communicate what you learned during that phase with other people, even people with different job titles and backgrounds.

I suggested formalizing and structure as words because new features/products/projects are a common enough occurrence that we have standard procedures for it.

definitely needed this, thanks!

Speaking from the other side, my advice is to over-communicate, especially if you haven't built up a deep trust and reputation with your team and your manager.

Here's what your manager is worried about:

  1. Will the job be done well?
  2. How long will this take?
  3. Are you on-task, or are you stuck, overwhelmed, going rogue?
You can help your manager answer these questions by communicating these things:

  1. I have a plan, and here is the plan.
  2. Here is my progress in executing this plan.
  3. I have made these findings so far. I anticipate 
  these difficulties and challenges.
How this is communicated to the team, and your manager, depends on your team culture and workflow. But something typical might be:

  1. Two sentences in standup. Think through what you are 
  going to say, read a prepared statement from a sticky 
  note if necessary.
  2. A short email (<100 words) answering any questions 
  people have regarding your research. Use your own 
  judgment as to audience size and email frequency.
  3. A document or wiki page that is the final work 
  product of your research.

This is a wonderfully concise summary of the situation with concrete and helpful advice! This advice applies not only to research but equally to any longer-running project or task status update as well.

As someone who has felt frustrated in the past in both roles at play here, I expect I will find myself sharing this comment with others soon and often. Thank you!

I find it's best to think of, and also communicate, the research phase as a stage of the project itself, with a report at the end as a deliverable.

At the status meeting, talk about the stages you've accomplished so far ("I've completed scoping, looked at these three third-party components, etc").

When the research stage is done, write up a research report with what you did and the findings and present it to the team. This gives the rest of the team visibility into what you're doing. It's also very helpful months later when you question "why did we pick Foo?" because you can refer back and know exactly why you did.

I can't tell you how many times I've been on a team that decided on X, and years later someone asks "why not Y?". I can remember Y being part of many discussions, and there were often good reasons not to choose Y - but I no longer remember them.

> think of and also communicate the research phase as a stage of the project itself with a report at the end as a deliverable

Yes. This is the classic funded feasibility study / scoping study phase. The report deliverable can include a price estimate and also a risk estimate, e.g. expressed as the cost of additional rework, especially where there is a residual uncertainty (e.g. the use of a new component / framework, etc) at the end of the initial study. How companies manage their risk 'pots' is extremely telling.

In some situations there can be additional breakpoints, sometimes contractually, if third parties / subcontractors are involved.

‘Architecture Decision Records’ are one pattern of attempting to document these discussions in a lightweight manner.


This is helpful. Thanks for this.

I think a lot of famous programmers did good work because they had the freedom to explore things and iterate until they had something that was good. In my company you often have to explain yourself after a day or two so you feel constantly guilty because you didn’t produce anything.

But my best work was often done when me and others had weeks or months to play with things without interruption until we found an approach that really worked.

> If you say that you are researching more than once, you are told "if you are facing difficulties, maybe xyz can have a look at it".


A thousand times, this.

I think a lot of the time that's just a failure to communicate progress on research. Like others have said, you can break 'research' down into sub-tasks and even present a document or report at the end.

> present a document or report at the end.

Yep. Our research tasks either produce a document for the team to read and then discuss and/or user stories for the team to discuss, point and work on. Which happens depends on what the research is about.

Also, if someone is a manager/lead, make it a point to read, understand and ask questions about the produced document. Nothing demotivates more than producing something that isn't used.

When someone questions my work, effort or results from what I feel is an illegitimate place (ex: they're just trying to score easy political points) I find the easiest way to resolve the situation is to ask them to take on some non-trivial component of the work and share it with the audience at a future date. Nothing shuts down jerk disrupters like the potential of performing hard work.

I strongly agree with this, and want to add one wrinkle.

Depending on how you count, I have somewhere around 30Y of paid coding experience. It took me until maybe 5 years ago to have discovered this for myself.

One thing I've noticed, though, is that if you create a culture around this, you must have guide rails in place. Otherwise, in the wrong company, people will use the culture to present shoddy work with the expectation that other (senior or not) developers will challenge it. These people exploit the culture by using shoddy implementations and challenges to offload the hard parts of their projects. This is the mirror image of the illegitimate question: the illegitimate claim.

As a senior engineer, you should ALWAYS be willing to lead from the front with code deliverables, but your review-to-code ratio is so skewed that the number of times you can satisfy that kind of challenge is low compared to the number of times when you might offer (sometimes negative) input.

What if they offer to rewrite your components as well, to ensure quality across the board? You could lose a project that way.

can you give an example of how they are questioning you?

You need to get better at presenting your research. If you've done two days of research, presumably you have 10+ hours of work to summarize.

"I talked to person A, who said we need featured X,Y, and Z. Then I talked to person B, who said they need A, B, and C, but feature C and Y aren't compatible with each other, so I'll need to reconcile that".


"We'll need to use third-party products A, B, and C. B has the API we need, but I'm still waiting to hear back from the C's vendor to see if they support API call foo."

>> I talked to person A, who said... Then I talked to person B, who said...

this has the added benefit of discouraging most developers from being willing to take over the task

> e.g. scope the tooling, mental model, checking how well the third-party components integrate etc

This is very insightful - what if you first created a formal plan with a bunch of checkboxes, so that your status audience can see a steadily moving progress bar? I've been there, and agree that if you keep saying the same thing, people start to get nervous.

> If you say that you are researching more than once

Then maybe it's just that you aren't specific enough?

You aren't just researching, so tell them what you did, what you are doing and what you plan to do instead.

I once had to implement SAML in our software quickly. It's a subject that needs quite a bit of research and, ideally, not much implementation. So here's a timeline of what I would have said if you had asked me for regular updates (which is probably quite close to what happened, because we needed it for a demo pretty quickly and they asked for updates quite a bit):

    T + 5m: I'm still trying to figure out what SAML is. 
    T + 1h: I found out what SAML is, but I'm trying to understand how it works technically. 
    T + 2h: I'm looking for libraries to help me implement it, because it's too complex to implement in-house and not worth it. 
    T + 4h: I found libraries, but I lost a few hours because it needs to be implemented as a login module, not a library.
    T + 5h: I've found one that has potential, but our version of J2EE is too outdated and it may not work. I'll need to experiment with it. 
At that point I had given 5 updates and done nothing but research, yet I never once said the word "research". Each update was justified and didn't sound unreasonable.

At my work we use what are called spikes to indicate that research is required; at the end of the spike you must have some documentation or a proof of concept so other people don't have to do the same research again. Unfortunately, it sounds like the problem at your job is more that the business relies on what they deem 10x programmers instead of creating a collaborative environment. They should be saying, "Hey, if you're having difficulties, maybe you and xyz can look at it together." If developer xyz comes to the same conclusion as you, then the business should can the idea at that point. If they don't, it could be that you were wrong and having another set of eyes on it helped out a lot, or it could be a weird competitive culture where people want to take credit for things. I try to avoid this as much as possible by discouraging use of the word "I" during any team discussions. It should almost always be "we". If one person fails, everyone fails. If one person does good, everyone does good.

As a PM, for any big project I always try to get a good developer a full two-week sprint to do nothing but whatever research he or she feels is appropriate, after I have a basic framework of the functionality built out.

I'm convinced that 100% of the time, this saves time. The problem is it's tough to quantify the time saved from mistakes or mid-course changes that you would have made without having this time, and it's not easy to get execs to give up two weeks of time from a senior engineer who will really do a good job at this.

Obviously anecdotes are everywhere, but at my work I never had any issue saying "we will need 2 weeks for research on how to use this technology", if you say you need 2 weeks to assess something then you need 2 weeks. What, my producer is going to tell me that I need less time? Based on what?

I think your issue is that you're doing open ended research - if you don't specify upfront how much time you need to investigate something then no wonder you're being asked for updates after 2 days.

I have met many doctors who say they did essentially no research whatsoever on what being a doctor really entails and what it's like. They spend four years in school racking up six figures in debt, then three years in residency, based on nebulous ideas and media commentary. That is part of the driving reason why I wrote this: https://jakeseliger.com/2012/10/20/why-you-should-become-a-n...

See also:

The Law School Scam (The Atlantic, 2014)


US grad school is the same scam. There’s an argument that only foreigners should compete in it for the green card.

This is one of those pieces of startup advice that once I learned about it, I started seeing lots of connections to my own life. I think one of the more dangerous things about doing something on your own is the risk that you'll go down the rat hole on something unprofitable or unproductive.

Consider that you spend a huge portion of your life on very well-worn paths that were cut by others. From the subjects you study in school to the business models you drive forward at work, everything has been done before by someone else and you are only doing it now because of a sort of survivor bias. The bad ideas and bad approaches have failed or been discarded and the better ideas have turned into Fortune 500 companies and academic departments. Walking those proven paths, you just have to do a reasonable job, follow the formula, and you're not likely to fail.

Now let's say you've struck out on your own and you want to do something new that no one's ever been successful at before. Now you're essentially guaranteed failure. There's no more formula to follow. There's no more boss or teacher to tell you you're not on the right track anymore. You can no longer just plug into an existing money-making machine and start spinning; you have to build one from scratch. The best strategy is to get out of failing situations fast, before they swallow you up.

An analogy from skiing: When you're on the marked trails, everything has a name and difficulty rating so you know what to expect in advance. Hazards have been roped off and the trail is guaranteed to lead to a lodge or a lift. This is life within school and companies. Then you go off-piste and out of bounds. There's no longer any ropes or trail names or safety patrol, just a wild mountain. Some people have been through here before as you can see from the ski tracks in the snow. If you follow those tracks you might be ok. They will probably lead somewhere safe like a road where you can hitch-hike or a valley you can traverse back to the resort. But if you turn instead into the fresh snow where no-one has tracked before, you're in very dangerous territory. Who knows where this leads, hypothermia in a wilderness area, under an avalanche or very likely off a cliff. But it's a whole mountain of fresh snow just for you. This is entrepreneurship.

I wish more projects would start with the old grad school approach of doin’ the literature review. Swashbuckling your way into a project usually means falling short of prior art. At a minimum it’s good to know what pitfalls are waiting for you before embarking.

A quote that stuck with me:

"A couple of months in the laboratory can frequently save a couple of hours in the library."[0]

[0]: https://en.m.wikiquote.org/wiki/Frank_Westheimer

The programmer's creed variant:

"Weeks of coding saves hours of planning."

Hours of debugging can save minutes of reading the documentation

Yes, but sometimes research is just safe procrastination and more can be learned, more quickly, by jumping into the unknown and trying to figure it out.

How does one differentiate between the two? The mind is very sneaky is it not?

Indeed. I theorize that the best approach is a hybrid: jump in, figure out, write something, but step back periodically and read what is out there. True understanding comes from the interplay of these two activities.

That's the question all those people should be focusing on. Because it's obvious that both stances are correct, yet here we are, discussing an article that pushes one extreme as if it were a natural truth.

First figure out what you need to know, and stop planning /researching when you know it.

If the fact that something could be done badly is a reason for not doing it, nothing would be done - that's the ultimate procrastination.

As someone who never went to grad school: how would a software consultant or freelancer approach this?

Most times I wouldn’t say it’s important that you go through exactly the same formal process that you’d do if you were planning to publish your own peer-reviewed article.

However, in a software engineering context, let’s say that you’re implementing a new library to do something in a new programming language. If other, more established languages had popular libraries that did something similar, I’d systematically catalog what those libraries are, read their user docs, maybe read their source a bit, and certainly try to develop an understanding of what I thought was the right approach.

Including this information up front in your own readme, along with why you chose to implement things the way you did, almost certainly advances the state of the practice.

At a minimum you’ll have a record of what influenced your thinking so that, if you come back in the future, you’ll know what other threads may have advanced in the meantime.

I’ve thought of this as a way to contribute to open source that isn’t writing code or docs after the fact. Like, imagine you’re a Racket user and you wish there were a library to do X: maybe it’s a worthy contribution just to start a github repository and document the libraries that do something similar, and what you see as good and bad about those approaches. Maybe someone with the bandwidth to write the software will see your documentation and find that it gives them a head start.

You should at least search Google, GitHub, Maven, NPM, etc. for libraries, SaaS vendors, or published algorithms that do what you're attempting to. "The best line of code is the one you didn't have to write."

Though there's a fair bit of subtlety in using them. There's nothing quite as bad as finding a framework that does 95% of what you want it to and will save you years of development effort, building your architecture around it, then getting 80% into development and finding out that there's no way to make it do the remaining 5% reliably and its developers have abandoned it precisely because of that shortcoming.

I dunno.. I would ask your advisor for the summary short cut. Literature surveys are kind of a waste of time. You really want to work in a space that has no literature yet! The problem with reading papers is you end up wasting time reading papers!

Kaibeezy’s Unique Business Model Axiom: If you have an idea, somebody else already had it.

Kaibeezy’s Unique Business Model Corollary: Don’t let that stop you.

So many people think they have to have the “killer” version in a category, or they collapse with disappointment when they find out someone else already had their amazing idea. Amateurs.

Existing solutions prove a market exists. Take the opportunity to see if the market is interested in your particular formulation. If you can carve out a niche and keep your overhead low, it could become viable or even quietly successful.

This axiom sounds contradictory. Maybe it should say somebody else "probably" already had it, because the fact is that someone HAD to have the idea first, so the axiom can't always apply. But I get the idea.

As a modestly successful inventor, I get buttonholed all the time by people wanting advice about their ideas. Some may even be unique, but it doesn’t matter. Uniqueness is far from the major factor in doing something successful with an idea. Yes, the axiom is an intentional exaggeration—cold water in the face—to get their attention and refocus it.

Uniqueness is, if anything, an anti-pattern for success. If your idea is really unique, that probably means there is no market - either that, or there would be a market but the problem is unsolved and you are unlikely to make progress.

Even if there isn’t something like the idea, if the problem exists, people are doing workarounds to solve their problem. This is what you need to be looking at.

"very likely" would work better

I worked on a project where we did the research and it came back saying "meh", or "no", or "only if it had this impossible feature".

Our management just discounted the research ("we asked the wrong population", etc.) and drove forward with their vision.

Approx $100m later the project was quietly folded; the senior managers who had been promoted during the process moved on to pastures new, and everyone else was given 6 weeks to find something else.

I saw the warning signs and left before the crunch, but in hindsight I had concerns from day 1 that weren't answered; I let my enthusiasm allow me to ignore them.

A learning experience

Tried to view this in Firefox...

Warning: Potential Security Risk Ahead

Firefox detected a potential security threat and did not continue to ottofeller.com. If you visit this site, attackers could try to steal information like your passwords, emails, or credit card details.

Ff mobile says:

ottofeller.com uses an invalid security certificate. The certificate is not trusted because the issuer certificate is unknown. The server might not be sending the appropriate intermediate certificates. An additional root certificate may need to be imported. Error code: SEC_ERROR_UNKNOWN_ISSUER

But didn't name the issuer. Why?! Likely either a fringe issuer or self-signed cert

It claims to be Sectigo. Strange. I've switched my team's projects to Let's Encrypt to avoid dealing with Comodo (now Sectigo?).

Oh, crap. I can see the issue in FF68. Sorry about that!

Which version of FF do you use?

Looks like you got this worked out in a sister thread, but yes 68.0.1.

This actually doesn’t work out so nicely in real life, and I can tell from experience. It’s one of those things that sounds obvious, but neglects context.

The most valuable projects are always going to be a race. If all your competitors are spending time carefully researching the path they will take, there is an advantage to be gained by skipping research and jumping straight in. If the idea of your project has merit, then competitors will be fumbling to catch up to you, and if the idea was a dud, then you wasted time and money while your competitors sigh in relief and talk sideways about what a fool you were.

In these scenarios, the only glory to be gained is by the ones who skip the research and move forward boldly. Moving boldly for some is just dumb luck, and for others it’s intuition.

Businesses and startups look for people with intuition, capable of finding repeatable success through unknown heuristics they have assembled over their life experience; essentially this is research too, just not deliberate research - subconscious research.

These intuitive people do not need to spend weeks going out into the world and talking to people about a project or reading statistics and case studies. They could simply roll their eyes into the back of their head and draw from a deep well of wisdom to determine if a project is worth pursuing. They give you their answer and that’s it; it’s the final word. A business with one of these people in its ranks has a tremendous advantage in the speed at which it moves, unburdened by the need to do research or question the premise of its ideas.

Many people in the valley like to pitch themselves as being one of these people, to attract a following of investors or employees, but almost all are frauds. You need a mind that has depth and breadth. A person who's lived all their life in the valley, or has simply hopped from one tech job to another, is unlikely to ever be one of these people. It takes a very wide range of expensive experiences to become one.

Yes - the more time invested in research, the less time planning takes and the more accurate it becomes. But like all multivariable situations, a balance is needed. Planning also feeds back into research: you may hit a cut-off point, which doesn't mean research stops - it just changes focus, and future research goes into version 2. As with software, you hit a time when you would love to add this and tweak that, but those things end up being patched, or deferred to version 2, later on. Like everything, it is a fine balance, with no cookie-cutter template or formula, as every situation is different. But situations have comparable factors that you only learn over time, and then in niche areas. You also learn to strike those balances better and, with that, to plan further ahead.

But do put a price on your passion, especially if you plan to combine it with work. The latter can kill the former just as easily as the other way around, so get that balance right in a way that works for you.

Say you are looking to start a SaaS platform.

What research can you do other than look at web traffic estimates of competitors? It will be obvious who is big and who is small, but that doesn't help you understand: are these companies actually profitable? What is retention like?

There isn't a good way to know for sure that something is going to work, but there is a lot of research you can do that will give you good information.

1. Cold email or LinkedIn message, then speak to 100 potential customers. Attempt presales. You don't actually have to close presales, but the process should give you a good indication of willingness to pay, desired functionality, how people currently work around the problem, and the language they use to describe their pain (quoting that back to customers makes good marketing copy).

2. Build a landing page, run Facebook/Google ads, and find out how much it costs to acquire an email address. Your real CAC will be higher, of course, but this is a good starting point.

I'm a founder of https://www.saturncloud.io/. I started with #2 and found it cost 70 cents in advertising spend to get a click, ~$7 for an email address, and ~$100-$200 to get someone to sign up. When we started, we were attempting a low-touch sales model.

Since then we've shifted focus to enterprise sales, because the willingness to pay is much higher. We did #1 when moving toward enterprise sales, but should have done it at the beginning.
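The funnel figures above imply conversion rates at each step. A minimal sketch of that arithmetic (the per-step costs come from the comment above; treating the signup cost as the midpoint of the stated range is my assumption):

```python
# Back-of-the-envelope funnel math from the ad-spend figures above.
# The $150 signup cost is an assumed midpoint of the $100-$200 range.
cost_per_click = 0.70    # $ per ad click
cost_per_email = 7.00    # $ per email address captured
cost_per_signup = 150.00  # $ per signup (assumed midpoint)

# Implied conversion rates between funnel stages
click_to_email = cost_per_click / cost_per_email
email_to_signup = cost_per_email / cost_per_signup

print(f"click -> email:  {click_to_email:.1%}")   # 10.0%
print(f"email -> signup: {email_to_signup:.1%}")  # 4.7%
```

Even rough numbers like these tell you quickly whether a low-touch model can work: at ~$150 per signup, the lifetime value of a self-serve customer has to clear that bar with room to spare.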

I don't think someone will be totally honest with you about whether they are profitable. Before launching a SaaS, the first thing I would do is sell it to someone even before the product is ready :)

I like the approach Seth Godin describes in his book This Is Marketing. He uses a four-quadrant chart to plot two qualities of a product, then plots where competitors fall on those two qualities, for instance cost and durability. Some competitors might be positioned as expensive and durable, and perhaps no competitor fills the cheap part of the graph. It's a nice way to visualize potential opportunities.
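The quadrant idea can be sketched in a few lines of code: place each competitor in one of the four quadrants, then see which quadrants are unoccupied. The competitor names and scores below are purely hypothetical:

```python
# Godin-style 2x2 positioning sketch. All competitor data is hypothetical;
# scores are normalized to 0..1 on each axis (price, durability).
competitors = {
    "BrandA": (0.9, 0.8),
    "BrandB": (0.8, 0.7),
    "BrandC": (0.7, 0.9),
}

def quadrant(price, durability, split=0.5):
    """Return which of the four quadrants a product falls into."""
    p = "expensive" if price >= split else "cheap"
    d = "durable" if durability >= split else "disposable"
    return f"{p}/{d}"

occupied = {quadrant(*scores) for scores in competitors.values()}
all_quadrants = {"expensive/durable", "expensive/disposable",
                 "cheap/durable", "cheap/disposable"}
print("open positions:", all_quadrants - occupied)
```

With the sample data, every competitor sits in the expensive/durable corner, so the other three quadrants show up as potential openings. Whether an empty quadrant is an opportunity or a graveyard still takes judgment, of course.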

You start with the root of the problem and work from there.

Know that while competitors might have it right, they also might have it wrong.

Talk to people.

It's literally worth spending weeks doing anything to avoid wasting years doing anything.

We were taught this in grad school: often simply doing little toy problems on the back of an envelope can convince you that some approach is unlikely to be fruitful.
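As an example of the kind of toy problem meant here (the scenario and all numbers are hypothetical): a quick feasibility check on whether a team could label a large dataset by hand before committing to a project that depends on it.

```python
# A hypothetical back-of-the-envelope check: could a small team hand-label
# 10 million images? All figures are assumptions for illustration.
images = 10_000_000
seconds_per_image = 5   # optimistic labeling speed
labelers = 4
hours_per_day = 6       # sustained focus, not a full workday

total_hours = images * seconds_per_image / 3600 / labelers
days = total_hours / hours_per_day
print(f"~{days:,.0f} working days")  # well over a year per labeler-team
```

Five minutes of arithmetic like this can kill (or justify) an approach before anyone spends months on it, which is exactly the point of the envelope.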

I thought this was a good idea, although after a while I noticed that a number of good ideas got rejected this way, and a decade or more later somebody else tried the same thing and managed to get a Science or Nature paper out of it.

With this approach you're minimizing false positives (wasting time on useless projects), not false negatives (ignoring useful ideas), so it's not surprising that some ideas you discarded are then successfully explored by others.

There isn't much you can really do when it comes to exploring more useful and potentially successful ideas, because for you (doing the exploring) they will be unknown unknowns, unless you've somehow stumbled upon the domain area before.

If only we had a way to truly estimate the success of a project: many of the back-of-the-envelope calculations were based on poor assumptions (assume a spherical cow, etc.).

That someone got a Science or Nature paper out of it doesn't necessarily make it a good, or novel, idea.

This reminds me of a quote from Mike Williams (one of the Erlang fathers):

> If you don’t make experiments before starting a project, then your whole project will be an experiment.

Is this the corollary of:

Remember, a few hours of trial and error can save you several minutes of looking at the README/instructions/manual.

Alternatively, stop looking for reasons not to start something. Start and pivot as necessary. The barrier is low on a side project and higher with someone else's time and money, obviously.

-webkit-user-select: none is fucking cancer
