From my perspective, speed is about two things: more, smaller iterations, and confirming you're working toward a desired outcome.
Over my career, I've been surprised multiple times when I presented an early draft to a stakeholder and they said, "oh, that's great, I've got what I need now"...and this was maybe 1/3 of the effort I was planning for.
The way I see it, if the problem is important - any early solution should provide some relief. If some initial relief isn't wanted, the problem is probably not important.
Along these lines, in my work with stuck startup founders, I often ask, "if we were going to demo a prototype to a customer (in 2 days | at the end of the week), what would it be?"
I cannot possibly emphasize this more strongly. It's so important to deliver complete and working products quickly and regularly that it's worth the cost of the occasional lost customer who leaves because all of their needs aren't met immediately. A) they're rarer than people seem to think, and B) the people you gain due to this incremental approach more than cover the losses.
That said, some devs interpret "quick" to mean "sloppy", which is not correct. When you can cut releases hourly, the cost of some kinds of bugs does go down, but fewer bugs is better, and as always, some categories of bugs are "never, ever" bugs (e.g. data corruption/leaks, security bugs, outage-inducing bugs).
My highly specialised sarcasm sensors (honed in the UK) suggest that this is not in fact your current deliberate strategy. That said, I have worked for so many groups and companies where that is exactly what happens, even if they think they're planning for something else, and it's obvious to pretty much everyone involved that that is what is happening and will keep happening.
I went straight from hobby programming to freelancing, so I've been "rediscovering" best practices the hard way.
I did actually intend to do things properly, but the client asked "can it be done faster?" and I thought "sure, I'll just ship fast and then rewrite it later"... ha! The project became so unmaintainable that development ground to a halt. (I am doing the rewrite now...)
This is the side of software where soft skills are critical. Navigating when and how to say no can be extremely difficult, but it’s arguably crucial to operating effectively and sustainably.
Early in my career, everything was a yes and it cost me a lot of sanity. It rarely cost me clients because I was so diligent about keeping things on the rails, but that diligence seemed to come at a proportional cost to my well-being.
It took me a long time to realize that saying no is kind of like saying yes; you’re saying yes to getting the project done on a reasonable timeline and with the budget the client has, and most importantly, with your happiness intact. Saying yes at the wrong time can be saying no to finishing the project at all. It’s the right, and in a sense “nice” thing to do for everyone involved.
If you’re a chronic people pleaser, this kind of work can be extremely taxing (speaking from experience).
More to the point, it's much better than delivering incomplete yet working products quickly and regularly, complete yet broken products quickly and regularly, complete and working products slowly and regularly, or complete and working products quickly and irregularly.
> That said, some devs interpret "quick" to mean "sloppy", which is not correct.
Totally agree, but I've seen some devs completely melt when we need to go fast and iterate. They really do produce sloppy work when pushed to go too quickly. Many "big thinker" types are like this. Not defending it, just an observation.
With a good sales team, you can land customers who don't yet see all the features they need, because you're routinely delivering a solid if simple product that gets major new functionality a few times a year, with the promise that if they just hold on, their feature will get to the top of the list.
Those paying customers are where your MRR comes from. Another case of perfect being the enemy of good.
As a customer of startups, I really abhor this attitude. Especially when it lingers for years after the initial product launch. Which is shockingly common.
For example, partially complete features are often completely abandoned for years and years because they're "good enough." Unfortunately, good enough often means unhappy customers actively looking for new competitors.
One company in particular I absolutely despise because of this. They are pretty much only focused on releasing new features. The old features, while functional, are in desperate need of improvement. (In many very obvious ways.)
This particular company had a good IPO. Their stock then dropped 90%. I wonder why.
> They are pretty much only focused on releasing new features.
My guess is that behavior is NOT motivated by the theory "deliver early drafts with more smaller iterations", but is instead motivated by the theory "the people who decide to buy our products do it based on feature checklists, rather than quality of anything."
These are actually not even aligned theories. The "smaller iterations with early drafts" approach is best done with fewer features included, not more features that are incomplete.
Many markets incentivize companies to deliver crappy software, and it is frustrating, but it is not, I think, the fault of an agile/iterative/deliver-early-and-often approach. If you can make the most money by delivering a giant list of crappily implemented, poorly thought-out features that don't fit well together, you'll tend to do that, regardless of your theory of project management.
Having been on the other side of this - oftentimes our users are unhappy with the unfinished features, but our customers are delighted.
That is to say, the CxO or director we've sold to has everything on their checklist and is getting "good enough" results out of their organization. Our job is to understand which of the unfinished features will cause grumbling and which will motivate users to convince an executive to switch to a competitor. It's very unusual for the former to ever be worth prioritizing.
Yeah, it is the sad but true state of affairs. Bad for users is OK as long as it makes money. But without pleasing the people you sell to, how do you compete?
I think you misunderstood.
His point is that the user who uses the product can be different from the one who decides and pays for it (often the case in B2B).
So you actually do please the people you sell to, just not this specific type of user.
> Unfortunately, good enough often means unhappy customers actively looking for new competitors.
Here's the thing though - it's your intuition that customers are unhappy, but the startup in question has actual data. It's entirely possible that a small group of people similar to you are unhappy but for a majority of the userbase the feature has gone as far as it needs to. Resources are scarce and priorities need to be changed, which sometimes means making hard decisions.
We don't know that, actually; you assume that the company has the data, and speculate that they interpret it correctly. But in practice, "customer loves using the app" and "customer puts up with the app but will replace it as soon as they find anything else" look the same right up until they don't, just as "high interaction" presents the same data as "app is disorganized and inefficient".
Usually the company is quite aware the feature is half finished, but the data shows that hardly anyone uses it, and hardly anyone is asking for it to be improved. So that time would be better spent improving things that there is lots of usage on.
There are multiple ways to interpret the same data. Your interpretation is a valid one. There are other valid ones all from the same data.
If your company sees the simple lack of feedback as a valid signal, then a feature with lots of usage but no feedback means you should not improve it but simply leave it be. It's used, so it seems popular and nobody complained about it, so it must be good enough. Build something else.
Of course your company may also view a simple lack of feedback as not enough signal. If a new feature that is half finished sees hardly any usage this can be taken as a signal that the feature needs to be improved. It's not useful enough in its half finished state to attract usage. Or your users may simply not be able to find the feature because in its half finished state it's too hidden and thus you have neither lots of usage nor feedback on it.
I find it so weird to look at the stock market to approximate this kind of metric. Stock prices have more to do with things like the ebb and flow of the risk profile of institutional investors than with customer satisfaction with specific features.
Institutional investors care a great deal, if indirectly, about metrics like customer retention, because it influences customer lifetime value, which then impacts profitability, all of which depends on customer satisfaction.
Groupon, for example, has decent consumer metrics, except they couldn't keep their business partners happy, which resulted in their ultimate collapse. Finding weaknesses in companies' business models like this can both be extraordinarily profitable for institutional investors directly and create an aura of competence to attract more investors.
Theoretically. But then sometimes they're just switching from equities to bonds or whatever. My point is just that there is no simple function from customer satisfaction to stock price.
There's a joke on econ twitter whenever there is a big move in some individual stock, that the explanation is that the move clearly happened because the expected value of future cash flows changed. The joke is that under the efficient market hypothesis that's always the explanation, but in reality we all know a bunch of other stuff is going on all the time.
I agree with your point in general, a stock doubling or getting cut in half doesn’t necessarily have any obvious explanations. However, stocks aren’t a pure random walk, they are somewhat bound to the underlying business even if you don’t have enough information right now to understand what’s going on.
So, the kind of extreme stock shifts like dropping to 10% of a previous valuation are much more likely to have an understandable cause even if the trigger is random.
I'm in a cynical mood, but I'll begrudgingly accept "somewhat bound to the underlying business" :)
I guess at the root of my skepticism is that so many "growth" stocks never pay any dividends, so it's unclear to me what the connection between the stock and the company's cashflow is even supposed to be. If a company never returns any of its profits to its investors, isn't it just kind of a gentleman's agreement to pretend that the traditional way of valuing the company's equity still applies? It really does seem to me that the market for many stocks has detached from the company's business, and is instead driven almost entirely by competing memes.
Companies are ultimately controlled by their shareholders, no gentleman’s agreement needed. If that price falls far enough corporate raiders are happy to chop up the company for a quick buck.
Not really though. They're controlled by their executives, who can theoretically be removed by their boards, but often with significant difficulty. And the connection between the board and the shareholders at large is also more tenuous than I once thought.
It's true that a dropping stock price can lead to a takeover and new management, but again, that can happen even to a well-managed, financially strong business that has just lost the narrative game.
That's the only point I'm trying to make, that in theory stock prices are driven by financials, but in practice it's a mix between financials and narrative, and I think narrative dominates more than I was taught back in economics classes.
Narrative dominates day to day, but it's also very easy to overstate its importance. In a bounded random walk, the bounds and the randomness don't have consistent impact: in the middle of the range, randomness completely dominates what comes next, and at the edges, the bounds completely dominate the randomness. That's IMO a better model of these things.
Take money, say. Second by second, the value of USD is determined by people's perception. However, people in aggregate are required by law to pay a fraction of US GDP in taxes based on the value of stuff besides money, like millions of cars and cans of soup etc. That relationship means that, without printing new money, the value of all USD in circulation must be enough to pay taxes with, or you get the monetary equivalent of a short squeeze. That represents a bound, unlike say cryptocurrency, which can actually fall to zero even as the economy continues normally.
I bring up the tax angle specifically because it’s normally irrelevant but changes the behavior at extremes. People tend to think of economies as fragile things because even minor changes have large implications, yet Ukraine’s economy continued even in the middle of an invasion and massive migration etc. Stocks seem divorced from reality up until the point where fundamentals matter.
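To make the "bounded random walk" picture concrete, here's a tiny Python sketch (my own toy model with made-up numbers, not anything from the thread): far from the bound, the next step is essentially pure noise, and once the bound is breached, the pull back toward it swamps the noise.

```python
import random

# Toy model of a "bounded random walk" (illustrative only; every number is arbitrary).
# A floor at `bound` stands in for fundamentals, e.g. the tax-payment demand for USD,
# or raiders buying a company below breakup value. Above the floor the daily move is
# dominated by noise; once the floor is breached, the pull back toward it dominates.

def step(value, bound=100.0, pull=0.5, noise=5.0):
    gap = bound - value                      # how far the value has fallen below the bound
    drift = pull * max(gap, 0.0)             # the bound only acts once it has been breached
    return value + drift + random.gauss(0.0, noise)

value = 150.0
path = []
for _ in range(250):                         # roughly a year of daily steps
    value = step(value)
    path.append(value)

# Mid-range the series looks like pure noise; dips below 100 get pulled back quickly.
print(f"min={min(path):.1f}  max={max(path):.1f}  last={path[-1]:.1f}")
```

The `pull` and `noise` parameters are invented for illustration; the only point is that one process can look noise-dominated in the middle of its range and bound-dominated at the extreme.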
We'd need a lot more information about this one particular case, but I'd bet almost anything that a 90% drop in share price was caused by something a lot deeper than customers being unhappy about some unfinished features. I intentionally didn't address that half of his comment for that reason.
That's reasonable, though I am not suggesting a pump and dump is only going to show up in the code base, just that it would also look like what they described.
Having the data isn't enough; someone also needs to look at the data and draw conclusions. If it's an environment focused on pumping up metrics for the newest thing then this may not be happening.
It's interesting that people almost never disagree with this when they're the customers. We love seeing progress, even if it's a small portion of what we need. When frequent updates show that progress is slow, we're glad we're aware, and we don't blame the updates themselves for slowing the project down. In the rare cases where we feel like the process demands too much of our attention, we don't stress about it; we just say, "Looks great. Let's talk again in two weeks or when you have completed feature X."
Those hypothetical drawbacks only loom large in our minds when we're the ones doing the work.
What about those invisible people who gave it a try and decided there were just too many gaps and bugs? They are not customers anymore and probably never will be.
When the only way to get feedback on something is to go live to random customers, yeah, that's a difficult decision. I was thinking of situations where you are getting feedback on pre-launch software from a customer or from your own product organization, or you have an established product and some of your customers opt-in to beta functionality because they want the opportunity to influence your direction. I agree there are situations where the worse mistake is giving people a bad first impression.
I once worked on an in-house team at a company where we were tasked with developing a prototype in a scripting language with a well-known web framework.
Meanwhile, another team at a consulting company was tasked with developing the final product in Java, all very enterprise.
The outside company ended up with a failed project, missing deadlines and going over budget multiple times. The prototype became the product.
Let me guess. Someone used JSF and Hibernate in the same project. That is an easy way to blow up development time by a factor of 5x to 10x for literally no benefit.
If they find the appeal to authority useful, I'd grab them a copy of _Software Engineering at Google_ and leave them a bookmark in "Help Me Hide My Code".
This is interesting and I agree with it. I think the problem is that it's not necessarily irrational insecurity. In computer science especially, people can be very pedantic and quick to call out an individual or shut down the work of someone who is learning but is not yet on the same level of skill. If nothing is being hidden and all code is made public, there is a big window for such criticism.
The thing is that, often, a developer already knows how to make their code better, solve their problem, or get rid of some unnecessary abstraction. They just don't have the time to do that. They don't have time for perfection. So when their work gets criticized pedantically, even though they are already aware of all those things, it must be very frustrating.
I certainly know what it's like to have the urge to rewrite something and I may on paper already know exactly what needs to be changed. Having something you know is imperfect out there feels stressful.
> The way I see it, if the problem is important - any early solution should provide some relief. If some initial relief isn't wanted, the problem is probably not important.
I experienced this today, just now. The whole org's release for fall was hinging on what I proposed… The relief of everyone once the presentation was over and it all clicked, when they saw the light at the end of the tunnel and that the path I had uncovered towards it is reasonable and tenable, is priceless.
I understand now that I live for this shit. 10/10 would do it again and again and again.
It's also dramatically easier to have a discussion around what does or doesn't work with a concrete example. "This but with X" or "This but without Y".
I've found this also helps just internally - start with a prototype and you have engineering discussions that are more practical than theoretical.
> ...and this was maybe 1/3 of the effort I was planning for.
Quick anecdote. When just starting out, I was building a tool to automatically find the composition of essential oils used, given a GC/MS output, and was told it needed to be "fast". I spent a long time optimising it, working on parallelising it and more. After getting it down to maybe 15 seconds, or a couple of minutes, something like that, I spoke to them and was getting ready to explain why I couldn't make it fast.
"So you said you need this to be fast to be worthwhile, but"
"[intejects] yeah being fast is important, if it takes more than a day or two it'll be a lot harder to use"
> speed is about two things: more, smaller iterations, and confirming you're working toward a desired outcome.
Well put. Too many managers and execs use the word "speed" but never define it in a way others can act on.
Scenarios in real life:
Coder: "what's important while we're working?"
Exec: "Speed." _feeling_cool_
Coder: "I authored features a,b,c,e,..x,y,z ~ already working fast, gimme a raise!"
Exec: _realizes none of those features were desired objectives_ ...
Or
Senior software engineer: It'll take 9 months, 5 devs, and $200k to run...
Everyone else: ... not even sure business will be kicking in 9 months ...
I think even talking about the draft gives you an idea about the severity of the problem. My experience has been the same. If the problem is severe enough a crappy draft will do. Conversely if someone is asking for multiple refinements even to arrive at basic decisions, I begin to write off the problem as not critical.
I really like the framing of "providing relief". I think it also makes sense to allow the definition of who experiences the relief to be flexible (e.g. in some cases the audience that experiences relief might be developers, security engineering, SRE, etc., not just customers...)
I'd rather name it an MMP - a minimal marketable product. It was good enough that it worked, and the buyer (in this case the stakeholder) was happy enough to "buy" it.
There's IMHO a slight difference between MMP and MVP in the perspective one takes.
MVP can also be minimum viable proposition which is a similar idea. An MVP of this nature can be a chat in the pub and an email order of whatever was promised!