This list is missing the most important one in my experience.
Stop trying to do so much.
I was on a team that really got good at pushing back on aggressive estimates. You wouldn't believe how much more we got done and the things we worked on were of higher value when we stopped agreeing to add more to our sprints.
It really helped that we got good at doing retros, where we looked at what we said we'd do vs. what actually got done, and said, well, next time it looks like we can only agree to X% of what we think we can do.
Like, I know the phrase "slow is smooth, smooth is fast"; it's worked really well for me when doing physical tasks. This was the first time I'd experienced it for mental tasks as well.
As a solo dev, "slow is smooth, smooth is fast" coupled with ruthless focus and prioritization has resulted in the biggest increases in my productive output.
None of the OP’s tips for increasing product velocity apply to my company. The essence of product velocity reveals itself when N=1.
Focus on a few things.
Don’t rush, do them well. Say no to everything else.
Forgive me. I thought I was being clear that I was talking about throughput.
My team, when we started to get better at not overcommitting in any given period was able to produce more countable work. Not just more story points (inflation is super common) but more actual deliverables. It was weird, because we all felt like we were working less, we just thought we had to be honest about our commitments. But when you looked at the trend line at the end of the quarter, there was an increase in what we were able to do.
I don’t claim to know why this works, but it’s such an old phenomenon that there are lots of sayings that hint at it. I know that when I’m personally not overcommitted I have more capacity to focus. When I have fewer things on my plate, I context switch less. But I think something else happens as well. When you say no to things, it forces others to be more deliberate with their asks. They stop asking for low value things.
I'm on a 4 dev team (one of many). 3 of us commit to only what we are pretty sure we can do. One of our team commits to way more than they should. Guess which of us is always carrying over stories and whose new functionality creates new bugs every sprint? They never can wrap stuff up completely; they are like a tar ball of chaos. Unfortunately they are supposed to be our tech lead, so all we can do is shake our heads and watch.
Seems likely that a big reason for the increase was unseen, but cumulatively huge. When everyone is not rushing so much, fewer interruptions happen. Each is tiny, but the cost of context switching is cumulative and even nonlinear. You're benefitting from bending that in the good direction - reducing thrashing.
The first thing mentioned in this article can be summed up: minimize pauses. He calls it "eliminate dependencies" but it's really about finding out where work stops and needs to wait for something.
This is so important. It's one of the key tenets of agile (not only software agile, but real world agile too) as embodied in the TPS (Toyota Production System).
Lean software development has a lot to say about this too - check out anything on value stream mapping and if you haven't already, try "Principles of Product Development Flow". It's dry but good.
Yeah. Sleeping during a marathon defeats the purpose of a marathon, which is to test your superhuman ability and see if you can run without resting or stopping.
It used to be common wisdom that for ultramarathons you needed to sleep. People thought that you could not run for days non-stop. And then Cliff Young proved that theory wrong. And now they run non-stop, at least for the Sydney to Melbourne Ultramarathon.
Lots of good points in this article. One big factor I didn't see mentioned is cycle time when developing or debugging. How long does it take to run some code to test it or reproduce a bug? This is especially relevant these days since a lot of modern architecture trends like serverless, k8s, or distributed microservices can be much tougher in this area than a more traditional monolith.
There are workable approaches but they often take some legwork to get working well. Having an easy way for any engineer to quickly run and test code, especially when jumping into an area they are unfamiliar with, can save an immense amount of time.
Yeah, I think container-based dev envs (devpods / codespaces and the like) are going to make more and more sense for a lot of companies, with "tighter feedback loop for interacting with prod-like env" as the primary reason.
And yet, I think all us engineers know that time matters immensely for the business. There has to be some balance struck, otherwise some engineers will build crystal castles that never ship or take months for relatively simple features.
It depends on the definition of "relatively simple features"
Mismanaged startups can have a lot of technical debt as a result of pivots, shifting priorities, weak engineering and an emphasis on being feature factories. As a result you can get to a point where even a relatively simple feature takes months because of massive technical debt.
Shipping early and often while giving engineers the space to tackle debt is important. Good hiring pipeline for competent product focused engineers can help mitigate building castles in the sky. That and/or having a good CTO.
Excuse my facetiousness, but I don't think time matters at all for businesses. I don't think I've ever been in a spot where time really mattered. It's always been about people's perception of time mattering. And by people I mean upper management.
Realistically, if you axed everyone except counsel from the executive team, your business would survive. If you have paying customers, they will continue paying for the product, regardless of whether X or Y feature that customer research (and championed by some goon) says is a !MUST HAVE!
The only time time matters is in startups, where you will run out of money unless you can find a sucker to give you more (whether that be VCs or customers!).
Time matters for every endeavor because in the end we're all dead. Every single thing I've done at a company, time mattered. Paying customers churn. Competitors come to market with features. Employees get recruited away.
All of this happens as time goes by without you doing anything.
I have. And never has the urgency been warranted in retrospect. It's always been a reflection of the anxieties and worries of the CEO/president/whoever is in charge, than it has been reality.
It's an unchecked personality disorder at that point; and I want no part of it.
business people need to be able to make decisions and they do that based upon timelines.
If you're a small to midsized company these decisions have larger effects on the overall health of the company.
I've seen too many instances of technical people abusing business ignorance of technical details. Most of _my_ experience has been business people trying really hard to work with the technical groups, but the technical groups just don't want to be held to any timelines whatsoever.
You can't have that in the business world. That doesn't mean all deadlines are hard, but if there isn't some timeline then business decisions may as well be made by throwing darts at the wall.
Urgency is different from shipping small and often.
Best team I ever worked on consistently shipped at a very high pace without needing a false sense of urgency. We worked hard upfront to cut scope and ship only features we thought were valuable.
I think urgency is a management foot gun more than anything else.
I've had bosses where everything was always on fire and extremely urgent and needed to be done the day before yesterday. The result is that you don't take what they're saying seriously, and when something actually is extremely urgent, you're like "yeah I'll do that next week when I'm finished with all the other urgent tasks you've given me"
Beyond that, constant panic mode is also incredibly shortsighted and a recipe for crippling technical debt down the line.
Yeah, we can use that World War Two general's (Eisenhower's) important-urgent matrix.
Any object (a task to get done eventually) gets assigned a vector of two components: important and urgent. Things that are both important and urgent land in one corner of the grid; things that are neither land in the opposite corner.
The other two corners (important but not urgent, and urgent but not important) are what beget the need for a measuring system to make the whole model useful. Maybe scale each component to a numerical value, so that a new task becomes, say, an object whose vector is 0.5 important and 0.1 urgent.
I don't know what mathematics and analyses this setup will imply. So any mathematician or numbers nerd is invited to help further develop this abstraction.
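A minimal sketch of that abstraction (the thresholds, quadrant labels, and task names here are invented for illustration, not from any standard model):

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    importance: float  # 0.0 .. 1.0
    urgency: float     # 0.0 .. 1.0

    def quadrant(self) -> str:
        # Classic four-quadrant split, thresholded at 0.5 on each axis
        hi_imp = self.importance >= 0.5
        hi_urg = self.urgency >= 0.5
        return {
            (True, True): "do now",
            (True, False): "schedule",
            (False, True): "delegate",
            (False, False): "drop",
        }[(hi_imp, hi_urg)]

tasks = [
    Task("fix prod outage", importance=0.9, urgency=0.9),
    Task("write new docs", importance=0.5, urgency=0.1),
    Task("reply to vendor email", importance=0.2, urgency=0.8),
]
for t in tasks:
    print(t.name, "->", t.quadrant())
```

Swapping the hard 0.5 cutoff for continuous scoring (e.g. sorting by a weighted sum) would be the next refinement the parent comment is gesturing at.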
I've worked with toxic teams and I've worked with productive teams. There can be an overlap, but one does not indicate the other. Working with respectful like-minded highly-motivated and capable people and building things quickly is the opposite of a toxic workplace.
There's a massive difference between "productive" and "urgent". I've worked on teams that were constantly in a hurry, I've worked on teams that got a lot done, and there's been basically no overlap between the two.
I think the key, as it is in most cases, is the "why". Sometimes there really are urgent deadlines that you really need to hit. Imagine you're building an app for the Super Bowl - if you miss that date then the entire thing is a waste. Other times the deadlines are made up or self-inflicted.
Understanding the difference and communicating which is which are one of the things that make a good manager/exec. If you're constantly telling the team "this feature needs to be done yesterday" you'll lose their trust very quickly. If you're good about saying "this thing is important and we can forget the rest" or "I know we don't do this much but this date is really important for <good reason X>" then you really build trust with the team.
Most engineers are smart enough to understand when dates are bullshit and when they aren't. If you push a lot of bullshit dates then you won't get the good effort when you really need it. If you don't trust the team to work hard without bullshit dates then you should find a new team.
This is a great article. I wish it had author attribution though - it's credited to Stay SaaSy and this page https://staysaasy.com/about.html says "we" a lot, but I much prefer to know the author of a piece rather than see it credited to a company.
For those of you who have been on well-functioning teams that use stories (like Jira) and PRs, I'm wondering how you incentivize small PRs.
Our org traditionally estimates a product-oriented story with points, and then has one feature branch, which goes through QA and merge, at which point the story closes.
I think this causes all sorts of distorted effects, in that it simultaneously causes larger PRs, and also disincentivizes responsible coding practices that could grow the PR even more.
As these stories are product-focused, it's hard to break down the story into smaller stories, as this involves anticipating the tech work before starting, and makes it confusing to product folks if the stories start getting less product-focused.
I suspect it would be better to allow multiple PRs per story, or to otherwise push back on product? Should stories always be sized to accommodate one small PR?
Depends a lot on the domain. In some domains it’s possible to slice even product stories super small. This usually takes a lot of skill from the person writing the stories. But if you can get the typical story size down to “about half a day” it handles a lot of this problem.
What I usually do instead is multiple PRs per story & hide work behind a feature flag or something until it’s ready to show people. This doesn’t require a fancy feature flag framework or anything, just a hardcoded Boolean or code comments.
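The "hardcoded Boolean" version really is that simple. A sketch (the checkout functions are made-up stand-ins, not from any particular codebase):

```python
# A hardcoded flag: flip to True once the feature is ready to show people.
# No feature-flag framework needed; delete the flag when rollout is done.
ENABLE_NEW_CHECKOUT = False

def legacy_checkout(cart):
    return {"total": sum(cart), "flow": "legacy"}

def new_checkout(cart):
    # Work in progress: merged to main, deployed, but hidden behind the flag
    return {"total": sum(cart), "flow": "new", "upsell": True}

def checkout(cart):
    if ENABLE_NEW_CHECKOUT:
        return new_checkout(cart)
    return legacy_checkout(cart)

print(checkout([10, 5])["flow"])  # "legacy" until the flag is flipped
```

This is what lets multiple small PRs land on main per story without users ever seeing half-finished work.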
> I suspect it would be better to allow multiple PRs per story
Yes. Stacked PRs are one way to accomplish this. Another would be to have the dev write smaller tickets for PRs, and then PRs are attached to tickets from the tech team.
Anything that relies on product understanding how to write a ticket that is the right size & scope for the dev team to create good PRs is doomed to fail.
- Have engineers write most of the stories, as they are better at slicing things down to PR size.
- When product writes the stories, make sure we align on engineering approach as part of the pointing process. If something should definitely take multiple PRs, we call it out then.
- Heavily use feature-flags so that it's easy to have more than one PR for a story, as deployment to prod and release of a feature are decoupled
- Have a culture where it's totally cool to ask someone to break up a large PR into smaller parts.
Stacked diffs sound like what I sometimes do with PRs, where I branch off a previous branch; merging/rebasing from main would cascade through. It's a drain on QA though if they feel like they have to manually QA each PR, as opposed to testing the entire bundle of PRs at once.
My advice to you (at least what's worked for me on several reasonably well-functioning teams using stories and Jira in a similar manner as you have described:
Decouple users stories (customer/product outcomes) from tasks (units of work needed to achieve those outcomes). Jira is designed pretty well for this, since you can have sub tasks attached to user stories.
This works better when your user stories _are_ actually defining outcomes -- for example when you have stories like "User admins can filter jobs by category" and not "Build a category filter for the job search." The first can usually be successfully defined by a few acceptance criteria, whereas the second starts to get weird since your focus is more on what it will take to build the thing vs. what is the result you want your customer to see.
With your outcome defined in the story you can define any number of implementation tasks it will take to achieve it. If you're a cross-functional team and you do "vertical" instead of "horizontal" splitting then you're sure to have a few coding tasks on the front end ("add the filter component to the search bar", "update the backend client to pass the category id as a query parameter") as well as a few on the backend ("update the API to accept the category parameter", "add the category param to the repository query service"). You probably have some non technical tasks too ("update the help documentation", "update the OpenAPI spec").
We almost never (save for absolutely trivial user stories) have a PR attached to the story but instead have PRs attached to sub tasks. If you're doing trunk based development with continuous deployment and feature flags, you can and should be shipping many PRs. Just yesterday I was in the middle of a task at the end of the day and decided to cut the PR where I was and split the task in two on the fly, since it was easier for me to ship the code like that and easier for my team to review it.
We do story grooming and estimation and all that -- but only for user stories. Tasks are the domain of the humans doing the work and they are meant to be flexible and even disposable. We usually have a session at the start of work on the user story when the engineers working on it align on the solution and then break the work down into sub tasks, but these naturally evolve as the work progresses. I should add that multiple tasks invite collaboration instead of one story per engineer.
Lastly, I've found you have to preach the virtues of small PRs to your team and usually convert a few stragglers who don't see the value. I try to practice what I preach (i.e. by keeping my own PRs small) and also make a big deal out of it in retrospectives -- i.e. point out how painful the review process is with large PRs, usually entailing many rounds of comments and changes -- so that people quickly become believers if they are not already.
As a last point I try to encourage the value of "PR reviews are your top priority at any given moment" since every second a piece of code sits unreviewed adds to your team's cost of delay. There's a virtuous cycle here where smaller PRs lead to less painful code reviews lead to greater willingness to spend 10 minutes (vs. an hour) doing a code review, which helps really get PRs moving through.
I think PR stacking is great but I also find it's not as important if your PRs are getting reviewed and approved faster than you can write the code for your next PR.
(I didn't realize I'd write so much here, I forget that it's actually kind of a big topic that's built on a variety of different practices that all start coming together at some point when you get in a groove.)
Thanks! How do you integrate QA and merging with that? Do you QA and merge each subtask PR independently? Or do you leave all subtasks unmerged in favor of QA'ing the entire story, at which point you merge all PRs?
This is where feature flags come in. When you use feature flags to support development you ship code straight to production — but your work is hidden behind that flag while development is under way. So in practice you merge your PRs immediately to master (and deploy).
Essentially you’re decoupling release from development here. This supports any number of QA practices. (We don’t have dedicated QA at the moment and instead have a biweekly “mob QA” session where we do a group deep dive into our current work.) We will capture most small fixes and improvements as sub tasks on the appropriate story (or file a bug ticket if the story was already done and we discovered a new issue.)
As a result of the above we don’t use long lived feature branches which become painful and slow, process wise. We just merge immediately after review. (Unmentioned but this is of course supported by automated testing and continuous deployment.)
Thanks, this is super helpful. For us, we are pretty heavy with manual QA, and for work that can't be wrapped in feature flags, we'd still have to test a fair amount before merge, but at least it wouldn't have to be for every PR.
For post-release QA, at what point do you close the story? After all the subtasks are merged/deployed? Or later, after QA is done and they turn on the feature flag?
What a great practical article! I deeply resonate with the incident metrics section and the questions you need to ask yourself - I'd add another...how long does it take to get it running on your local machine?
If you can nail this, it massively reduces the friction between someone mentioning a bug, you reproducing it, and then deploying a fix. If you need to run 10 micro-services to work out why a button isn't registering a click event, that kind of stuff is going to sap the energy of your engineering team and the trade-off isn't worth the effort.
One tactic that worked really well while I was a coding engineering manager is to have engineers in opposite timezones that don't mind sharing git branches.
We have a small team of 4 engineers (1 in Seoul, 1 in NYC, 2 in Brazil) + myself (CA) and we shipped a brand new product line with paying customers for our startup in a few weeks (subsequently added more features). I correlate that strongly to us having a 24hr engineering cycle and us being willing to pick up where others left off.
The small team acting as a startup really resonates with my experiences.
Reminds me of the Carmageddon[1] post-mortem where they noted that having management in London and all developers in Sydney was perfect as everyone was at work at the same time...
Haha I (Europe) once worked with an American company who had the development center in India.
My experience with that setup was - quite the opposite to say the least.
We did not have direct contact with the Indian team so we had to speak to the US team to ask them to do stuff in India. But if they had questions... ... ... ...
Have small PRs, pick good tech stacks and stick with them for all your products (I’d recommend using TypeScript / next.js / tailwind.css / node.js), don’t overly write automated testing for things that don’t have revenue yet, and fire anyone who doesn’t get work done or lacks alignment.
The title would better represent this approach by appending “…at any cost.”
This isn’t a judgement, but clarity of intent can help everyone make decisions relevant to the effect that such an approach would have on them. Elaborating on risk is usually conveniently omitted, because it gets in the way of reality.
I'm pretty sure you're just joking, but in reality, this would probably increase velocity by a bit at start, only to grind down to a negative value within a year.
My top X whatever ways to increase product velocity:
- Stop wasting time estimating
Planning poker is like kindergarten, there is a parent figure asking the children how many candies are needed to build a tree house... and nobody does anything useful with these "velocity points".
Things will be done when they are done. Business types nosing around and trying to pressure the devs into providing an estimate that's more to their liking is more disruptive than helpful. Guess what? Things will still be done when they are done. Just throw the estimates out of the way.
- Throw standups out of the way
Or at least throw the non-technical people out of the standups. Nobody benefits from those but micromanager types or useless middle manager personas like Product Owner and Product Managers.
- You know what, just drop all the scrum nonsense
It's busywork for no gain. It's loved only by people who can't contribute to actually building anything, so they need something to do to justify their salary. Drop this nonsense and watch delivery speed up. Treat your developers like grown-ups who can achieve their goals without a fresh-faced college BA graduate who happened to have done the 1 week of Scrum training.
- Tests are important
Don't go all cult-y on TDD or whatever. If you add a new feature, at least its happy path has to be tested, preferably with as little mocking as possible. Spawn that database, spawn that redis or whatever, make that http request. It will make future changes much easier.
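A happy-path test against a real dependency can be this small. A sketch using an in-memory SQLite database (the user functions are invented for illustration; the point is there are no mocks in sight):

```python
import sqlite3

def add_user(conn, name):
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()

def get_users(conn):
    return [row[0] for row in conn.execute("SELECT name FROM users ORDER BY name")]

# Spawn a real database (in-memory here; in CI this could be a
# containerized Postgres) and exercise the actual code path.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

add_user(conn, "alice")
add_user(conn, "bob")
assert get_users(conn) == ["alice", "bob"]
```

Because the test talks to a real database, a future change to the schema or the SQL breaks it immediately, which is exactly what makes future changes easier.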
- Limit concurrent work in progress. In a team of five or six, don't allow more than 2 big tasks at the same time
Give your guys the ability to team up on tasks and do them well until they are satisfied. People teaming up and doing common tasks is invaluable in creating a shared context, reducing the bus factor, and building inter-team-relationships.
If some sleazy management types are sniffing around like vultures trying to pressure your team into taking on more work in parallel, show them the door, and if they keep insisting, give them the boot.
- Invest in good monitoring
You gotta know who uses your shit, how they are using it, and at what kind of volume. Every usage feature gotta have a counter and some extra tagging for context. Every REST call needs to have a counter, latency, and extra tags. Have a dashboard per app, and preferably maintain the dashboards/monitors with some infra-as-code like Terraform. If management insists that there has to be a PO or a PM, give them access to these charts, and show them the door for everything else.
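In production you'd use something like Prometheus or StatsD for this, but the counter-plus-latency-plus-tags idea fits in a toy sketch (all names here are invented for illustration):

```python
import time
from collections import defaultdict

class Metrics:
    """Toy in-process metrics: counters and latencies keyed by (name, tags)."""

    def __init__(self):
        self.counters = defaultdict(int)
        self.latencies = defaultdict(list)

    def _key(self, name, tags):
        # Sort tags so {"a": 1, "b": 2} and {"b": 2, "a": 1} share a series
        return (name, tuple(sorted(tags.items())))

    def count(self, name, **tags):
        self.counters[self._key(name, tags)] += 1

    def time_call(self, name, fn, **tags):
        start = time.perf_counter()
        try:
            return fn()
        finally:
            self.latencies[self._key(name, tags)].append(
                time.perf_counter() - start)
            self.count(name, **tags)

metrics = Metrics()
result = metrics.time_call(
    "rest.get_user", lambda: {"id": 1}, endpoint="/users", status="200")

key = ("rest.get_user", (("endpoint", "/users"), ("status", "200")))
print(metrics.counters[key], len(metrics.latencies[key]))
```

The tags are what make the dashboard useful: you can slice the same counter by endpoint, status code, or customer instead of maintaining one metric per combination.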
- Don't over rely on clients feedback, especially if you think you know better.
If you have feedback buttons and you are getting some suggestions or whatever, don't over-rely on them indiscriminately. Most humans are dumb, and half of them are dumber than that, and their feedback can often be senseless. Have a damn vision for the product you are building, and the spine to stick to your vision. Politely disregard any feature suggestions that you don't think fit the overall product vision. By that, I mean the devs make the decision together, not some PO.
> - You know what, just drop all the scrum nonsense
I pretty much agree with everything you said, aside from this. I think retros are pretty good: the team can come together, say why certain tasks took too long, and figure out solutions. But the rest of the scrum stuff is pretty garbage, especially cross-team demos.
> Create a culture that favors begging forgiveness (and reversing decisions quickly) rather than asking permission. Invest in infrastructure such as progressive / cancellable rollouts.
What a terrible idea. How do you make sure no one breaks the product on Friday evening? Especially with non-reversible db migrations or other funky things?
It’s pretty easy when the people who break stuff also need to fix it. Nobody wants to ship on a Friday just for the sake of shipping. They just wait until Monday.
I think it partially depends on what you are willing to forgive. For example, I am much more willing to forgive eagerness compared to carelessness or impatience.
One thing to consider is that moving to a culture of "permissiveness" won't stop carelessness. In fact, it helps to embed it. People learn that they can offload being careful onto the one giving permission.
Over time this leads to a worse situation since you end up having a lot of people who feel it is OK for them to be careless and you end up with one person (the one who can give permission) bearing the entire weight of carefulness for the entire org.
Point taken! I have yet to find a path to distributed ownership in my team, but I lack the right resources imho. Can't go into detail, but we have special issues that are probably not widely shared in other startups.
I'd love that. And I'd love employees who take ownership and have a vision of more than a meter. Truth is, as a startup that nobody knows (stealth mode) in a tough financial market, you're not attractive in Germany (and it's ridiculously competitive already). I hope it gets better with more public visibility.
How do you make sure the gate-keepers don't do exactly that?
Because gate-keepers also fail once in a while. And the same things that help deal with their failures can also be applied directly to deal with failures from the developers.
It's a robust and very counterintuitive finding that quality verification of human processes does not increase the final quality. You still need quality control, but people will adjust for any universal checking you add to their work.
;-) Yeah, I do get that, but when there's no promotion of personal responsibility, chaos reigns. When everyone does what they want to do, or what they think is necessary, nothing is completed. This is from experience with different companies, organizations, teams, projects, and people. Of course, nothing is perfect, but without some structure, there is only communication mayhem ;-(