In my experience on small teams, a key motivator is shipping products and features. The sooner and more frequently you can do this, the more positive reinforcement there is for the entire team - not to mention reduced risk of something never making it to the target users.
Teams that have figured out the optimal way to get products and features into the hands of users as quickly as possible are the happiest teams, because it's evidence that they're working together efficiently to meet their shared goals.
A big condition for motivation, though, is that people are not pressured to rush features.
Being in a rush does not work long term, and being unhappy with the quality of the work is very demotivating and impairs the sense of meaning. Micromanagement does too.
Personally, the two-week sprints and daily stand-ups did this to me in my previous job, but I can see how they could be a source of motivation for others. So how work is organized and managed is really important too, and it depends on the people in the team.
I hate having to ship features when I just want to work on improving reliability, doing upgrades, improving DX and clearing all the crap out of the way that's preventing us from otherwise shipping.
For sure. For me it's about ratios though. It never makes sense to me when 100% of capacity is dedicated to features, features, features while sustained velocity is in the toilet and it feels like we're standing in an entire orchard of low-hanging productivity gains. I usually find that after about a year or so in a company it starts to click for people that if they just let me be more self-directed, stuff starts to improve for everyone at a faster rate and it's win-win. I know there are other devs like me, but I feel like it's not widely recognized that we exist and should be enabled to just do our thing. I don't know if you can relate, but maybe you can?
In this context, a completed upgrade or cleanup would count as "shipping".
The article's author seems to primarily emphasize serializing processes so that incremental progress is readily evident to everyone and the organization can reap early wins from the first-shipped results.
Shipping an 80% solution is ok... it leaves a bad taste, but it's "ok".
The problem comes when the next 80% solution is shipped on top of the previous 80% solution. That gets demoralizing after a while, when one looks at the stack of 80% solutions on top of 80% solutions that needs to get fixed up (but "we don't have time to do that").
One thing I tell our developers (and POs) is that we can only tolerate 80% solutions so long as they also produce 120% solutions (or give us additional time/scope for them) at approximately the same rate.
I don't think it has to be even, but there is certainly a ratio.
Sometimes you have to let developers overengineer something or work on something a little too much so they don't go insane and take you with them. Or quit and go somewhere else.
Few things affect consistent delivery as much as bad employee retention and burnout.
For me, deadlines are simply "someone wants this and this delivered by date X". I work on it for a couple of days and then I tell that someone whether it's reasonable, and if it's not reasonable, what they are going to get instead.
Most of the time deadlines are way off the mark. It also happens that tasks take 1/3 of the planned time; in that case, instead of over-engineering, I simply rest or study something new.
Over-engineering is a curse only if one can't stop doing it and needs an external stimulus to mark the feature as "done".
Otherwise I believe most programmers know when their work is finished and can be shipped.
Sure! I currently work at a place where there are deadlines and no sprints. It works very well for me, and for the company too apparently. There are customers who expect their stuff to be shipped and the product has a month by month roadmap.
Now, over-engineering is a risk but I think I'm lazy enough to avoid it most of the time. I probably have the opposite issue sometimes.
We do get things done without racing a deadline most of the time though, so working without deadlines is okay for a non-negligible part of the work.
We'd be dead if we didn't release. There isn't any investor's money to compensate.
What early Agile knew and Scrum has obliterated is that basic advice you got from your high school or college teachers: if something is important, you can't leave it until the last moment.
The counterargument about deadlines is always some sad sack story about shipping for the holidays or having tax software done in time for the start of the tax season. Those stories are all true, but they have fuck all to do with deadlines.
Continuous Delivery means that if you need something by Friday, the conversation is about the relative risk/reward of shipping the build we're going to do on Tuesday, or the one we did last Tuesday, or one from three weeks ago. Not whether the important features will be done by CoB on Friday. Because the critical bits of the important features were finished over a month ago and now we're just making everything pretty.
What happens is that "the work expands to fill the time" is not just a law that applies to developers, it also applies to management in spades. "Sure, we need this feature, but why don't you hold off on doing that while we do this other thing that feels important but only because I told someone it would happen and I'm too much of a coward to tell anyone that I was wrong about something, and I outrank you so you and your social life are going to pay for my mistakes, not me and mine."
The problem is that deadlines are often too tight because the person responsible is usually not the one who suffers the most. If this is the case, the result is low code quality and bad feelings because the developers are either overworked or feel guilty. Instead, it is best to create a priority list of tasks and review it regularly.
I agree deadlines are a necessary evil to ensure we hold ourselves accountable to both customers and internal non-eng stakeholders who need to know when things will ship in order to make plans around them and collaborate with us effectively.
One approach to develop a healthy culture around deadlines I've been thinking about lately:
Plan the deadline around the minimal lovable product, while spec'ing out the minimal shippable product, and ensure there's enough of a delta between the two so we can have a large degree of freedom w.r.t. scope.
Without this freedom to vary scope, _when_ (not if) our estimates fail, our only options would be to extend the deadline (defeating the purpose of setting deadlines in the first place if we resort to this often enough), or to burn ourselves out with overtime in an attempt to meet those deadlines (which is obviously unhealthy and unsustainable, and not even guaranteed to succeed).
I've worked with staff engineers (and been one) who have fallen into the same trap. Over-engineering is something we need to be constantly cognizant of, because it's often dependent on scope.
E.g. are microservices over-engineering? If you have 1 user now and only expect 100 total users over the next 2 years, probably. If you're running AWS, definitely not. Everything in between is a grey area that requires thoughtful architecture to decide on the right approach, and even very experienced people will make the wrong calls.
What motivates me is having a clear large goal (like, say, adding a new core piece of functionality to an app) and then breaking it down into small tasks that can easily be done in regular days of work.
You end up with an ever-growing list of "completed" tasks until you eventually reach a shipping point, and then you can have a celebration party or whatever.
The hard part is managing deadlines and expectations from then on with whoever wants the product. Good managers can make progress feel present and visible, which in turn makes speed feel predictable, and everyone can plan with realistic expectations.
What kind of micromanagement did you experience? Do you have an example?
I'm in a position where, as tech lead, I don't fully know whether I can trust my team (yet) to make the right decisions. One reason is that our product is quite new and we haven't discussed development principles yet. And some people tend to over-engineer stuff all the time. But I also don't want to micromanage.
Having to report every day during the daily stand up felt like micromanagement to me. But that's just me. Many people do seem to like this kind of daily routine, so your mileage may vary, obviously.
Having to fit tasks into two-week periods was a problem too, with all these ceremonial meetings where we end up having to make things up and which actually take a lot of time. Being able to give feedback is good, but I don't like being polled on this every two weeks in a far too long meeting, and waiting until the end of the sprint when something is actually wrong is not ideal either. We ended up merging shit code when tasks actually needed more than two weeks, and artificially splitting those tasks into smaller ones is a lot of overhead and can have a deleterious effect on the code architecture. I think we were also not very good at planning a bit ahead, so decisions needed to be taken and validated by the boss too often, which can feel like micromanagement too.
Trusting your team to make the right decisions is probably not an issue if you have meetings where you make the general "big picture" (design) decisions all together. That will probably help you notice that your team can make good decisions too. And since you do have the big picture, it's a good thing for you to participate in this. I think small decisions should be left to the developers though.
Not trusting they will do their job would be an issue.
> Having to report every day during the daily stand up felt like micromanagement to me. Many people do seem to like this kind of daily routine, so your mileage may vary, obviously.
Same here. I used to be in a team with daily stand-ups, reporting to the manager what work I did the day before, and they killed my will to work, consequently causing me to work ~3 times slower, which presumably is the opposite of the intended effect.
The problem is that a lot of developers (especially junior ones) make shitty decisions and/or just work slowly without the accountability of having to say what they worked on (and having to explain why what they said they would "finish today" 4 times already isn't yet finished for the 5th time). They are not necessarily bad developers; they just need the external motivator.
If you have 10+ years of experience and you repeatedly make shitty decisions or keep drastically underestimating your work, then instead of introducing accountability via daily standups, you can simply be fired.
In hindsight I think it is much more effective to pair them up with a senior developer instead of waiting for a pattern to become apparent during the daily scrum.
I'm not against stand-ups per se, just against daily stand-ups. Currently I'm in a team which does three a week. That's at least tolerable, but if it was up to me, I'd opt for once a week. Daily made me feel that I had no agency over my work, that everything to the tiniest detail needed to be negotiated with someone, most often the manager. Less agency means lower engagement in work. At least for me. I do know that there is plenty of other people who are different.
My standup messages are mostly "I'm working on adding feature X, it's going well", or "I'm fixing bug Y, and I wonder how to handle Z".
Usually that's it. Sometimes there are questions or discussion about details. It serves to keep the team aware of what's going on.
If your team is argumentative it can drag out and be a drain. That's when it is up to the manager to break it off and move any needed discussions somewhere else. If your manager is the argumentative one... you have a bit of a problem.
These stand-ups get in the way. I'm bored when I have to listen to what people have to say, or worse, what they are making up. I'm stressed by what I have to say or make up. They take time. They often happen at a time when my productivity would be at its best, or when I'm in the middle of something, or else I have to rush to be on time, or watch the clock when we're close and interrupt everything I'm doing. This, every single day. It's fine for meaningful meetings solving real problems, but stand-ups are not that kind of meeting for me.
I very much prefer not having a daily synchronization point, and instead giving status to relevant people when needed, or when asked (which is not too often; everyone can see which tasks are left for me in the bug tracker). If I'm blocked or if I have a question, I'll ask. If colleagues have a question, they'll ask.
We have flexible hours, we are not all in exactly the same timezone, and some people's calendars are already full of important meetings. Not having a stand-up to babysit, to schedule, to watch for, and to be stressed about is one less problem to handle.
I'm thrilled to be able to start my work day when I'm ready, without having to interrupt it for this daily thing that does not seem to bring much value in the end. I'm happy to be able to have an unproductive day and make up for it the next day without anybody noticing and without me lying about it, because in the end it's nobody's business and it doesn't matter. And the day is just a bad unit of time for development tasks most of the time.
When I had to attend daily stand-ups, indeed standing up (wtf!), I just had the impression we were (treated like) a bunch of children unable to be autonomous for a few days.
But then, it seems everybody in my current team is wired for working efficiently without this kind of thing, so it works for us. We are also good at knowing what people are up to by reading what they're discussing in the chat, and from the quick, big-picture weekly status update we have anyway. I don't need the details brought by the stand-up, and when I do, I get them via efficient communication anyway.
> Having to report every day during the daily stand up felt like micromanagement to me.
Agreed, and some teams go even further than that. I have to update statuses multiple times a day. If I don't, I get nagged about the status of this or that (as though I can do more than one thing at a time). I wish I could just say "I am doing A because I am waiting on B just like I was yesterday" and have it stick.
I almost want to make that painfully hackneyed “did we work together?” joke, because this was my last job. It was status-update overload.
Standups daily; then a weekly all-hands I had to prepare status updates for; then a weekly 1:1 with my boss that needed status updates; and then each week my team had to write up what we accomplished so the Director could send it to the execs for the weekly reviews.
The result? I was giving the same status updates to the same people upwards of four to five times in a single week. There were moments when I legitimately wanted to ask my boss, very flippantly, about his note-taking abilities, but that wouldn't have done much except land me in the management doghouse.
It was a source of annoyance with every other dev I talked to about it.
That's a very valid and widely shared perception of DSMs, and even of Scrum in general.
For some types of projects, and for some people, it just does not work.
And when it is nonetheless forced onto them (because for some others it works, or because company policy/management dictates it) it is actually failing and working against its own principle: agility.
Can you prove that their "overengineering" makes things worse? And not just in the short term, but also for maintainability down the road, accounting for their developer experience and job satisfaction, etc? Otherwise maybe it's just a healthy level of engineering, knowing the details that person knows.
The management advice I was given is that if you don't first trust people, they will never get a chance to show you that they are deserving of said trust. And showing that is allowed to take some time.
Another thing I try to keep in mind is that I might not always trust each individual in their decisions, but I always trust a team decision over my own. So when in doubt, involve more people on the team.
Of course you can't prove that these four extra layers of abstraction will never be useful, but they aren't right now. That's why people with over 10 years in the industry are valuable - they have the intuition to see this coming.
The dev took twice as long to build the feature as they needed to, and updates to the code also take twice as long. I have seen this over and over, and GP is correct - there are some engineers who need to be coached out of overengineering.
My point is not that overengineering doesn't exist. My point is that if Alice says Bob is overengineering and Bob says Alice is underengineering, you don't have any evidence either way. You need to loop more people in and let both Alice and Bob air their concerns.
Had the exact same experience in a previous role: pressured to rush a release to meet an (arbitrary) sprint deadline, and then - the deadline not met - the work was left unreleased. Super frustrating, and it almost caused me to quit at the time.
Not only shipping, but ideally shipping and receiving feedback about it. Feedback shows people care. It's not really a motivator to ship something that people don't care about, or worse, don't use at all. It's shipping useful stuff that motivates people.
Yep. And if nobody actually cares about the feature like everyone thought they would, great, we didn't waste time. If they hate it (usually discovered through feature flags or product trials) and work needs to be done, simply flip the flag back off.
No it's not. Everyone who loves feature flags has never done serious work. The amount of extra work / redundant code that has to be done to accommodate different DB schemas / DTOs in addition to the "lol feature flag" on the front end is absurd.
Note "serious work". If you are hiding a banner, an extra button that calls a service method used elsewhere, some text, etc - feature flag away.
Real-world problems tend to be, at the very least, "add some new fields to the form" - at which point you are messing with DB schemas OR following bad practices like letting fields that have no business being nullable accept nulls, because "who knows if the feature flag is on". This quickly destroys normalization and data consistency.
If you want to do rolling/canary deployments without downtime, or gradual rollouts to subsets of users, or a/b testing, you will have multiple versions running in parallel against the same database. Plenty of "serious work" happens that way.
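In practice the schema side of that is handled with the expand/contract ("parallel change") pattern, so the nullable state is transitional rather than permanent. A minimal sketch in Python; the `users` table, `region` column, and dict-based flag store are all hypothetical, purely for illustration:

    import sqlite3

    FLAGS = {"regional_pricing": False}  # hypothetical flag store

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

    # EXPAND: add the new column with a default so old and new code can both
    # write rows. The default is temporary scaffolding, not the end state.
    db.execute("ALTER TABLE users ADD COLUMN region TEXT DEFAULT 'unknown'")

    def save_user(name, region=None):
        # New code always populates the column; rows written by old code
        # simply get the default.
        db.execute("INSERT INTO users (name, region) VALUES (?, ?)",
                   (name, region or "unknown"))

    def pricing_region(user_id):
        row = db.execute("SELECT region FROM users WHERE id = ?",
                         (user_id,)).fetchone()
        # The flag gates behaviour only; the schema is the same either way.
        return row[0] if FLAGS["regional_pricing"] else "global"

    # CONTRACT (later): once the flag is fully on and old rows are backfilled,
    # make the column NOT NULL, drop the default, and delete the old read path.

Both code paths run against one schema the whole time, which is what makes gradual rollouts workable.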
Same problem. Someone I used to work with wrote an article about it: you run into the same problem with A/B tests, canary deployments, etc., and need to account for different DB schemas.
This is the thing people miss with Agile processes. Agile without shipping is merely wasted churn. Nothing is more demotivating than wasted churn. Agile requires progress. How can everyone learn and re-evaluate if you always stay in the same spot, just turning in circles?
This is why when leading an engineering org I narrowed down to one metric:
Committed to Completed Ratio
It encourages completing things and only committing to what you can complete.
It's legible, easy to understand by engineering, product, and management, pointless to game, and positively reinforces completion. It's quite difficult to accurately predict when a complete feature will ship, let alone a whole product. But, if one breaks down a feature into its legible constituent components and then commits to completing just what is within that (which also necessitates any required communication to arrive at understanding the requirements) then over time you can get quite good at predicting what you will be able to get done in a sprint and better at only committing to what you believe you can actually complete.
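As a sketch of the bookkeeping (reading the metric as completed over committed, so 1.0 is ideal; the sprint numbers are made up):

    # Committed-to-Completed Ratio over a few sprints; all numbers invented.
    sprints = [
        {"committed": 8, "completed": 5},
        {"committed": 6, "completed": 6},
        {"committed": 7, "completed": 6},
    ]

    for i, s in enumerate(sprints, start=1):
        ratio = s["completed"] / s["committed"]
        print(f"sprint {i}: {s['completed']}/{s['committed']} = {ratio:.2f}")

The trend toward 1.0 has to come from committing to less and finishing it, not from padding, which is why the commitments need to be legible, reviewable units.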
It is a rookie mistake to think that programming is splitting a big task into smaller tasks that are split into even smaller tasks, and that once the process is done, you can use mindless drones to complete these microtasks.
It assumes perfect knowledge and a fixed volume of work.
In practice, you don't know how much work might be required (a small task may blow up). It is inevitable that sometimes you have to drop features to ship on time. Programming is an iterative, probabilistic process.
It is also a rookie mistake to think that just because some tasks might blow up, you can't accurately estimate a large class of them to within a reasonable confidence window (say, within one day 99% of the time), and likewise identify those tasks with a large chance of blowing up. Breaking down tasks into smaller tasks seems to help build this skill faster in some people.
If you don't want to be a mindless drone, you should also see the need to practice skills beyond just cranking out code.
It may also be a mistake to assume that it's always possible to break down a system into logical parts and simplify that system until it fits your complexity budget prior to development without knowing the upper bound of your problem's complexity.
Some things still have to be discovered before they can be measured.
Your job as an engineer, broadly, is to bring order to the disordered. You only rarely have enough time to fully understand the problem before trying to solve it.
The real, fundamental mistake being danced around here is the failure to recognize the inherent tension between product and engineering.
Stop fighting, start harmonizing. Understand how you fit into the bigger picture, and “deadlines” vs. “complexity” will start to make more sense.
It’s obvious that you aren’t going to make money off of nothing, so you have to build something, and it’s obvious you can’t build something out of thin air, so figure out how to build small, specific things that people will pay for.
While I agree that my job is broadly to bring order to the disordered, I would prefer if the disordered accepted that my proposed way of doing things will eventually achieve order without doubting me every morning, as it is after all my job to find the optimal algorithm to order things since I have myself optimized myself to find just that.
I don't want to fight, I'm simply unable to process your disorder in real-time.
Ah, you've already fallen into the trap; it's not about the "optimal", it's about the "functional".
Very, very few people are paid to find the "optimal" anything. The trust you're looking for can be found once you recognize what it is you're actually being asked for (again, rarely 'optimal', just 'functional').
We just got started and while I still can't find any obvious deficiencies with your thinking, I find you're investing far too much energy in telling me how I should work and not allocating nearly enough energy into describing what your problem is.
A standup is not "doubting you" and five minutes once a day is not "real time". It's rather the opposite; everyone is trusting you to raise issues when relevant, and making space to do that, rather than letting anyone interrupt anyone else anytime they think of something.
It can go wrong if you have shitty teammates, a detached PO, a selfish team lead, etc. So will everything else.
If it's really 5 minutes a day and if you're really satisfied with every monkey nodding once, then sure, this ritual can be accomplished with a moderately-sized team. But please provide data that shows that this has ever been achieved organization-wide anywhere with more than 10 employees. Seriously. I need to know.
Otherwise stop polling devs once a day when they already said the earliest you'll get anything is 2-3 days. They literally will quit over this in the long run and it's the simplest thing you can do to stop losing devs over communication issues between your team members.
They cost enough per head and you want to literally start their day with a reminder that they've unwittingly joined a cult to pay the rent?
Our 7 person standup takes about 30 seconds if nobody is blocked or has any questions. Most days it takes longer because most days at least 2-3 developers want to say something.
If your team lead (or god forbid somehow a PO is present) is lecturing or questioning individuals to report about specifics during standup, I'm sorry you have a shitty boss.
What's the point of having a standup for devs while at the same time assigning a manager to study and organize their JIRA entries? Wouldn't it be better to simply route all notifications through the manager, 7 to 1 (or 6 to 1 if you can find a dev who can reliably manage the communication network between your devs) and then just route the information into JIRA or upstream towards the bosses?
I've never had a case where it made sense to wait until the next day to bring something up to my organization and I've never been to a planned meeting where someone didn't get abused by management for only bringing up the issue at that time.
But it doesn’t need to be possible all the time, just often enough to set bounds acceptable to the rest of the organization. And how are you going to know unless you’ve already done it a hundred times, and/or try?
The problem with acceptable bounds is when they're applied uniformly to all your code monkeys without adjusting for experience and mental state at any point in time and we can't sample those values in real-time.
Performance metrics are often too noisy to be useful so proper bounds are too difficult to set without the "experience" of seeing the team work over a very long period of time.
Predicting the future of a high entropy system is predicted to always be hard if you're aiming for 99% accuracy. Solving for hard problems AND solving for this particularly hard problem on a daily basis is less energy efficient than if you stop trying to predict the future at every standup and trust that your average best guess while ignoring the future is good enough to keep you going until the end of this task.
TL;DR: Spend less energy sampling the efficiency of human creativity and spend more (but still not too much) on removing barriers that limit creativity.
What pop nonsense. An unpredictable software engineering team is itself a barrier limiting the creativity of the product team (and vice versa). If you don't want to be treated as a crank to turn, you also need to bring some self-reflection, compromise, and willingness to communicate to the table.
Picking the 99% or top decile or whatever is not padding, it's the actual job of estimating. Anyone who talks about "padding" rather than uncertainty doesn't understand estimating yet.
A product team without an engineering process can reliably do nothing (but not vice versa). If you don't want to be treated like an optimization that only makes sense in a large money-printing machine, you need to stop optimizing for money-printing.
Communication isn't just about gracefully accepting expected input and feeling good about getting it. It's also about reliably figuring out the logic behind unexpected input to gracefully bridge the gap.
So hacking means creatively staring at unpredictable systems until they make sense. You can timebox how long you can afford to stare at it and then incrementally review whether your staring method should be improved or whether it makes more sense to do something else after the timebox but you shouldn't poll the state of the world every day.
TL;DR: Stop opening the oven door every 5 minutes, you're losing heat every time. Muffins don't like that.
It does not matter that you can estimate 99% of [simple] tasks. The total time is dominated by the complex (big-uncertainty) tasks that you can't predict or estimate reliably (think of a lognormal distribution).
Programmers are good at automating predictable, boring tasks. If you are competent, there will always be unpredictable elements in your work.
Software estimation is similar to the coastline paradox if you don't know in advance how small your measuring stick should be. The smaller the stick, the longer the coastline can get (fractal nature). Taking a big software task, splitting it into several smaller tasks, and using that as an estimate is like drawing a square on a map and relying on it as a good estimate of the coastline's length at any scale - which is wrong if the coastline is a fractal (if the software task is complex). https://youtube.com/watch?v=I_rw-AJqpCM
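A quick numerical illustration of the "dominated by the tail" claim, assuming task durations are lognormal (the parameters are arbitrary, just chosen to be heavy-tailed):

    import random

    random.seed(42)
    tasks = sorted(random.lognormvariate(1.0, 1.5) for _ in range(100))

    total = sum(tasks)
    top_decile = sum(tasks[-10:])  # the ten biggest tasks

    print(f"total schedule: {total:.0f} units")
    print(f"largest 10% of tasks: {100 * top_decile / total:.0f}% of the total")
    # With sigma this large, the biggest handful of tasks typically carries
    # around half the schedule; estimating the easy 90% accurately barely
    # moves the overall number.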
I think the purpose of this ratio is that a predictable low is better than an unpredictable high. It's sort of the same idea as padding out an ECD. Better to sometimes impress than sometimes disappoint.
It is better. If you have a predictable low rate other people can still plan their schedules around you, while you figure out how to speed up / scale up. If you have an unpredictable rate, you block everyone else (and often yourself).
That's a fair criticism for certain contexts. Most people I've worked with desire to do well and to feel good about their accomplishments. In those sorts of teams, this works. It would likely not work as well in an environment where people were just hoping to do the minimum possible.
Wouldn't you prefer (committed + 1) / completed? Or (committed + 2) / (completed + 1)?
In the limit, you want this to approach 1 (namely, you complete more or less everything you commit). A bad situation is where you're overcommitted and have many committed things, but few completed ones (e.g. 10/2), which results in a high ratio. But, with your proposed metric, you'd achieve the asymptotically ideal ratio with 2 committed projects and only 1 completed one, and in fact the global optimum is "I promised nothing and did nothing, yet achieved a ratio of 50%".
Haha, you're right, I meant `(completed + 1)/(committed + 2)`. A value close to 1 means you completed a lot of what you committed, and a value close to zero means you completed a very little. I stole it from Laplace's rule of succession.
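For concreteness, a tiny sketch of that smoothed ratio and how it handles the degenerate cases from the comment above:

    # Laplace-smoothed completion score: (completed + 1) / (committed + 2).
    def completion_score(completed: int, committed: int) -> float:
        return (completed + 1) / (committed + 2)

    print(completion_score(0, 0))    # 0.50 -- no history yet, neutral prior
    print(completion_score(1, 2))    # 0.50 -- tiny sample, still mostly prior
    print(completion_score(9, 10))   # 0.83 -- solid record, approaches 1
    print(completion_score(2, 10))   # 0.25 -- overcommitment keeps it low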
I liked the ideas I read about 'kanban'. If there's a feeling that work flows well, then there's less concern about when things will be done. If you can maximize flow, or minimize work in progress, a kanban board should try to be a visual aid to support that. -- "committed / completed" sounds along the same lines to me.
In that sense, I think "it's desirable, but not in progress" isn't necessarily bad in itself. I think it would be better phrased as "this isn't a priority for us right now".
But how do you encourage people to take on difficult tasks? It is easy to commit myself to correcting a particular spelling in a label by tomorrow. But to fix the nondeterministic bug that crashes our top client's server every now and then?
You could commit to something like "investigate the logs for two hours" or "inspect the memory dump for an hour". From there, one can add more detailed tasks.
Nothing. There is also nothing to discourage people from taking forever. This metric worked well on my team until I got people who gamed it by pulling easier tickets and then sitting on them for a week. We still use the approach though. No approach survives a bad team. I could implement draconian rules, but that would just hamper the productive people.
Though if you are interested in a good book on a loosely related subject, I highly recommend "The Principles of Product Development Flow" https://amzn.to/3Ox4PgB
This article is very thorough, but makes one fairly big mistake: it assumes there is low variability in task size.
This could be a reasonable assumption if you're very good at adjusting scope, but I would be skeptical unless you have measured it and have the numbers to prove that in practice, tasks indeed turn out to be roughly the same size (±20 % or whatever -- I don't know the exact threshold).
Why this matters is that under cheap context switches and even modest task size variability, it's -- surprisingly -- a performant queuing policy to pre-empt the currently processing task to handle every incoming task. The intuition behind this is that if a task is still processing when a new one comes in, it's likely to be a "big" task, and it's worth letting small tasks ahead in line. Reduces mean response time.
Of course, this could all be moot anyway because context switches are certainly not cheap for software engineers. I just thought I should mention it for nuance. Rules of thumb only get you so far, at some point you have to more accurately model the situation and simulate.
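For anyone curious, here's a toy single-server simulation of that policy under the stated assumptions (Poisson arrivals, heavy-tailed lognormal task sizes, zero switch cost); all the parameters are arbitrary:

    import math
    import random

    random.seed(7)
    N = 20000
    LAM = 0.5   # arrival rate
    RHO = 0.8   # target utilization

    raw = [random.lognormvariate(0.0, 1.6) for _ in range(N)]  # high variability
    scale = (RHO / LAM) / (sum(raw) / N)     # rescale so LAM * E[size] = RHO
    sizes = [s * scale for s in raw]

    arrivals, t = [], 0.0
    for _ in range(N):
        t += random.expovariate(LAM)
        arrivals.append(t)

    def fifo(arrivals, sizes):
        """Run each task to completion, in arrival order."""
        t, total = 0.0, 0.0
        for a, s in zip(arrivals, sizes):
            t = max(t, a) + s
            total += t - a               # response time = finish - arrival
        return total / len(sizes)

    def preempt_on_arrival(arrivals, sizes):
        """Every new arrival preempts the running task (zero switch cost)."""
        stack, t, total = [], 0.0, 0.0   # stack holds (arrival, remaining)
        for a, s in list(zip(arrivals, sizes)) + [(math.inf, 0.0)]:
            while stack and t < a:       # run the top until the next arrival
                arr, rem = stack[-1]
                if t + rem <= a:         # finishes before the arrival
                    t += rem
                    stack.pop()
                    total += t - arr
                else:                    # preempted at time a
                    stack[-1] = (arr, rem - (a - t))
                    t = a
            t = max(t, a)
            if s > 0:
                stack.append((a, s))
        return total / len(sizes)

    print(f"mean response, run-to-completion:  {fifo(arrivals, sizes):7.1f}")
    print(f"mean response, preempt-on-arrival: {preempt_on_arrival(arrivals, sizes):7.1f}")

With numbers like these, the preempting policy's mean response time typically comes out several times lower, because small tasks no longer wait behind rare huge ones; with low-variability task sizes the ordering flips, which is exactly the "measure it first" caveat.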
Speaking to the task-size point, I’ve found (via leading a number of teams and training many more team leads) that it’s possible to decompose any task into smaller tasks so that every task is within a small constant factor of the others.
If that seems impossible for a given task, maybe because it’s particularly complex or has lots of unknowns, then no problem: simply create a “spike” task to research or prototype whatever you need in order to decompose the big task into appropriately sized chunks (for your team’s idea of appropriate).
So you end up with a bunch of tasks that are all roughly the same size, or spikes of similar size, and you get the very nice predictability and productivity the author is speaking about.
(btw, I also agree that context switches are never cheap for software developers, so it’s better to let them finish whatever they’re working on whenever it’s not too unreasonable to allow it)
This has been true in my personal research too. When projects are sufficiently decomposed, a "ticket" about implementing part of a project is generally speaking within the same order of magnitude.
However, this article is not talking about "tickets" but about the entire project. So whether or not you can break the project down into roughly constant chunks is irrelevant to the article's argument.
What matters is the size of the full project that you need to see through from start to finish in order to get business value out of your work.
(Yes, you can usually break down the project into smaller chunks too, so that you see business value quicker. However, these are rarely of constant size anymore, in my experience.)
you have to be careful about pushing out an early MVP for "business value". my company tends to do this but the MVPs are so far from complete that I'm skeptical we're even getting good data out of it
This assumes that tasks can be labeled small or large before starting them, which goes back to a mistake similar to the one you pointed out: our estimates need to be accurate and high-confidence. Task estimates have high variance, and some tasks will turn out to be inordinately complex in ways that aren't clear at first. The second assumption is that all tasks have the same priority and equal utility.
Neither holds for software tasks. The assumptions behind the queueing policy are not true here.
One of the neat things about the "always pre-empt the currently processing task" is that you don't need to know the size of any task! All you need to know is that it's variable, which you can find out from past data.
I think those initial diagrams/explanations miss a key detail. The longest step in cooking burgers is the grilling step, in which the cook is just waiting for the burgers to cook, and won't speed up the process by giving attention to one burger instead of 2. The idea that 3 burgers would take 12 minutes while 1 takes 4 isn't realistic, even without all the batch processing math later in the article.
I still agree with the idea teams should do one thing at a time in general, but if you are going to do anything in parallel, it makes most sense to do other work when waiting for something to complete (as in the grilling step in this example)
There's a book, Everything In Its Place by Dan Charnas, which does a good analysis of the analogies between cooking and office productivity.
Basically it recommends doing the "process" tasks first, the ones that the rest rely on. In this case, it would be grilling burgers. In the office, it might be something like assigning tasks or approving PRs.
Charnas recommends that planning starts by "first things first" as opposed to say, the Tim Ferriss, Brian Tracy, or Eisenhower methodology of most important things first. And often the first thing is to figure out which things come first.
And yeah, the book also recommends finishing where possible. A dish 90% done might as well be 0% done, but it occupies mental space while it's not done.
That's a great read, although I remember it being published under the title "Work Clean". A cursory glance at Amazon.co.uk suggests the two titles have identical text; however, the Kindle edition of "Work Clean" is half the price of the one re-issued as "Everything In Its Place", for anyone else interested...
This was the most glaring flaw of the post. They almost got there when they started talking about the “transaction costs” but failed to acknowledge idle times.
To drive this home to software development, if I’m blocked on a requirements clarification, awaiting permissions to a data source, or blocked by a bug in an external system it certainly doesn’t make sense for me to just do nothing. I should pick up another task and work on that.
It’s going to be near impossible to limit WIP substantially below the level that keeps personnel nearly fully committed. That means that as external blockers go up, so will WIP. It’s important to try to fix the things causing blockers (better up-front requirements, automated permission processes, etc.), but different teams and organizations will have different fundamental limits on how much people sit idle on projects. Some software product feature teams can keep this very low: a strong requirements process is established, the data needed is consistent and owned by the team, and reliance on 3rd-party systems is low. Teams dealing in enterprise IT systems (like mine) often can’t; as much as we seek to improve things, there is a degree of irreducible complexity.
Strongly agree. I try to limit my WIP tasks to around 8, and even then it's a constant struggle to keep at least one unblocked.
The burger shop analogy seems a little nonsensical too. Most of the time it takes for a burger to be made is waiting for the meat to cook. Conveyor grills exist, it's rare for grill space to be a limiting factor. So, burgers absolutely are assembled in parallel, it's a good example of the opposite strategy to the one the author is advocating.
Now we're really taking the analogy apart until it's no longer reasonable, but I'll continue:
Sometimes it makes sense to stand around waiting for the burger to cook. Maybe all you have to do are low-value burgers and it's nice to maintain an idle chef for any high-priority burger tasks that come in.
Or there are important non-burger tasks that need to be done but are easily deprioritised in favour of burgers.
Where it really falls apart is that a restaurant has a predefined menu, and no items are allowed to have unknown complexity (someone asking for items not on the menu just gets a “no”). Estimates are based on actual measurements, not guesses.
There will be accidents, but those will be rare enough to not have to be planned for.
In that respect, a restaurant queue is a lot more akin to a factory, and is fundamentally different in nature.
I just read The Goal, and that video has translated one of the lessons from that book, which is about industrial parts manufacturing, into the Agile software dev world.
Fact is, a team of five people working on five independent tasks may be able to proceed at high throughput precisely because they avoid the cost of coordination that comes with assigning multiple engineers to the same task.
That is, if you have chef A making a burger, and chef B making a salad, and chef C making a cake, then they can all go at full speed because they're using different parts of the kitchen and don't need to coordinate with each other much. Individuals pay for context switches, but teams can assign different tasks to different team members.
Or maybe I don't understand the point the article is trying to make here. It seems really obfuscated with metaphors involving burgers.
The problem becomes a lot more difficult and interesting once you start involving multiple people.
Should everybody have many or exactly one task assigned? Should one person be on stand-by for operations? Should tasks be paired to two developers? Whole team mob programming one task? Are all the tasks truly independent?
Should juniors on the team have the same WIP as seniors, or is the limit counted for the team as a whole? I see some teams almost give juniors a backlog of their own with the most basic cookie-cutter tasks, even if they are low in prio, just to get them on board. Other teams handcuff them to a mentor doing normal tasks. Vice versa for tech leads, who often have another set of tasks only they do.
If everybody is working on a task, then who will review others' work? This can end up in a priority-inversion situation: if Alice finished working on a high-prio task, should Bob, who is working on a low-prio task, drop his work to review Alice's, or should he finish his existing work to stay within his WIP limit?
The article relies on a non-sequitur assumption that you should work on an entire feature start-to-finish without interruption.
What management ought to do is split feature work into tasks of the smallest possible granularity, then schedule only those of highest priority. This is what actually reduces batch sizes. Then deciding to prioritize the work to finish a feature over new work on a different feature becomes a matter of discipline, not process. This is important because it allows management to stop throwing good money after bad on features that the business decided it doesn't actually want anymore.
If Development doesn't deliver business value because Product can't stick to a coherent feature strategy, that's Product's fault, not Development's.
> What management ought to do is split feature work into tasks of the smallest possible granularity, then schedule only those of highest priority.
I would say this is pretty standard, and personally I really hate it. I think it works well if you want to prioritize hitting a date above all else, especially with a junior team or in a "low trust" environment (cheap off-shore team). But for other metrics, I don't think it's great.
In my experience it results in a lower product quality, people do the bare minimum to finish a ticket and throw it over the wall. Since tasks are so split up, things don't end up connecting coherently. The problem you're trying to solve for the user gets completely lost and you end up with a bunch of features that don't necessarily make sense.
In my opinion, it also really sucks for job satisfaction. It makes me feel micromanaged, with zero autonomy. But I have met people who love it, their reasoning being "I can just zone out and write code without having to think about other things". So that's more of a personal thing.
> If Development doesn't deliver business value because Product can't stick to a coherent feature strategy, that's Product's fault, not Development's.
While you might technically be right, that's not a great attitude to have.
I'm not arguing that Product should sit high and mighty and refuse to listen to Development. Great ideas can come from everywhere, including Development, of course. But understanding what ought to be built, at least for functional requirements, is fundamentally Product's job and their decision.
What great teams do is that they have a Product guy sit on the same team as Developer guy(s), precisely to avoid the malaise you describe.
I agree with the general idea, but I'm not sure putting development and product in an antagonistic relationship will improve things.
They're different professionals with different ideas of what's important. Someone in product is unlikely to understand the development cost of leaving something half-finished, because recognising it as a maintenance tar pit takes development expertise that can be hard to articulate (as expertise tends to be).
What's needed is close cooperation, not finger pointing.
I wish more people would read this. Both leads and devs. Finishing is the difficult part of software development. Conversely, opening new projects/features/bugs is very easy. Opening many tracks in parallel slows the entire team down to a crawl, to which the standard workaround is to simply work more hours or cut corners and take shortcuts, thereby increasing tech debt as you go. Fighting fire with fire.
I also find people twisting the definition of done to make it look as if something is over. Something being coded usually isn't done. It still needs to be tested, which is the labour-intensive part. Something being tested isn't done; it must be shipped and released to production. Code in production still isn't done: it must work and be performant enough, and might need more logs or more monitoring. Last but not least, it must get into the hands of the users, who give the final word. But even then it might not be completely finished.
I believe that those first explanations and graphics are missing an important aspect. The step in the burger-cooking process that takes the longest is the grilling step, during which the cook does nothing but wait for the burgers to finish cooking. The cook cannot speed up the process by focusing on only one burger at a time instead of both. Even without all of the math on batch processing that is presented later in the article, the idea that it would take 12 minutes to cook three burgers while it only takes 4 minutes to cook one is not plausible.
Do you think you're fixating on the burger metaphor, i.e. the type of task, and not letting the point of the article stay front and center? I think we can agree that not all development tasks are the same, but I think the premise of first-in-first-out is better than large batches of concurrent tasks, which I thought was the premise of the article.
A real world example of the waiting to cook metaphor is PR review and client/QA validation.
Imagine you’re working on an integration project where you use an API provided by a third party. There will be a point where you’ve coded a client and made the calls, and you’ll need their validation before moving that specific task further. And when they green-light your implementation, you send it to QA, which will review your feature in their next batch.
If you consider your task’s completion to be “have the service integrated in production”, you have at least two stopping points where focusing more on your task won’t help you go faster.
The optimal move is to plan for those and fill the gaps. It becomes trickier when it’s unplanned: you were expecting something to work, but you’ll need to wait for a bugfix that will only come in X weeks, for instance.
Though I agree with the premise of this article, I simply can’t get past his burger example. Let me explain.
Multi-tasking has exponentially greater impact on performance the greater the complexity of the task. Most people just don’t find themselves doing these kinds of complex tasks on a daily basis. This is why coming up with a simple example that lay-people can understand is so hard. Simple examples don’t pass their sniff-test. It is obvious to anyone thinking about it that I can make two burgers simultaneously and it only adds a fraction to the overall time. This is because the complexity of the task is extremely simple. In reality it would add seconds, not minutes.
However, once you ramp up the complexity of the task this penalty does approach 1 to 1, or greater. It is obvious to anyone that having to do a heart transplant while also doing your taxes at the same time would require longer than either task individually. An example like that is also easily dismissed because it is so far-fetched.
I find that to really impress on people the penalty involved requires an example that is personal to them. If you can find some complex tasks that they do even rarely and have them envision doing them simultaneously, then they are more likely to buy into the idea. Let’s be honest, most people don’t find themselves in this situation like programmers do because of the deep thinking that programming requires, but if you search hard enough you can usually come up with some personal examples for people. If the person you are trying to impress is important enough in your life (spouse, boss, etc), then it is worth the effort to find some individual examples.
I found the example used in an anti-multitasking book I read to be both simple and compelling. The idea is to write out all the letters from a to z and all the numbers from 1 to 26. Compare the total time between writing the alphabet first and then the numbers (abcd...; 1234...) vs. interleaving the two (that is, a1b2c3d4...). The separate approach wins by a large margin, and anyone can try it themselves.
"It's effortless to switch from working on one burger to another"
This is why software engineers should be banned from writing about anything outside their very narrow domain (and I say that as a software person).
There's context switching when working on different features, sure. But there's context switching when working on different parts of the burger, too. Slicing two buns is faster than slicing one bun, then coming back from doing something else and slicing another.
There's context switching in both scenarios. Heck, switching between writing a spec and working on the implementation is a context switch too. The switch between features is just generally more significant than the one between the different phases of working on the same feature compared to the steps of working on a burger.
Heck, this also completely ignores that the chef likely doesn't ONLY produce burgers and even when they do not every burger will necessarily be the same. In fast food restaurants you'll actually see chefs do some steps in parallel (e.g. slice all the buns) and some sequentially. Additionally they'll often optimize to finish all items in an order in the same timeframe -- in all likelihood you're not a complete prick and actually want your friends to be able to join you for lunch rather than staring at you for 4-8 minutes while you eat your burger all alone.
The analogy is not just simplified, it's completely fictitious to the point of bearing no resemblance to the actual process it uses as an analogy. It doesn't provide common ground by referring to something everybody knows; it requires actively ignoring what you may know in order to make its point.
As others have pointed out, the analogy doesn't even work for what it is trying to do. Tasks are heterogeneous (as are the features themselves) and some steps involve passive wait time (e.g. waiting for CI, QA or reviews, or waiting for the grill to do its thing to the patties). Religiously linearizing the process by isolating each feature may actually often prolong the overall time in these cases.
When someone uses a manufacturing job (burgers) as a metaphor for software development, you can tell the guru thinks it is menial labour rather than a knowledge-dominated process.
Move along, there is little understanding to acquire from this post.
I know right? Software development isn’t grilling burgers, it’s designing the grilling system and building the entire supply chain to get it shipped to the customer.
The article is actually about this! How manufacturing processes are not a one to one comparison with software work.
> The reason many people fail to acknowledge and act upon transaction costs in the software industry is that they compare software — a design process — to manufacturing processes.
I once worked in a VERY chaotic startup with near-constant pivots and priority shifts. In each case, at the time, the change in direction could be justified... but morale was chronically low, particularly among software engineers. I spoke to one who, after a year, mournfully revealed that none of the code he had written had ever shipped. He, and most of his colleagues, burnt out and dropped out pretty fast.
I just left an environment like that after almost 2 years. It got to me so bad that I can't even remember what it's like to work somewhere more organized, which is why I'm excited to start my new job soon.
That's because it chooses a bad example of an activity for which production-line methods make sense. Pick an activity (one you need to repeat many times) where each step requires quite different tools and skills that are difficult to master, and breaking it up into tasks for individuals who specialize in those tools/skills is almost certainly going to get you better throughput. Even in the letter-stuffing example, imagine that each step required the full, focused, two-handed use of a different tool: a) folding the letter, b) placing it in the envelope, c) sealing the envelope, d) stamping it. Having different people specialize in and handle each subtask is then likely to pay off, even if it means paying a bit of extra time to move the item between individuals. (Of course, if one of those subtasks takes vastly longer than the others, it may need 10 people on it vs. one on each of the others, but that doesn't change the principle.)
Obviously software development is very different - we're not asked to use the same tool repeatedly to produce the exact same result. Further, in our world a) switching between tools is usually relatively costless and b) developers are generally quite capable of becoming skilled with multiple tools.
But I'd still say some of the same principles can apply if those assumptions don't hold for whatever reason (e.g. switching tools means physically moving to a different piece of hardware in a different room, or there are obvious, measurable differences in your developers' skills with particular tools). Then splitting a task up so one part can be handled by one developer with one tool and another part by another developer with a different tool is probably going to get you a better result than asking one developer to be responsible for "finishing" the whole task themselves. Arguably the important thing is that they be on the same team and consider themselves to have shared responsibility for the task that's been split up. Which, interestingly, often isn't how teams are organised in software development companies, from what I've observed over the decades. Having an end-to-end feature be the responsibility of a single team, even if it necessarily requires multiple specialists, is probably not a bad guiding principle. On the flip side, many people aren't (for understandable reasons) keen on being shuffled between teams depending on what each feature requires.
One piece flow doesn't mean the same person does everything. It could mean that multiple people do their specialty, but instead of working at their own pace with storage between them, one passes their work product directly into the hands of the next when the next one is free.
I have done the experiment with the envelopes that way too with similar results.
Cool demonstration. If I understand correctly, the one piece flow process is better because it eliminates the waste of putting the item away and picking it up again. We've overlapped the "putting away" of one stage with the "picking up" in the next stage.
Based on the summary I think the author just uses an unfortunately overloaded term. The "time thieves" aren't supposed to be the employees but inefficient processes slowing them down.
This article makes some assumptions that could easily change the result. By the way, if you have a batch size of 1 you're back to the initial hypothesis which is obviously not how things work in the fast food industry.
I think it misses the point about idle time and availability. From queueing/supply-chain theory: if your resources are always 100% busy, your delivery time goes to infinity, because as soon as something takes a little more time than expected, you can never catch up.
This applies particularly well to software engineering.
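The textbook version of that claim, for a single M/M/1-style queue with service time normalized to 1, can be evaluated directly:

    # Mean queueing delay in M/M/1: W_q = rho / (1 - rho) service times.
    for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
        print(f"utilization {rho:.0%}: average wait = {rho / (1 - rho):5.1f}x service time")

    # 50% busy -> 1.0x, 90% -> 9.0x, 99% -> 99.0x: the wait diverges as
    # utilization approaches 100%, which is why slack is not waste.

(The exact formula depends on the arrival and service distributions, but the blow-up near full utilization is generic.)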
Most projects go though several design cycles, and this usually starts with a theoretical goal or a proof-of-concept requirement. I have found that identifying the “hard” features first, and defining fixed time-budget micro-projects or key unit-tests is useful. Primarily, this method quickly identifies if a project is even viable with the available team, resources, and knowledge base.
Saying “No” to projects which incur serious liabilities is just as important as minimizing project scope. Unconstrained Sisyphean commitments are a common feature afflicting those with Founder's syndrome, and can have a detrimental impact on projects as the intelligent begin to jump ship before it sinks.
Every firm that lives past 1 business cycle will have a project boneyard. ;)
The assumption here is that delivering the product is the finish line. That once you finish the burger, the person getting the burger is good to go and they can enjoy the burger. The problem is, restaurants don't work that way. If they delivered a single burger to a table, the guests would be upset and would wait. The person getting the burger would be waiting until everyone got their meal.
You need to deliver all the meals at once, and you need to prepare all the meals in such a way that they can all be delivered at once.
If you need to deliver 4 hamburgers, you prepare them in batches, and deliver them all at once.
Sending out the order in pieces makes the end result much worse for everyone involved. Which is why it's dangerous to look at delivering features as the end goal. You need to deliver a complete meal.
Sending out a single burger doesn't always make sense.
It is readable and simple to comprehend for engineering, product, and management; it leaves no room for gaming; and it positively rewards finishing what you started. It is exceedingly difficult to precisely forecast when a whole product will ship, much less when a complete feature will be released. However, if one breaks a feature down into its legible constituent components and then commits to completing just what is within them (which also necessitates whatever communication is required to arrive at an understanding of the requirements), then over time one can become quite good at predicting what one will be able to get done in a sprint, and better at only committing to what one believes one can actually complete.
Releasing is great, but the 80:20 principle means that finishing 80% of features and throwing away 20% is always much faster than expecting to finish 100%.
The difference between real-life projects and college projects is that often nobody has built the same feature in the same context before. This means there will be features that take much longer than expected and need to be shifted to the next release, or into a bin labeled "complex enough that we should focus on everything else".
If you do not understand why something like this happens, it may be that you are unaware of how much knowledge comes from exploration, as opposed to just releasing the product.
You just need one great product, and it is okay to throw away ten explorations to make it happen.
A key philosophy of mine is "Success begets success."
I'll set goals small, so success and completion are guaranteed, then raise the bar a bit on the next one, and so on.
Soon, success and completion become habit, for the team, and the results are amazing. I end up with a team that isn't arrogant, but highly confident, and quite efficient.
"Succeeding" also requires things like careful attention to detail, good testing (and fixing), documentation, and other "boring" stuff. Once that becomes habit, it's just "background noise."
It just takes time, and that is something that seems to be at a premium, these days.
But regular success just feels good. I recommend the practice.
It seems to me that actually, productive and predictable teams finish what they start.
Going through the whole explanation, the base assumption is that tasks are clearly defined, there are few surprises, and task pipelines seldom break in the middle because of external factors. Which is the hallmark of a predictable dev environment and, by proxy, a productive team.
All the team has to do to increase its output is stop doing tasks in parallel. Conveniently, there is no real explanation of the status quo; teams must have been really lazy and unable to think for themselves about why they were working that way.
This really feels like one more motivational preacher trying to sell that one weird trick to solve all your problems.
Don Reinertsen has some interesting arguments for all this based on keeping inventory of work in progress low, maintaining short work queues (amount of time things sit in an issue tracker), minimizing the number of things being worked on at the same time, and minimizing time to delivery (cost of delay).
Basically his reasoning is that overloading teams with more work than can be handled actually leads to delays and inefficiency. You get teams and developers waiting for each other and being blocked on each other. So, they start switching tasks and increase the amount of work in progress, which leads to even more blockage. It looks like everybody is super busy, but it's actually very inefficient. Before you know it, you have lots of half-finished things blocked on something and an issue tracker full of work that was specified months ago and probably isn't even valid anymore. People get stressed and deadlines start slipping.
All this increases the cycle times of individual work items. Reducing the amount of work in progress means things move through the system faster overall and fewer things get blocked. And when they do get blocked, it gets resolved faster. People also feel better because they actually finish things, and they get feedback faster, which is also a good thing. The counterintuitive part is that reducing the amount of work in progress increases overall throughput and predictability.
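Little's law is the usual way to quantify this (a standard queueing identity, not something specific to Reinertsen): average cycle time = WIP / throughput. A tiny sketch with hypothetical numbers:

    # Little's law: cycle_time = WIP / throughput. With throughput fixed,
    # every extra item in progress directly lengthens how long each item
    # takes to get through the system. Numbers below are hypothetical.
    throughput = 5  # items finished per week
    for wip in (5, 10, 20, 40):
        cycle_time = wip / throughput
        print(f"WIP {wip:>2} -> average cycle time {cycle_time:.0f} week(s)")

So halving WIP halves the average time from "started" to "done" even if the team finishes the same number of items per week, which is where the predictability gain comes from.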
I try to manage my teams and products accordingly by creating clarity on what exactly it is we are building from week to week and what the priorities are. I like having small teams or groups of people rather than big teams, because that allows me to have more than one thing being worked on. When it comes to product management, I don't like specifying too much ahead of time. There's no point in having months' worth of inventory of feature work that won't get worked on until months later. By the time work starts, half of it is probably invalid anyway, because requirements and priorities always change.
I went through the video. For the HP example, their core issue was a quasi-infinite backlog, because they would add to the pile regardless of their velocity.
This looks to me like a problem that is orthogonal to doing tasks in parallel or not: in the single-task-at-a-time model, the extreme case would be an employee stuck on a single task without any lever to unblock it. Both scenarios are a failure of task management, and I'm not sure one is obviously better or less likely than the other. However you organize your tasks, you still have to manage your flow one way or another.
My argument is that the amount of work in progress depends on the quality of your tickets, and not much on whether you allow "pipelines" or not.
> they start switching tasks and increase the amount of work in progress, which leads to even more blockage
Isn't your issue that none of the things in your backlog are effectively doable without blockage? Deciding to stick to a single task forever, or switching to many other ones, might not actually have any effect on the output.
I saw an egregious example of that in a contracting team, where one dev's job for a week was to go to division B and request that they unblock him. The rest of the time he was working on his personal project and reported "preparing the requirements for the next tasks of the sprint".
On switching tasks, I think it gets a bad rap because the worst cases are very simple to visualize: it's a pile. The reverse is nothing, and "nothing" isn't as exciting. The line doesn't go up.
It's a very common thing that I've seen in many projects. Bad inventory management, basically. PMs just build a huge backlog of stuff, and by the time people get around to it, it's months out of date and probably half wrong anyway. That's why waterfall is not a thing. Developers are not great at multitasking, and switching tasks carries the cost of a context switch. If you spread things out over time, you pay a lot of that cost, and you stretch the time to initial feedback on whatever it is you are doing.
The smarter way to manage projects is to deliver stories, design, etc. just in time to have them ready for immediate implementation and thus shorten the cycle time from coming up with a thing to do all the way to having the thing in production. That minimizes context switches and ensures everybody still has everything that was agreed in their heads.
I've been on projects where PMs had 10 sprints completely planned out and they'd be shitting their pants by sprint 2 because it wasn't working out as they wanted it. That's not agile, that's waterfall and it cannot possibly work. But it's what a lot of people revert to.
Reinertsen's point is that there are good, economically grounded reasons to do things differently, and they neatly align with a lot of intuitions developers have anyway. Queueing theory applies to all sorts of queues, including issue trackers and backlogs.
In the weakest definition of "finish what you start", it means "I cleaned my desk, I finished what I started" - but in reality, the only way for this to be "done" is to have no more work and no more desk. The work of keeping that desk clean never ends, and herein lies the problem we create for ourselves.
Take the burger example and put it in a system: you're on to the next burger, and some shmuck wants it without ketchup and with extra pickles. Then you have to clean up the burger stand, but one day's cleanup won't be the same as another's, and sometimes you may have to shut down to deep clean. It's never done until you stop making burgers forever and close shop.
For me, it's all about mindset. I find work rewarding when I never stop learning. The notion of "done" is what makes people crazy, because it's never about the work being done; it's about not wanting to do the work to begin with and not seeing any value in doing it.
My kids' rooms are messy. I say they're lazy, but really they see no value in the work - to them, it's a waste of time that keeps them from doing other things they'd rather do. I laugh when they say they're done cleaning their room, because without a doubt, not even a day later, it will be a stinking mess again.
Also, context switching... these discussions never make sense, because flow state is always discussed as an exception to the system. Your flow state is uniquely yours; the system you operate in is complex and dynamic.
I think all too often people put way too many words on paper trying to control complex interactive systems, when they should really make those systems safe and resilient: easy to monitor and observe, with the ability to anticipate what's next and the autonomy to make decisions.
That autonomy, decision making, and anticipation still require effort and work - and I'll be honest, some of the best flow state comes from safely operating in complex, dynamic systems while being able to mentally model your position in them and have empathy for others - realizing that your model is yours and others will have theirs.
I would like more people to read this - leaders and developers alike. The most challenging aspect of software development is finishing. In contrast, it's incredibly easy to open new projects, features, and issues. Opening numerous tracks simultaneously slows the entire team to a crawl, and the traditional solution is to simply put in extra hours or take shortcuts, which increases tech debt over time. Fighting fire with fire.
I thought that was obvious? The easiest way to completely destroy the productivity of a team is to constantly switch tasks and never release anything. The team will quickly realize that the work they do doesn't matter and will switch to pretend-to-work mode or find another job.
Kind of an "open door" of a title - a truism - isn't it? It's like: when you work a lot but never finish anything, you probably just did a whole lot of nothing.
If we were all just workers, it could work. In practice, I see a lot of the high-level ideas/strategy people starting new projects without finishing the old ones. They never bother with the details required to finish or complete something. Starting and not completing is fundamental to the division of labor in capitalism, for better or worse. Without it, there would be less labor extraction.
Who knows if anything is finished anyway? So long as you live you can continue and the definitions are always in flux / personal.
I believe the one who feels the strongest pull should work on manifesting a vision, not the one who had the idea; i.e. if someone says "it'd be nice to have cake" and another goes "omfg yes, that would be the best", then who should make it? They could do it together, but if the first person shops for ingredients and then decides they'd rather go to Hawaii for a bit, the second person can still make the cake and eat it without the world burning down.
Predictability is great in teams that know what they are doing. Oftentimes, in early-stage startups, there is ambiguity about what even belongs on the WIP list.
While it is optimal to minimise the transaction cost of context switching in that case, maintaining a constant throughput in that scenario would be suboptimal.