The most balanced and sensible paradigm I've come across is Basecamp's Shape Up[0].
I led an engineering team building validated systems that supported GxP processes in life sciences and found that a Shape Up-like process produced the best balance between delivering software that could be validated[1] and speed. I only came across it after several years of trial and error, but I was amazed at how it almost exactly described what we ended up doing.
A side effect is that work-life balance is amazing once you get this process working. If you know exactly how much work you need to do for the next 4-6 weeks and when it needs to be done, you can pace yourself much better. When you plan 6 weeks at a time, someone going out sick or taking some time off has less of an impact than when you plan 1 or 2 weeks at a time.
The customers and end-users know what to expect and when they'll get it; everyone gets more predictability in the process. Yes, it takes 1-2 weeks of planning at the outset, but we used that downtime for the engineering team to catch up on tech, build some experiments, work on side projects, or do research. (No refactoring since the validated process requires us to plan and document all code changes that make it into production)
I recommend it for any team that wants some sanity in their processes.
[0] https://basecamp.com/shapeup
[1] Validated in this context means that every feature, function, and infrastructure component can be traced back to a specification through the full lifecycle from design to install to post-install verification and operations.
There's a very practical reason why waterfall fails in most software development projects: in the waterfall model you do project planning and budgeting based on the specifications you write. I'm not arguing against the specifications other than to say that in a lot of cases in modern software development you haven't got time to write all of those specs to the proper level of detail. But then you go on to do a critical path analysis. That requires assigning people to tasks and having detailed task descriptions at a very early stage of the project. The odds of that being right are very slim. It can tell you at best whether your project is plausible. Again I have no problem with that. But once you start managing the project, the agile approach of assigning people to tasks in iteration planning meetings works a lot better than trying to stick to your painstakingly created critical path analysis. You also don't need highly detailed task descriptions and implementation plans until you get to your iteration planning meetings. Then they are crucial. But putting off that work until you need it saves a lot of time at project start.
This really depends on the industry and the domain experience of the team. The deeper the domain experience, the more likely you are to get it right. It also helps to limit the scope of what you do (a big part of the Shape Up process). It works well for life sciences, for example, where we're replacing a paper process with a digital process. Not so well for SV startups where first-time founders are trying to figure out product-market fit.
> ...saves a lot of time at project start
Often just shifting the work to a later cycle to get it right.
In life sciences, when supporting a GxP process, validation (which is rooted in documentation and specifications) is unavoidable. In that environment, figuring out how to move fast despite the constraints yielded a Basecamp Shape Up-like process.
I think every team that believes big-A Agile is The One True Way should take a look at Shape Up.
One of the bigger issues with Scrum-style agile is that very rarely does it reach the maturity where "everything is delivered within a sprint." Even if you complete all the cards without carryover or other issues, the product isn't necessarily shippable. So in practice you need some form of release planning, often quarterly to align with the business, that gets overlaid.
The big insight with Shape Up for me was that it really is more a change in how to approach release planning than in the sprint/iteration process. You really can ship an end-to-end feature for a wide variety of software products in a 6-week timeframe, and it is short enough to keep the focus lean and oriented on delivering outcomes rather than just shipping a code increment. Two weeks is too tight a timeframe to meaningfully pivot larger projects or make decisions about overall feasibility; there is always another two weeks to expand upon your sunk costs. But 6 weeks hits more of a sweet spot in being able to step back and make roadmap tradeoffs.
Not true. Delivery doesn't necessarily mean Release. Before the "Continuous Delivery" mantra took flight, Scrum et al came with Delivery Plans and, crucially as a separate artefact, a Release plan. This was specifically to address the nuanced distinction between a bunch of stuff being "done" and "releasable."
Now it seems everyone is unable to distinguish between Continuous Delivery and agility.
Even under Continuous Delivery there's still a distinction between "delivered" and "released". It might be delivered behind a feature flag. It might be delivered to a staging/release-planning environment of some sort. (CD doesn't always mean "completes all the way to Production each sprint". The original idea of CD was just focused on development and QA environments to catch what CI doesn't about the usability state of the application as different developer efforts merge together for the first time.)
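To make the "delivered but not released" case concrete, here is a minimal feature-flag sketch in Python; the flag name, the environment-variable store, and the dashboard functions are all hypothetical, and a real system might use a dedicated flag service instead.

    # Hypothetical flag check: the feature's code is deployed ("delivered")
    # but stays invisible to users until the flag is flipped ("released").
    import os

    def feature_enabled(name: str) -> bool:
        # Read the flag from an environment variable; a real system might
        # query a flag service or a per-tenant configuration table instead.
        return os.environ.get(f"FEATURE_{name.upper()}", "off") == "on"

    def render_dashboard(user: str) -> str:
        if feature_enabled("new_reports"):
            return f"<new reports dashboard for {user}>"
        return f"<legacy dashboard for {user}>"

    if __name__ == "__main__":
        # With FEATURE_NEW_REPORTS unset, users keep seeing the legacy view
        # even though the new code is already running in production.
        print(render_dashboard("alice"))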
How do you plan a project without the engineering teams being involved? As far as I'm concerned, both the developers and QA need to be involved from the beginning; otherwise you get people planning things that aren't realistic (or are a really BAD solution to a problem).
The planning period involved all key stakeholders and securing their buy-in. This included QA, engineering, and the customer.
In a GxP validated release, everything starts from the documentation and ends with a sign off by the QA team before it's delivered to the customer who does their own validation process. So it was absolutely critical to get agreement between the teams.
I was Director of Engineering and represented the engineering team. Over a 1-2 week period, the stakeholders would meet to discuss the objectives and we'd draw up a rough outline ("Business Requirements"). Then each lead would take that rough outline back to their teams to determine feasibility (for the customer team, it was to rank priority so that if we came back to make cuts, we knew what to cut). For engineering, we'd review each feature and break it down to determine whether we could do it all given our resource load and time frame ("Technical Design Specification"). We'd provide a rough idea of how we would implement it and consult with the QA team on how they would test it ("Functional Requirements Specification" and "Validation Plan") and whether there was some angle we missed.
But because a lot of this ended up being negotiations and documentation ("Here's what you want, here's how it'll look, here's how we'll build it, here's how we'll test it, these are the compromises we have to make for now, this sub-feature will be released next cycle"), I ended up doing most of that "dirty work" and only consulted the team when needed and to get their buy in. This left my team with plenty of time to play with that new framework or build a sandbox for some new tooling or build process and so on.
Even if you are a skeptic, I strongly recommend reading the Shape Up guide. It's very practical and prescriptive.
I have not run a project using Shape Up, but my knowledge of what it consists of leads me to think it doesn't directly address some problems agile projects encounter. Specifically, Continuous Delivery can have problems because it does not directly replace waterfall QA test plans. It works for a lot of projects because unit tests, improved linting, and leak-finding tools cover a lot of what used to go into a waterfall test plan, but capacity, security, and other sources of bugs are not helped as much by CD. It looks as if it can improve intermediate deliveries of a project, but there are still things agile abandoned without having a replacement.
Shape Up isn't about continuous delivery; it's the opposite.
> First, we work in six-week cycles. Six weeks is long enough to build something meaningful start-to-finish and short enough that everyone can feel the deadline looming from the start, so they use the time wisely. The majority of our new features are built and released in one six-week cycle.
> Our decisions are based on moving the product forward in the next six weeks, not micromanaging time. We don’t count hours or question how individual days are spent. We don’t have daily meetings. We don’t rethink our roadmap every two weeks. Our focus is at a higher level. We say to ourselves: “If this project ships after six weeks, we’ll be really happy. We’ll feel our time was well spent.” Then we commit the six weeks and leave the team alone to get it done.
It's 100% worth the read if you are deeply interested in addressing the problems with Agile today.
That's correct. One of the commonest ways for agile to fail is to apply it to the wrong kind of project. If you have strong dependencies between tasks and things like lead times for hardware you will want to do project planning that can take those things into account early in the project.
> we're replacing a paper process with a digital process
It's worth noting that what you seem to be talking about is "replacing a process done on paper with the _same_ process done digitally". If you were actually changing the process, per se, (to one that achieves the same "goal", but is more suited to a digital approach) there would be a lot of unknowns, and a more agile approach would help.
That's not to say one is wrong and the other is right; just that they are two different scopes of work. And, for some situations (government paperwork / regulations), changing the actual process may not be viable.
> assigning people to tasks in iteration planning meetings works a lot better than trying to stick to your painstakingly created critical path analysis
IMO it depends; I have been on projects where that approach is much too late and some answers must exist earlier, or where the big picture and some critical paths need to be established up front. It's fine if you can do just software, but what if you need hardware design, or think of production lines? The world is complex.
> A side effect is that work-life balance is amazing once you get this process working. If you know exactly how much work you need to do and when it needs to get done by for the next 4-6 weeks, then you can pace yourself much better.
For you this may sound like heaven, but for me (possibly ADHD?) it sounds a little closer to hell. It reminds me of my school days; short assignments were fine. Large multi-week projects were a bit terrifying.
So in short, you need to account for a multitude of personality types. People with shorter attention spans need shorter deadlines. People with longer attention spans need less pampering. Pick something that works for both.
They're not multi-week projects per se; more like:
1. "This is a big feature we are shipping in 6 weeks"
2. "Here are the 20-40 tasks that need to be completed to implement this feature"
The only difference is that we've fully planned out 6 weeks of work rather than planning 1 or 2 weeks at a time. I would summarize and say that the difference isn't in the execution; the difference is in the planning. My experience is that it's the opposite for those with short attention spans; because everything is planned out, the engineering team can then kind of walk up to the "buffet table" and pick what they want to work on since you have the full spread.
Within that 6 week cycle, we would still build, iterate, and test internally. It's not one giant ticket, but still broken down into parts. At the end of the 6 week cycle, there's a very clear picture of what will be released and there's a formal validation and release period that mostly involves testing and ensuring that the product meets the defined specification.
This sounds a lot like Scrum with longer-than-average sprints. Or maybe it's Kanban? I've lost the patience to pay attention to the intricacies of them all.
The length of the cycle is going to vary based on your product and your customer. There's a lot to be gained by delivering rapidly if you can do it, with feature flags to hide what you want to keep hidden. 6 weeks for a release is a pretty long time to wait for most projects.
Personally, this is my favorite approach, and maybe you'd agree to a point:
1. A team that is focused on one "big feature" at a time. Maintainers operate on a different team.
2. Big feature should have a planned duration that spans multiple iterations, but not too long. 6 weeks sounds reasonable.
3. Test and deliver as rapidly as you can based on your capabilities. If all your testing is automated, consider Continuous Integration at the very least. If you rely on a team of testers then come up with an iteration length that gives testers enough time to regression test a snapshot of the code to prepare a release candidate, while still allowing enough time in-between for testing new changes.
The reason 3 is important is because, whenever possible, you want to avoid the terrifying mountain of risk that comes with releasing code that has not seen the light of day for months on end. Typically the longer you spend preparing a release, the longer you have everyone panicking when the deadline is approaching and this massive release has a massive backlog of bugs.
The book is very approachable and free in digital form.
I strongly recommend that anyone with an interest in this space skim it and decide on their own. Even if you end up disagreeing with it or finding faults, it adds perspective, and perhaps there are a few ideas that can be applied.
On point 3, that really depends on how rigorous the testing process is. One of my favorite essays on the topic of software engineering is "They Write the Right Stuff" [0], which describes how NASA writes their software and why/how they achieve incredibly low defect rates. NOVA has two documentaries, "Pluto and Beyond" and "Chasing Pluto", that put this into perspective. The New Horizons team was formed in 2000, the probe launched in 2006, and 15 years after kickoff it reached Pluto. In the documentary, some of the folks had sent their kids off to college by the time New Horizons reached its primary mission objective; truly a life's work.
The stakes aren't that high in life sciences and most programs, but in processes that involve GxP, there's generally a risk of injury, so it is still higher than in, say, something like Notion or Reddit.
True: even with the most rigorous process, there will be outliers in the real world (for example, we would once in a while run into unprocessable PDFs which had embedded entities). But the software I wrote in this span had an incredibly low defect rate. By the time it had reached production, it went through several cycles of informal testing, formal internal testing and validation, and then customer testing and validation. Also true: this doesn't work for all types of software. For consumer software, I'd release that last stage incrementally instead. Incorporation of automation is absolutely critical; in my quest to speed up this process, I ended up spending a lot of time experimenting with E2E test automation[1] to focus our test and documentation efforts.
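The comment doesn't say which tooling that E2E automation used; purely as an illustration, here is a minimal browser-level test in the style of Playwright's Python API, where the URL, selectors, and expected text are made up. A script like this is the kind of artifact that can double as objective evidence against a step in a validation plan.

    # Illustrative E2E check (hypothetical URL and selectors); runnable with
    # `pip install pytest playwright` followed by `playwright install chromium`.
    from playwright.sync_api import sync_playwright, expect

    def test_sample_submission_appears_in_audit_log():
        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page()
            page.goto("https://staging.example.com/samples/new")
            page.fill("#sample-id", "LOT-0042")
            page.click("text=Submit")
            # Mirrors a traceable requirement: every submission must
            # show up in the audit log.
            expect(page.locator(".audit-log")).to_contain_text("LOT-0042")
            browser.close()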
Wow. This is what I learned almost 25 years ago. Documentation is king. It beats pair programming. It beats knowledge transfer. It is supposed to be a living document. You are supposed to track and note where you took a different path.
All design methodologies have their drawbacks, but I think the most hellish design method is "Agilefall", where you use the trappings of Agile/Scrum/Kanban development (Jira, standups, tickets, etc.) to implement a series of hardcore requirements the clients thought up before anyone started building anything.
I worked at a software company that built extremely sensitive survey applications (iOS, Android, and web-based). A significant part of my job entailed breaking down a literal design specification document into Jira tickets, which were then worked on by a rotating staff of programmers. The rotating programmers were assigned to 10-30 apps at one time and often had to make enormous customizations to the base product. The base product was formed out of 15 separate sub-products whose teams did not correspond with each other. The clients were allowed to make broad changes to the entire system by (a) changing the design document (b) emailing furious demands to the sales team/my manager (c) adding requirements to the master contract and not telling us. There was no traceability established between the contract/design doc/tickets, and I was not allowed to standardize any of those documents. There was no standardized format for writing the Jira tickets, so every person on my team made up their own method. The design document was missing about 40% of the context necessary to deliver a working app, so I added that context into the Jira tickets, but the clients weren't allowed to review the Jira tickets. I spent 20% of my time in "Scrum" meetings including sprint planning, sprint reflection, and hour-long standups. My company was worth hundreds of millions of dollars. I made a fat chunk of change in a year and a half, put a down payment on a house, and quit.
What's the point of standups, anyway? It seems like it always addresses the wrong people.
I don't give a rat's ass about what other developers have done. Or, more accurately, I do care, but their work is highly visible. I don't need them to retell the same story. I already know what they are doing.
What is not visible is what the leaders at the company do all day. Having a better understanding of their work would be far more impactful in allowing me to make better decisions around my work. Yet, if they show up to standups at all, they don't say much.
In my opinion, the "standup" is a cargo-cult imitation of what effective teams do informally. Effective, small teams all know what the other team members are working on because they tell each other throughout the day and at the bar after work.
Once you add a formal "standup" and make rules about how it's supposed to proceed, you inevitably introduce a point of inefficiency. The people who already talk to each other are wasting their time listening to stuff they don't care about or already know about. The people who don't already talk to each other are having a "character flaw" compensated for by bureaucracy.
It's similar to what happens in school. The smart kids are slowed down, and the stupid kids are improperly carried forward by the system without being allowed to fail naturally.
Well, it does mean what I think it means because language is descriptive, not prescriptive :)
I will never forget sitting in a 4-8 hour meeting for a day reviewing our entire dependency tree, and whether or not SJ-BCOMKA-4.11 had a dependency on SJ-TARGET-4.32. Management moved our "productive percentage time" from 80% to 85% to make the schedule look good to upper management. The scars of "waterfall" run very very deep.
Second, it's clear you need to translate anything written 35+ years ago into modern software development terms. For example, "operations" as a stage of software development, whereas today we have devops vs. SRE vs. production engineering vs. whatever.
Things like cleanroom development, zero-defect software, etc. are completely underutilized by software shops today, because we price the cost of maintenance and bugs far too low. That was also unfortunately missing from the waterfall paper, compared to the analyses of Capers Jones and others, which showed that it's not just that you should have steps preceding "coding" - those preceding steps (aka "analysis") remove defects from your final product far, far faster.
I recall once developing a new module for an existing system in a more waterfall-like approach than one would expect nowadays and it actually worked out nicely.
There were about 50 pages of specs for it: descriptions, wireframes for the UI, how everything should fit together, and what the typical workflows with it would be. All of this was available ahead of time, before writing code even started, which actually gave me a better idea of the expectations and needs than focusing on immediate needs in a more iterative approach would have. I could also discuss any uncertain items and get some other questions answered, to avoid confusion later down the line.
It helped me write the code better and anticipate how to structure things, so less refactoring needed to be done. I could also focus on the "happy path" first and foremost, getting the critical functionality in place and then adding the nice to haves around it. Admittedly, there were problems with the specs, which I brought up and promptly solved when it became relevant, so the final solution differed slightly, but by then the specs were also updated.
With the critical stuff done early, it was also possible to release the thing early on and then switch to a more iterative method of development, with improvements and all. Not quite the traditional waterfall approach, not exactly scrum either, but something that worked nicely regardless. Complete specs (even an overview of the big picture) can definitely be a nice to have in some cases, if doing that additional planning at the start isn't too much of a burden. Take whatever works from all of the different methodologies out there.
This seems to skip over the main issue I've had with waterfall - management picking dates before detailed analysis. So you get a nasty situation where the detailed analysis eats into your available time for the project.
This is mostly fine for people like UX, product or up-front architect types who don't have to deliver the software or deal with crunch time at end, but everything depends on what they spec out.
So, sorry, not sorry, I greatly prefer agile/incremental work to that.
Nothing wrong with management picking dates, as long as they understand that they're movable.
Find out that there's more work during detailed analysis? Move the dates or change the scope. Find something missed in analysis during implementation? Ditto. Find it during debugging? Ditto.
The problem isn't picking the date. It's the response to the date being wrong. (The date is usually wrong, unless you're in a domain that your team knows very well.)
And why do they respond badly to the date being wrong? Because they think the date can't be wrong. They expect it to be right. Well, if experience has shown us anything, it's that the date is wrong.
And this is what's wrong with waterfall. They take the initial planning as cast in concrete, as brought down by Moses from Mount Sinai on tablets of stone. It's not. The initial plan is written on sand. Acting like that is true is the difference between waterfall and agile.
> Nothing wrong with management picking dates, as long as they understand that they're movable.
Or the scope is flexible. This is basically why I prefer "agile" or incremental work. I'll do my best to work on the highest priority things given to me, one step at a time, and I'll leave no loose ends every week/month/etc that would prevent shipping software.
If we get to that management-picked date, the choice is theirs to determine if the product is good enough to ship.
This is the only way I've seen "scope negotiation" actually work. Otherwise you waste tons of time in meetings (while the clock is ticking) negotiating future hypotheticals, or product pushing you to cut corners writing tests or whatever.
Arbitrary deadlines also occur in agile Scrum, in my experience. Management says stop working on project X and start working on project Y. Project X's backlog is effectively canceled and the product goes into maintenance mode.
> This seems to skip over the main issue I've had with waterfall - management picking dates before detailed analysis.
Yup. In Agile, though, the opposite happens: we devs are encouraged or coerced to agree to work tickets whose scope is not baked enough to start serious coding, even though that baking should happen during planning or grooming. And if you raise a question, you often get a condescending "what part is not clear?" tone, which discourages discussion and fleshing out the ticket's scope.
"Waterfall" has become a straw man, to the extent that an article merely arguing that X is better than waterfall is wasting time that could be better spent on saying what X gets right.
Similarly, arguing over what "waterfall" really means is just so much flogging of a dead horse.
It was always intended to be. Royce said that it was risky and invited failure, and that organizations should include things like iterative processes to hedge against those risks. Basically, it was created to be used as an example of what not to do.
Another overarching observation here: how did we decide that there was going to be one set of process rules that would make us an effective organization? If we just... use a 5-day development cycle, hold a team meeting on Thursdays, estimate tasks using a 5-level system, and readjust on Tuesday, then all our problems are going to go away.
Some process is necessary, but it's not the heart of the matter.
I was being cheeky, but true waterfall has iteration and self-reflection built in, yet these were rarely acknowledged and practised. And it honestly takes a couple of redos to get it all figured out and really understand the whole problem well enough to make radical leaps. I've only seen projects go straight to the ultimate solution with exceptional people who have a very similar project already under their belts.
Does it bother anyone else that we use the word "sprint" in agile?
FWIW, that's not a universal thing. A lot of people use "iteration" in preference to "sprint" - but sadly, not enough. I think the term "sprint" should be abolished personally.
Does it make sense to always be "sprinting"?
Absolutely not. And it may be "just a metaphor" but I think the metaphors we use matter. And you absolutely can't be "always sprinting". It's an absurd notion.
It's meant to be a metaphor only for distance, not for speed.
In running/swimming, the short events are called sprints. In contrast with long-distance events.
Does anybody have a better metaphor, a noun word that suggests a short distance, in contrast against a long distance? (E.g. "block" or "jump" or "hop" don't really work because none of them inherently communicate the idea of shorter -- a hop is longer than a step, for instance.)
I do customization projects (the whole waterfall from requirements, specification and dev to docs, packaging, delivery and bugfixing) on a business application (that is developed in a more agile team).
The customer wants a specific custom feature and once they have it it should be done and over with the project. They want to know the cost and they want to know the delivery date (the actual person I am talking with is a sales rep who is talking to the real customer, so they usually want to know those things in reverse order, but that’s manageable).
The well-known problem: The customer doesn’t know what they really want until they see a first iteration in practice. The solution: train the sales rep (by courses and experience) to join you in becoming an awesome author team for req and spec documents. Keep the scope small enough to be able to oversee all cross-effects of the new features. Keep the discussion alive with the customer to talk about these effects.
Will this always work? No, I had absolutely insane projects out of everyone’s control. But it will work most of the time, and when it did not, the cake was just too big to swallow in one piece. Bottom line: whether waterfall fits or not depends, in my experience, on the scale of the project.
This is the same process I learned after too many years of unrealistic expectations and of turning relationships tense by "change-requesting" clients to death.
Estimate the overall project effort the best that you can, then add 20-30% for risk/scope unknowns. Then pull 30% of the total level of effort into a small contract that covers planning, specs, UX/prototyping, and perhaps a limited-scope technical PoC.
Ms. Pahlka crawls through some large-system horror stories, and some success stories. The horror stories share something in common: design documentation that outlives its usefulness, takes on a life of its own, and drives implementation decisions in harmful directions.
Brandow's article (the subject of this thread) emphasizes the usefulness of documentation. And it's certain that long-lived software systems (like his php example) need excellent documentation. That documentation is targeted at people like many of us. It answers questions like, "how do I use that blankety-blank IMAP module?".
But long-lived documentation is often misunderstood and misused. For an overstated example, some "non-technical" administrator might read the php docs and somehow infer that "systems developed in php MUST use the IMAP module, or it won't pass acceptance testing." Then, the developers -- employees of some contract development team -- will comply. They'll have to figure out some way to incorporate email store-and-forward systems into the product, even if it makes no sense to do that.
There's a danger inherent in extensive documentation: it WILL be misunderstood in the future, and those misunderstandings may turn out to be costly. More systematic documentation -- for example, boilerplate "scope" sections at the beginning of docs -- is not a great way to mitigate that danger.
So funny thing about software prototypes. Time and again a prototype is pushed into production by greedy management seeking immediate returns - and then the actual system is never done.
If the MVP stays in production without further investment, then it's the product, there's nothing minimal about it. That's a decision.
If a prototype or PoC is in production though, something's gone wrong. I build my prototypes and PoCs in such a way that they literally can't be in production, which is I think the correct way to build a PoC anyway.
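The comment doesn't spell out how, but one hypothetical way to make a PoC literally unable to run in production is to have it refuse to start outside an explicitly marked dev environment and keep all its state in memory. A rough sketch (the environment-variable name and the storage are made up):

    # Hypothetical proof-of-concept guard: refuse to start anywhere except
    # an explicitly marked dev environment, and keep state in memory so
    # there is nothing durable worth promoting to production.
    import os
    import sys

    if os.environ.get("APP_ENV", "") != "dev":
        sys.exit("This is a prototype; it only runs with APP_ENV=dev.")

    # Throwaway in-memory "database": all data vanishes on restart.
    FAKE_DB: dict[str, str] = {}

    def save(key: str, value: str) -> None:
        FAKE_DB[key] = value

    if __name__ == "__main__":
        save("demo", "value")
        print(FAKE_DB)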
Not really, but it does bear repeating, as many in sales, purchasing, project management, management in general, politics, and possibly more do not seem to grasp that iterations are an absolute requirement in a software development process.
If we were able to design and deliver the exact requirement and perfectly estimate the time required, then that time would be zero, as the software would already have been written.
The biggest problem is that we don't have adequate tools for writing requirements. Even wireframing tools convert the process into graphic design problems.
https://uidrafter.com is a free tool that domain-experts can use for prototyping without graphic design concerns.
Historical note: that line of thinking also got us UML, which is now largely used as a guide to which shapes to draw on a whiteboard.
Not saying your idea doesn’t have merit, just that I don’t want to end up with a dusty Universal Requirements Language book sitting on my shelf. Anyone who wants such a thing, learn how UML was written and then don’t do it that way.
A waterfall process has its current common perceptions. Knowing that, I would be skeptical of a project that declared that it was following a waterfall development process without clarifications of deviations/adjustments.
Nah, "waterfall" means exactly what people think it means.
For one reason, because that's how language works. But in a more impactful way, because what people think about it is exactly what was widely practiced at the time the paper was published.
Corporations did hire some requirements-gathering consultant. That consultant would produce a lot of documents, get them signed off, and move on to another client. Then the corporation would hold an auction and pick a "supplier" based on the requirements docs, make a contract, and sign off that the docs completely described what they expected and exactly what should be delivered. Then an administrator would test the received software against the contract, and shove it in a shelf where the software wouldn't get in anybody's way, because nobody would want that thing.
This is exactly how large corporations developed software.
Of course, the researcher that coined the name was so out of touch with the industry that he thought he was creating a strawman. That makes his description not very useful, even though it doesn't detract anything from the paper's quality.
And to be realistic, owing to their size, don't some things also need to go that way?
And it is also wrong, because even those corps don't throw the product away; they usually start iterating from there. And unfortunately, on the other hand, "modern" Scrum becomes just smaller waterfalls. It all seems to be just a spectrum of sprint length and the size of your feedback loops?
But I'd encourage everybody to not just bitch about it but to try living that situation; a big tanker steers very differently from a speedboat. Unlimited Scrum trainers and agile mindsets won't help with that. In this sense I'm not sure what either waterfall or modern agile in its many variations means; it is certainly always easy to laugh from the outside.
Personally, I haven't seen this for a while. Everybody I know that uses contractors has migrated to "let's measure the software after the fact, iterate the design, and auction on the unit of measurement," which, of course, also fails all the time, but works much better. It's basically the "Scrum" people tend to use in-house, with some added misalignment of incentives to spice things up.
But I don't doubt a lot of places are still auctioning requirements. I just haven't seen them lately.
I agree large corporations still RFP and contract out this way. The vendors are now “agile” but deadlines and hard requirements are still baked into modern contracts. I can’t tell the difference between now and the bad old days.