In my view, iterative development is far harder, maybe even impossible, at companies that don't have the culture for it. Startups have it, and most grow up with it, but you'll struggle to succeed with this approach at any big company. They may introduce Agile and bring in Agile coaches, but they're just putting lipstick on the pig. That culture is set; I say this having seen it first hand.
As an example, and a little self-promotion that I hope I won't get downvoted for :) I have been working on a cross-platform app that reads any article to you. It uses AI/ML models to convert the text to audio so you can listen on the go and make the most of dead time on commutes.
This is a fairly complex thing to build, especially in a few months, and to make it work cross-platform. We now have a lot of features, but this all happened feature by feature: get one thing done, get it out, start on the next thing. If I had tried this approach on a big team at a big company, there is absolutely no way we would have this much done in a few months. The politics, the nonsense, etc.
If you want to check out the app you can try it here:
 "The One..."
The process of moving a medium-to-large business towards a more 'Agile' mindset, for example, can be dreadfully slow, with plenty of roadblocks along the way, and that can cause some executives to want to go back to their previous ways.
Realistically speaking, some companies are better off starting by implementing some of the areas listed above and then, in steps, adding the other areas that are relevant to the success of the company. Not all areas of Design Thinking, for example, will be applicable to every segment of the company; understanding which ones to add, when, and the costs/benefits of not having the others is really what drives success.
One suggestion: pause a bit more after the end of paragraphs, particularly after long paragraphs. It can help comprehension.
At some point down the road, it'd be nice to have the ability to adjust playback speed.
I don't think that's true at all. You absolutely can and should build an MVP of all these things. You certainly could release an MVP search engine (or even a browser-based game). You probably can't hope to gain traction at large with an MVP browser or operating system, but you can start using these things internally and dogfooding them.
IMO, iterative development isn't necessarily about releasing something fast to the public; it's about rapidly building a working version that you can interact with, test properly as a whole system, and show to people, so you can quickly figure out the most obvious issues to address and improvements to make. You can test this software internally, show it to your friends, run a small private beta, or even hire testers, depending on your project and budget.
This is in contrast to a more traditional/naive development model where the team isn't even trying to have a working/testable version, and instead focuses on spending weeks/months building components separately, which they hope will come together into some grand vision at the end. This approach is terrible because things likely won't come together seamlessly. The different components can fit together poorly, effort is wasted on features that weren't actually good ideas in hindsight, etc. Software projects developed this way are often late, overly complex and poorly designed.
What I think the author should have used is a car. First, we design and test the suspension, then the wheels, then the brakes, etc until we have a working car from the ground up. Then we sell a lot of them.
The OS / game engine / car example is frightening because you have no intermediate value until the whole thing comes together. Hiccups propagate.
However, an iterative approach would create a minimum-feature version first.
A car with no suspension, 25 HP, and one seat. An OS with virtually no drivers, targeting a single architecture, with a bad scheduler, a single filesystem, and a single user.
These are, in aerospace terms, testbeds: you validate your long-term vision while producing real things that can be sold, hyped, or used to look for improvements and bad assumptions before the final product is produced.
I completely agree with your statement that markets become saturated with Minimum Viable Products... but that is what forces iteration. I agree with the author (and this isn't new knowledge) that iteratively building a better thing by expanding a pre-existing thing helps you capture and scale.
My opinion is this probably gets hard once that thing gets big and starts to feel clunky, and that's where the big companies do a "re-engineering" with a waterfall-like model. They can afford to take the time to design soup-to-nuts because they are still selling the iterative MVP mutant that got them to dominance. Different strokes for different folks.
"MVP" tends to wind up meaning vastly different things to different people
Who says you can't leverage what's out there to build any of these complex things? An MVP of a new browser might start out as a Chrome extension, or a fork of Chromium.
An MVP of a new search engine can leverage Google and bolt some custom thing onto the end of it.
Tesla for example built the Roadster starting with leveraging a https://en.wikipedia.org/wiki/Lotus_Elise frame
‘Minimum Viable Product’ is not the same as a product that is ready to capture real market share, just a validation checkpoint (ideally one of many).
I worked on a safety critical system (among many others, but this one in particular) that was developed in an iterative fashion. The "MVP" was not sufficient for fielding the aircraft, it was sufficient for testing the aircraft. Each successive iteration was meant to be closer to what was needed for later testing stages, and finally a release that was good enough (not feature-complete, but complete enough) to allow for carrying passengers.
Most of the later additions were "nice-to-haves". They provided better reporting and recording of issues, better BITs (built-in-tests) which reduced maintenance effort, etc. Nothing essential for safety, but useful for improving overall quality-of-life issues for the operators and maintainers.
Each stage also ensured that we were building what was actually needed (reporting the right kind of things, reporting it in the correct way, etc.) before we spent millions of dollars and years of effort on the wrong thing.
And getting stakeholders, or users, in the loop and validating at each iteration is still worthwhile to make sure you actually build the right thing.
Has anyone seen it in real life - in the pure form?
I once worked at a place that "did waterfall", but a diagram of the process would have shown arrows in all directions (would we call these things salmon runs or something?) and if you needed to go backwards only that specific piece did so while everything else continued as normal. Unit testing, integration testing, system testing, etc was all present on day 1.
"Figure 7. Step 3: Attempt to do the job twice - the first result provides an early simulation of the final product."
1970 "Managing the Development of Large Software Systems"
"Figure 3 portrays the iterative relationship between successive development phases for this scheme. The ordering of steps is based on the following concept: that as each step progresses and the design is further detailed, there is an iteration with the preceding and succeeding steps but rarely with the more remote steps in the sequence. The virtue of all of this is that as the design proceeds the change process is scoped down to manageable limits. At any point in the design process after the requirements analysis is completed there exists a firm and close-up, moving baseline to which to return in the event of unforeseen design difficulties. What we have is an effective fallback position that tends to maximize the extent of early work that is salvageable."
The same way you haven't seen waterfall in its pure form, you will likely never see Agile, Design Thinking, etc. in their pure forms. That is because we are looking at environments that are prone to change and controlled by humans with varying degrees of knowledge.
What you saw with the diagram showing arrows in all directions is simple: it went against your perceived view of the standard, and that put you off. Or the person just didn't know what they were doing (the more likely scenario).
The ideal case presumes that everything is known when the sequential development is planned, which is what permits it to be optimized.
But in my n=1 experience, everything is not known, the development team is constantly discovering new information, requirements change, and the ideal plan never works out as originally envisioned, whether we tried to develop sequentially or not.
But the “ideal” plan is often like highly optimized code: Expensive to change. So it has its own cost that cannot be escaped. So I am left extremely cynical about the “ideal sequential development” approach.
It always seems at the outset to be cheaper and faster, but ends up being expensive and inflexible.
Now if you want something REALLY expensive, I give you “Scrumfall,” a plan where you combine the costs of iterative development with the inflexibility and attendant costs of fixed date, fixed scope, and attempt to optimize sequential development.
But hey, you get to wear an “agile” badge, even though “inspect and adapt” has no place in the process.
For a lot of things it doesn't matter, but for some it can be very difficult to change later on, especially if you need to maintain backwards compatibility.
>> So I am left extremely cynical about the “ideal sequential development”
Of course, in development/engineering work that has unknowns. But taking TFA's example of building houses, most if not all developers would do the foundations of all the houses first before moving on, i.e. working sequentially.
2 important reasons are:
1) the project is around 6 years in total
2) the company has to fix any issues with the house for the first 2 years.
If they did all the foundations first, they’d have exactly the problem described in the article - no money till right at the end. Instead, they get money the entire way along and use that to finance the rest of the project. This isn’t an optimisation, iterative delivery is essential to their viability as a business.
They also eat the support costs on houses they build. That means they are repeatedly changing "features" in the houses to minimise those support costs. Just walking down one street you can see the evolution of the houses over time as they fixed various small issues with the original design. The houses look basically the same to a casual observer, but the differences are quite stark to someone who lives in one of the versions. They are very obviously agile, and have a great system for inspecting and adapting. That doesn't happen on 2-week cycles, but it does happen at a pace appropriate for them.
In my environment, we “inspect and adapt” every two weeks, and we tend to do marketing launches a couple of times a year.
But SpaceX was not launching rockets into space every two weeks, so their “iterations” from the outside look more like most companies’ product launches, not like sprints or whatever people call their iterations that happen on a fortnightly or monthly scale.
That's not true. In the "ideal", they both take the same time. Why? Because in the ideal they both include exactly the same amount of work (assuming your testing story isn't 5 months of manual test execution). If you have a sane system testing setup, they're exactly the same.
But the fact is, you never have the ideal. And that's why you use iterative methods. You will almost always design the wrong thing (at least in part). You will almost always have a section of code that's harder to implement than originally anticipated (or whose initial implementation impedes other work, requiring a partial rewrite).
Iterative methods discover these issues earlier. In a sequential (Waterfall) development model, you only discover these problems in the late stages, far after you've initially designed and developed. This requires you to go back and redo a lot of stuff, or you put in patches and compromise on quality, or you accept that it's a failure but ship it anyways.
Now, if your test story isn't sane and it's all manual testing, then sure, iterative methods will by necessity be slower because you take months to test each iteration. So fix the test story so that it's as automatic and fast as possible (without compromising test quality). At that point, sequential is only best if you have a perfect team that never fucks up and a perfect customer who never asks for the wrong thing. Since we all know that those things never happen (in large-scale projects), don't make that assumption (you do know what assuming does, right?) and stick with iterative methods.
A thing that's often forgotten in these discussions is that Royce's paper was about large-scale engineering tasks, not small stuff. I will concede that a Waterfall-esque approach can work fine for a short-run (less than one quarter), small-scale project (especially if it's in a well-understood domain). But past that 3-6 month mark, or in any poorly understood domain, sequential is the worst thing you could do to your team and your customer.
My other pet annoyance is that management still seems to think that adding more resources to a project makes it faster. Especially "cheaper" resources working in a different country.
I'd say that the biggest failures I've seen have been due to having a specification, and the bigger the project the more important it is not to have one.
As the quote goes: A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a working simple system.
You can, and should, have specifications for the next ~two weeks of changes that you want to do. These should be verifiable at a user-facing level. But after that you iterate. Trying to specify further in advance or at more detail than what's user-visible is a recipe for disaster.
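To make "verifiable at a user-facing level" concrete, here's a minimal sketch in Python. Everything in it (the `Cart` class and its methods) is hypothetical, just a stand-in so the two-week spec can run as executable checks on behavior the user actually sees, rather than on internal design detail:

```python
# Hypothetical two-week spec for a shopping-cart feature, written as
# assertions on user-visible behavior, not implementation internals.

class Cart:
    """Minimal stand-in implementation so the spec below can execute."""
    def __init__(self):
        self._items = {}

    def add(self, sku, qty=1):
        self._items[sku] = self._items.get(sku, 0) + qty

    def remove(self, sku):
        self._items.pop(sku, None)

    def count(self):
        return sum(self._items.values())

# Spec: "a user can add an item, see the count change, and remove it again."
cart = Cart()
cart.add("SKU-1")
assert cart.count() == 1   # adding is visible to the user
cart.add("SKU-1", qty=2)
assert cart.count() == 3   # quantities accumulate
cart.remove("SKU-1")
assert cart.count() == 0   # removal is visible too
print("spec passed")
```

The point is that each assertion describes something a user could observe, so the spec stays honest about scope: it covers the next iteration's visible behavior and nothing beyond it.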
And yet we flew to the moon. And have airplanes, and and and.
A lot of agile is just an excuse for software "engineering" being in the throw shit at the wall and see what sticks stage of evolution. Construction went through the same, software has freemasons now. But no engineering whatsoever. Example: People still write their own date conversions (and fuck it up).
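On the date-conversion point: the engineering-minded fix is to lean on a standard library instead of hand-rolling the math. A minimal sketch in Python (assuming Python 3.9+ for the stdlib `zoneinfo` module):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib IANA tz database access, Python 3.9+

# Parse an ISO-8601 timestamp instead of slicing the string by hand.
ts = datetime.fromisoformat("2021-03-14T01:59:26+00:00")

# Convert between zones with a tz database lookup, not a fixed offset --
# the classic hand-rolled bug is applying one offset year-round and
# ignoring DST transitions entirely.
local = ts.astimezone(ZoneInfo("America/New_York"))
print(local.isoformat())  # the offset comes from the tz rules, not from us
```

Nothing here is clever, which is the point: the tz rules live in a maintained database, not in someone's if/else chain.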
Those large scale projects were not done in a Waterfall fashion, they were done iteratively with models/simulations/prototypes produced along the way.
models/simulations/prototypes != iteratively
We do, but not by designing from scratch. We achieved those things by incrementally extending existing designs and verifying that the newer, more complex designs had the desired properties.
> A lot of agile is just an excuse for software "engineering" being in the throw shit at the wall and see what sticks stage of evolution. Construction went through the same, software has freemasons now. But no engineering whatsoever.
The constraints that apply to software are very different from those that apply to construction. What if "throw shit at the wall and see what sticks" really is a better way of making software than "engineering"?
> Example: People still write their own date conversions (and fuck it up).
Indeed they do. Do you find this is more common in projects that specify and design up front or in projects that don't?
What if it isn't.
We completely agree here - are you saying you've never encountered the "we're agile, no need for specs" mindset?
I've found the "we're "agile" but going to have a huge up-front spec" mindset far far more damaging.
With certain projects, the overriding risk is that you're building something that nobody wants. In this case, Agile makes a lot of sense, and specifications are a waste of time until you've validated customer demand.
For other projects, you're very confident that demand exists for the product you're building, and specifications start to make a lot of sense.
Big caveat though - you can get this assessment very very wrong. "Demand" can be a very open-ended concept (e.g. in the entertainment industry), but that doesn't necessarily translate to user satisfaction. No Man's Sky, for example, probably should have been put in front of players a lot earlier. That means building to a looser spec in shorter cycles with more iteration (more "Agile", for want of a better term).
This is why product managers are so important - they need to make this judgment. It's a difficult balancing act to juggle "minimum features and user testing", "grand product vision" and "efficient and manageable development cycles".
Backlog items in most processes called agile are specs, and most variations include an activity of assuring that they meet whatever quality standards are in place prior to being eligible for development.
It's more common for places claiming to be agile to not have any integrated spec for what has been built than to not have specs of each to-be-built unit, AFAICT.
I've had some horrific experiences where multiple developers worked on the same features (without specs) while stepping on each others toes.
Technical talent is a the-odds-are-good-but-the-goods-are-odd kind of deal - at least outside Silly Valley in the developing world. You can hire a huge sum-total-IQ but you can't hire people who can work independently and collaborate.
I mean, I guess it's more agile than one big waterfall, but it's still not embracing the principles of Agile (like involving developers in product decisions, or letting developers assign deadlines). It's still locked in the worldview of "serious people spec the product, then the nerds push the nerdbuttons to build it".
"You should use iterative development only on projects that you want to succeed."
-- Martin Fowler, UML Distilled
I'd argue that it's often far from helpful. Some of the best tech bosses I worked under would've shoved a person out of the window simply for uttering the word "iteration."
One described that approach as "trying to construct an airplane by making changes until it stops crashing on takeoff".
It's good to have a distinction between the software being iterated and your business activities; a lot of people mistake the two.
First let's build something that flies.
Now let's build something that can fly for more than a minute.
Now let's build something that flies for more than a minute and can also be steered.
It's pretty much the same nowadays: engineers keep building new generations of airplanes which are heavily tested snapshots of current "trunk" branch.
> It's pretty much the same nowadays: engineers keep building new generations of airplanes which are heavily tested snapshots of current "trunk" branch.
Many mature industries, like commercial aircraft manufacturing, are "convergent," meaning their goal is providing the customer what they're expecting. Success is measured by "meeting the spec", beating the competition using some combination of price, reliability, time-to-market, and leveraging established status and reputation in the market. I think your SW development analogy of version control branches applies to convergent industries.
There are other industries which are "emergent." These types of industries attempt to create markets with entirely new and unexpected products. They don't face competition, instead their success hinges on developing the market for the product.
The rules are entirely different for each of these two categories.
You can't have a great product launch without understanding the customer, and that includes acquiring them through sales & marketing. Iterative development is almost a given at this point.
Also, there are budgeting committees that tend not to think of themselves as agile, or take kindly to being told what to do or how to operate, and they put constraints on the whole thing.
Anyway, there were lots and lots of iterative processes before Agile was even being imagined. I'm surprised that there are software engineering books that don't realise this...
 - http://wiki.c2.com/?HistoryOfIterative
 - https://en.wikipedia.org/wiki/Spiral_model
 - https://en.wikipedia.org/wiki/Rapid_application_development
I remembered. The author is Mark Lorenz. Book is here: https://books.google.co.jp/books/about/Object_oriented_Softw...