> In one of their studies, Sackman, Erikson, and Grant were measuring performances of a group of experienced programmers. Within just this
group the ratios between best and worst performances averaged
about 10:1 on productivity measurements and an amazing 5:1 on
program speed and space measurements! In short the $20,000/year
programmer may well be 10 times as productive as the $10,000/year one.
Brooks did mention some other really useful concepts that are still valid today: 'No Silver Bullet' and the second-system effect. These should have been mentioned as well.
Many classics like this are so good because the author is among the first to observe new phenomena. They can write down what they have learned without cultural narratives that distort reality.
The hardest lesson in the book is chapter 11, "Plan to Throw One Away". I have never seen this fail to hold in large systems. You must design the system to be rewritten. Accepting this is still a taboo.
> In most projects, the first system built is barely usable. It may
be too slow, too big, awkward to use, or all three. There is no
alternative but to start again, smarting but smarter, and build a
redesigned version in which these problems are solved. The discard and redesign may be done in one lump, or it may be done
piece-by-piece. But all large-system experience shows that it will
be done. Where a new system concept or new technology is used,
one has to build a system to throw away, for even the best planning is not so omniscient as to get it right the first time.
>The management question, therefore, is not whether to build a pilot system and throw it away. You will do that. The only question is whether to plan in advance to build a throwaway, or to promise to deliver the throwaway to customers. Seen this way, the answer is much clearer. Delivering that throwaway to customers buys time, but it does so only at the cost of agony for the user, distraction for the builders while they do the redesign, and a bad reputation for the product that the best redesign will find hard to live down.
>Hence plan to throw one away; you will, anyhow.
It's not that clear! Perhaps you will only learn the lessons of why the throwaway version sucked by delivering it to customers. And it might buy you a lot of time, perhaps multiples of the initial time to get the pilot working.
Brooks has a point, but it may have been more true in the days of shrink-wrap software than SaaS and continuous updates.
I'd argue that it's more important to build an institution capable of retaining knowledge. The worst results are when the pilot system sucks and a whole new team is brought in to build the second version. They'll inevitably start building their pilot version, which sucks, and repeat until management gets sick of funding new teams for the same project (I feel this happens disproportionately in banking for some reason). Instead, you need to keep substantially the same team around to fix their mistakes in the second version.
On the contrary, don't practices like microservices and all the constant prototyping at large companies embody this? I feel in some ways they may have embraced the principle too much, to the point where prod doesn't have a properly stable solution to support while they are rewriting an experimental "improvement".
You may well end up throwing away some or many of your early microservices as you grow and the company better understands what it wants to build.
Or, you can take another approach (which I have witnessed first hand): throw it all away and go back to a regular, well-designed monolith, and get very significant performance improvements by doing things like SQL joins and transactions instead of putting events in a message queue and paying network traffic penalties.
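A minimal sketch of the monolith side of that trade-off, using an in-memory SQLite database (schema, table, and column names are invented for illustration): one join inside one local transaction answers a cross-entity question directly, where a service split would publish events to a queue and reconcile the data later.

```python
import sqlite3

# In-memory stand-in for the monolith's single database (schema invented).
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
INSERT INTO users  VALUES (1, 'ada');
INSERT INTO orders VALUES (10, 1, 42.0);
""")

# One transaction, one join -- no events, no queue, no network hop.
with db:  # the connection context manager commits (or rolls back) atomically
    rows = db.execute("""
        SELECT u.name, SUM(o.total)
        FROM users u JOIN orders o ON o.user_id = u.id
        GROUP BY u.id
    """).fetchall()

print(rows)  # [('ada', 42.0)]
```

The point isn't that queues are always wrong; it's that the consistency and latency you get for free inside one database have to be rebuilt by hand once the data is split across services.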
I also think that if we summarized all the great books ever written and boiled them down to one phrase, it would be "read more books".
In the case of this article, there are little details that suggest it was written by a person, like the idiosyncratic choice to abbreviate the book as T-MMM. But still, it's started happening that an article can feel "GPT-ish" to me.
The disregard for the fact that some engineers are more productive than others originates in companies' processes and planning. Projects are usually estimated without considering who will be working on them, and individuals are flattened into interchangeable person-weeks. I have experienced this myself and read texts describing the issue in the same terms.
It doesn't really help that Go was designed in such a company, but saying that it was designed to mitigate this disparity is saying that the best predictor of an engineer's productivity is the number of LOC cranked out. I don't think that is the case, either in principle or at Google in particular.
Much better predictors of productivity are effective communication and conceptual integrity of the design, as the linked article points out. A brilliant language doesn't really help if, six months in, you've realized you're building the wrong thing, or building it the wrong way.
IMO, someone making code that abuses special features to the point it is difficult for other members (including future members) of the team to read is the definition of a negative-X programmer. Unless they're working solo, of course.
Also, like wavesbelow mentioned in his great comment, the "10x" doesn't come from coding prowess alone: it starts long before that, with the process and planning.
Go was designed to maximise the number of 'any developers' that could join a project, i.e. 1x developers.
Yeah maybe, but I'm certainly not at that level.
If a 10x programmer exposes foo() and bar() methods to me, I'll call foo().bar() and not realise I've made a mistake, or that I've violated some invariant that's only in his head, not in the low-level code itself.
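A toy illustration of that hazard (all names hypothetical): nothing in the code stops a caller from chaining two methods in an order that silently violates an invariant living only in the author's head.

```python
class Buffer:
    """Hypothetical API: the author 'knows' flush() must come before
    detach(), but that invariant exists nowhere in the code itself."""

    def __init__(self):
        self.data = ["x"]
        self.attached = True

    def detach(self):
        # Returns self, so chaining looks perfectly natural.
        self.attached = False
        return self

    def flush(self):
        # Silently drops the data when detached; the caller never
        # learns they broke the undocumented invariant.
        if not self.attached:
            return 0
        n = len(self.data)
        self.data.clear()
        return n

# Compiles, runs, raises nothing -- and quietly loses a write.
lost = Buffer().detach().flush()
print(lost)  # 0
```

Asserting the invariant (or making `detach()` consume the object) would turn the silent data loss into a loud error, which is the difference between an invariant in someone's head and one in the code.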
I can see shaving one to three people off of Brooks's ideal team (we don't need a secretary to type anymore, and PMs that organize projects at the behest of a technical leader are effective).
What I have seen over and over is, whoever writes the most code tends to get the most power in an org. Whoever is delivering features quickly gets power. This kind of works, but these coders frequently leave poorly thought out architecture that is hard to extend.
Arthur Bloch's Murphy's Law and Other Reasons Why Things go ƃuoɹʍ is a compendium of engineering and organisational dictums. Of itself it provides relatively little insight, but many of the principles themselves trace to more substantive works, including Parkinson's Laws, the Peter Principle, and many, many others:
Bloch himself was working off a number of earlier compilations, several already popular in the high-tech industry (programming, weapons design, biotech, etc.), as well as other fields. I'd tracked those down at one point but seem to have lost those references. Bloch cites some, though not all, his sources in this and subsequent "Murphy's Law" books.
"Gamesmanship" and "Systemantics" cover similar ground:
See also Richard I. Cook's "How Complex Systems Fail":
Charles Perrow explores organisational foundations of failure in Normal Accidents and The Next Catastrophe. To an extent, Joseph Tainter's and Jared Diamond's books (particularly each author's independent Collapse titles) look into the dynamic at much greater depth.
For programming, MMM (Brooks), Peopleware (DeMarco & Lister), The Psychology of Computer Programming (Weinberg), Code Complete (McConnell), and a substantial literature on quality assessment and practices emerged in the 1970s--1990s. The bibliographies of the above books, as well as citations of them, should provide an ample set of references for further reading.
Human organisations, whether governmental, commercial, educational, religious, military, charitable, social, or of any other principal focus, tend to exhibit strongly similar patterns.
There is of course also domain-specific knowledge, but even much of that almost always proves more general on closer examination, with much of the distinction being of labeling and language rather than behaviour and phenomena.
I would argue that it should be the bible for every manager who manages engineers. They should study it thoroughly, along with all the literature it touches on and mentions.
The best book I have read on software development by far.
Reminds me of Adam Smith's writings about the nascent factory economy, and perhaps ancient philosophers as well.
- the field has been growing so much that at any point in time the majority of devs will be relatively new. A field that doubles every three years will never have more than 50% of people with more than 3 years of experience, obviously.
- I have no numbers to back this up, but intuitively it feels like more devs "drop out" of software development than in other professions. Many become managers, others take up various non-development projects or retire altogether. This makes it so that even less senior people are available.
It doesn't help that our industry has been incredibly fast-moving compared to most, and our interview process is geared toward either being a fresh grad who's taken an algorithms class recently or knowing all the latest frameworks. Not a lot of places interview on the sorts of experience you gain over a couple of decades in the industry -- which, to be fair, is often more intangible and harder to interview on.
It doesn’t really help that young people always feel like they know better either.
Even if there’s an experienced architect, it doesn’t help if they’re surrounded by 10 rookies. Nobody has the time to guide all of them.
I've actually ended up in a tech job that hits those a bit, so I am doing a bit better these days at least.
Of course, once the Great Recession hit, myself and many others that I graduated with either left the profession for other employment or simply never found jobs to begin with (that was me). Luckily I was already programming in grad school, and what was something I was filling my time with while looking for an architecture job became a new profession and I never looked back. Given all the comparative benefits of Software Engineering over Architecture, it's been a good move for me.
When you work, you work on building things that don't have a specific end goal, without a specific deadline, where the success criteria are also not purely academic. To the extent your problem is technical, you may be exploring the cave for a heck of a long time with no light.
You end up fighting with organizational issues as well.
With a bridge, the model tells you about the forces in the structure, the component tolerances, and the likely behaviour under various stressful and extreme conditions.
Same with EE. Commercial board design uses schematic simulation, automated layout, and loading/transient emulation. You can't do modern commercial PC motherboard design without modelling software. (Well - you can. But it'll take far longer and be far less reliable.)
Software dev is more a case of nailing things together until they probably mostly sort-of work.
There's some guild lore - which changes fairly regularly - but no formal modelling. Realistically it's somewhat informed guesswork based on the current lore, mostly tested by trial and error.
I think part of the problem is that most of our "raw materials" like nginx and postgres are so robust that you can build really quite large projects without having to do any modeling or other big planning. Things that have millions of users can still be more-or-less slapped together from default parts.
Building a bridge is something where external participants cannot really tell you much about what it should look like. They know you can't just double the number of lanes or add a train track as you're doing it.
With a software project, people can ask for all sorts of stuff, and because it's soft, some engineer will say "ok let me see if I can look at that for you".
Not only that, if you write your software so that it's rigid, that's bad code! You need it to be flexible so that you can cater for future concerns.
Softness also means you have to keep up with trends. You can write your next website with a newer, fancier JS framework, and people can make these new frameworks just sitting at home at their desks. They aren't mixing new concretes.
There are some parts of new software that are more similar than not, and those are more 'engineering-like'. The majority is not.
Evidence: if you did a 'diff' between the plans of two bridges, you would get a limited number of differences. When you diff an OS, a browser, Slack, or the code that runs this site, you get a lot more differences. You need to consider each and every detail. This is not 'big-picture'. Big pictures do not construct things.
Sure, there are lots of similar projects and maybe you can apply a more traditional engineering approach to those.
It is interesting to note that LLMs are bringing the similarities out. You can ask for something and an LLM can code it up from something similar it has seen. But, as anyone who has used them will know, it's only partially correct in its interpretation of that similarity.
We can't just assume the people we hire will avoid the eventualities. This is why we need process, to force people into working in ways that avoid as many of the problems as possible. But then the problem becomes getting people to do the process correctly.
I believe the one thing that could transform the industry most significantly is better management. Most managers and team leads I have worked with, even if they've heard of these books, do not act in ways that prevent the problems they describe. They fall into a rut because they are not following a process.
It gets even worse when they claim to be following a process but aren't. There's loads of business improvement processes out there, but most are paid lip service. Then people get jaded at the process rather than the person or leadership team who clearly wasn't doing it.
This is all the more gobsmacking an oversight when you realise that Smith not only knew of James Watt and his steam engine, but was personally acquainted with Watt, personally arranged for him to have a position at the University of Glasgow, and that that position was specifically to work on and improve the University's own steam engine. Watt remained at that post for a decade or more, if memory serves, much of it prior to the publication of Wealth of Nations in 1776.
A group of 8 persons provides the same amount of work as 4 individuals.
Ringelmann effect - https://en.wikipedia.org/wiki/Ringelmann_effect
Social loafing - https://en.wikipedia.org/wiki/Social_loafing
Doesn't it increase as n^2, as per the picture with the graphs?
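For reference, Brooks's intercommunication formula counts pairwise channels, n(n-1)/2, which does grow on the order of n^2: doubling the team roughly quadruples the channels. A quick check:

```python
def channels(n: int) -> int:
    """Pairwise communication channels among n people: n(n-1)/2."""
    return n * (n - 1) // 2

for n in (4, 8, 16):
    print(n, channels(n))  # 4 -> 6, 8 -> 28, 16 -> 120
```

This is why splitting a team into mostly-isolated subteams with a single point of contact helps: it replaces many person-to-person channels with a few team-to-team ones.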
In reality, there is very often opportunity to take 1 project with ~3 engineers, and break it into 2 smaller projects each with ~3 engineers and run them mostly in parallel. Do your best to isolate those projects, but have a point of contact (EM, PM, tech lead) between the two teams to coordinate whatever dependencies are unavoidable, etc.
You'll notice that this is just a smaller microcosm of how every company is actually structured anyway. There are still diminishing returns, but most people on the team never need to communicate directly with people outside of their project.
"Division of Labor: There’s a limit to how effectively a task can be partitioned among multiple workers. Some tasks simply cannot be divided because of their sequential nature, and for those that can be divided, the division itself can introduce extra work, such as integration and testing of the different parts."
Just replace 'workers' with 'teams'.
Also, very Norm MacDonald (RIP).
Brooks says there can be some gains, but no silver bullet :)
- As a testing agent that learns how the system behaves and how to test it as the developers interact with the agent.
- As a tutor: juniors can learn from the knowledge of experts by interacting with the AI.
- For "automatic" programming, when the problem is characterized by few parameters, there are many known solutions, and there is good knowledge for selecting the correct one.
So far I've read about tutoring and automatic programming, but I haven't read about how to use AI to learn about the system and generate tests.
Gold! My week is made, no matter how many deadlines I blow past.
If the only important information in the book was its main point, it wouldn't have needed to be a book. It could have been a leaflet. Or a bumper sticker - those can be very catchy.
It's worth reading the book for all the other words it contains.
The book most definitely helps tech workers.
I've had developers who treated it like a job, did it, went home, and weren't interested in the Mythical Man Month (but intuitively knew some of these principles).