The Bluffers Guide to the Mythical Man Month (codemanship.wordpress.com)
164 points by JazCE on Nov 20, 2023 | 89 comments



This is a very shallow summary of the Mythical Man Month, and it leaves out at least one concept that the book is most (in)famous for: the '10 times programmer'. An excerpt:

> In one of their studies, Sackman, Erikson, and Grant were measuring performances of a group of experienced programmers. Within just this group the ratios between best and worst performances averaged about 10:1 on productivity measurements and an amazing 5:1 on program speed and space measurements! In short the $20,000/year programmer may well be 10 times as productive as the $10,000/year one.

Brooks did mention some other really useful concepts that are still very valid today: 'No silver bullet' and 'The second-system effect'. These should have been mentioned as well.


I agree. The Mythical Man Month is a book to actually read. It's clear, interesting, and better written than any summary.

Many classics like this are so good because the author is among the first to observe new phenomena. They can write down what they have learned without cultural narratives that distort reality.

The hardest lesson in the book is chapter 11, "Plan to Throw One Away". I have never seen this not be the case in large systems. You must design the system to be rewritten. Accepting this is still a taboo.

> In most projects, the first system built is barely usable. It may be too slow, too big, awkward to use, or all three. There is no alternative but to start again, smarting but smarter, and build a redesigned version in which these problems are solved. The discard and redesign may be done in one lump, or it may be done piece-by-piece. But all large-system experience shows that it will be done. Where a new system concept or new technology is used, one has to build a system to throw away, for even the best planning is not so omniscient as to get it right the first time.

>The management question, therefore, is not whether to build a pilot system and throw it away. You will do that. The only question is whether to plan in advance to build a throwaway, or to promise to deliver the throwaway to customers. Seen this way, the answer is much clearer. Delivering that throwaway to customers buys time, but it does so only at the cost of agony for the user, distraction for the builders while they do the redesign, and a bad reputation for the product that the best redesign will find hard to live down.

>Hence plan to throw one away; you will, anyhow.


> The only question is whether to plan in advance to build a throwaway, or to promise to deliver the throwaway to customers. Seen this way, the answer is much clearer.

It's not that clear! Perhaps you will only learn the lessons of why the throwaway version sucked by delivering it to customers. And it might buy you a lot of time, perhaps multiples of the initial time to get the pilot working.

Brooks has a point, but it may have been more true in the days of shrink-wrap software than SaaS and continuous updates.

I'd argue that it's more important to build an institution capable of retaining knowledge. The worst results are when the pilot system sucks and a whole new team is brought in to build the second version. They'll inevitably start building their pilot version, which sucks, and repeat until management gets sick of funding new teams for the same project (I feel this happens disproportionately in banking for some reason). Instead, you need to keep substantially the same team around to fix their mistakes in the second version.


>The hardest lesson in the book is the chapter 11 "Plan to Throw One Away"

On the contrary, don't factors like microservices and all the constant prototyping at large companies encompass this? I feel in some ways they may have embraced the principle too much, to the point where prod doesn't have a properly stable solution to support while they are rewriting an experimental "improvement".


Part of the problem is you don't necessarily understand the problem domain very well when you're starting out. There are plenty of examples of microservice architecture failures due to incorrectly dividing services: either causing too much internal communication to meet performance acceptance criteria, or struggling with service coordination when performing changes that impact multiple relationships.

You may well end up throwing away some or many of your early microservices as you grow and the company better understands what it wants to build.

Or, you can take another approach (which I have witnessed first hand): throw it all away, go back to a regular, well-designed monolith, and get very significant performance improvements by doing things like SQL joins and transactions instead of putting events in a message queue and paying network traffic penalties.


Well, this was a bigger issue when this book was written and software was typically shipped on disks that couldn't be updated after sale. In the SaaS world, I think deploying your MVP and then rewriting/improving while you have customers running into real-world feedback and pain points, and also generating revenue to validate market demand, is strictly preferable to taking an extra cycle and not being in the market.


I understand the drive to consume book summaries instead of books themselves. There are way too many books, life moves fast, we need results yesterday.

I also think that if we summarized all the great books ever written and boiled them down to one phrase, it would be "read more books".


Such great writing. Clear, compelling, and entertaining.


Has anyone else started feeling the following? In the last year or so, reading a banal, superficial, and platitudinous summary of something has, for me, become inflected with this feeling that it was written by GPT.

In the case of this article, there are little details that suggest it was written by a person, like randomly choosing to abbreviate the book to T-MMM. But still, it's started happening for me that an article can feel "GPT-ish".


I’m just glad they read it.


And by the $20K/yr programmer he likely means the more experienced one. Programmers tend to interpret this 10x as the rockstar, extremely prolific coder. In my experience the 10x programmer doesn't do things that impede the group and has enough experience to reduce the solution space to something manageable. Writing code is only part of the equation. Getting to a solution that works well cuts the time and lays the groundwork for iterations and improvements. People that spew code that generally works are not as helpful.


Despite many attempts to ignore this reality, entire languages like Go were invented to try to mitigate this disparity.


I sympathize with your sentiment, but I don't think Go has anything to do with this.

The disregard for the fact that some engineers are more productive than others originates from companies' processes and planning. Projects are usually estimated without considering who will be working on the project and individuals are compressed to person-weeks. I have experienced it myself and read texts describing this issue in the same terms[1].

It doesn't really help that Go was designed in such a company, but saying that it was designed to mitigate this disparity is saying that the best predictor of an engineer's productivity is the number of LOC cranked out. I don't think that is the case, neither in principle nor in Google particularly.

Much better predictors of productivity are effective communication and conceptual integrity of the design, as the linked article points out. It doesn't really help to use a brilliant language if, 6 months in, you realize you're building the wrong thing, or building it in the wrong way.

1. https://danluu.com/people-matter


In the end, Go is indeed effective at making engineers avoid arcane and difficult code, but doing the "difficult" stuff that's not available in Go is definitely not how the most productive engineers are productive.

IMO, someone making code that abuses special features to the point it is difficult for other members (including future members) of the team to read is the definition of a negative-X programmer. Unless they're working solo, of course.

Also, like wavesbelow mentioned in his great comment, the "10x" doesn't come from coding prowess alone: it starts long before that, with the process and planning.


This is starkly opposed to how Go is marketed, especially here on HN.

Go was designed to maximise the number of 'any developers' that could join a project, i.e. 1x developers.


Isn't this exactly what the OP was saying?


Thank you. I had it backwards.


I get the assumption that with Go you will avoid 0.2x programmers and make everyone a 1x programmer. But to me, a 10x programmer can generate complex, well-engineered systems with a reasonable structure, not by writing everything in low-level, super-hard code. From the level a 10x programmer might operate at, I'm not sure the programming language would be that important. Expressivity is nice and some languages are better than others, but from my experience you can write nice artifacts with plain tools.


> a 10x programmer can generate complex, well engineered systems with a reasonable structure, not by writing everything in low level, super hard code ?

Yeah maybe, but I'm certainly not at that level.

If a 10x programmer exposes foo() and bar() methods to me, I'll call foo().bar() and not realise I've made a mistake, or that I've violated some invariant that's only in his head, not in the low-level code itself.


Many things are invented to solve problems that don't actually exist.


I am surprised that there's less mention of Brooks's proposed team organization: a team of 10 led by a surgeon. The team is supposed to work on a single project. Every org I have seen has much smaller teams with much more individual responsibility and attendant coordination problems.

I can see shaving 1-3 people off of Brooks's ideal team (we don't need a secretary to type anymore, and PMs that organize projects at the behest of a technical leader are effective).

What I have seen over and over is, whoever writes the most code tends to get the most power in an org. Whoever is delivering features quickly gets power. This kind of works, but these coders frequently leave poorly thought out architecture that is hard to extend.


Classic "You get promoted until you are incompetent" situation


That’s another book: the Peter Principle, by Laurence J. Peter. It’s also in my “books you need to read if you’re going to work on a development team” list.


Intrigued to hear what else is on your list?


My go-tos are The Mythical Man Month and Peopleware: Productive Projects and Teams. There's The Peter Principle too. On more (slightly) technical topics, I like The Pragmatic Programmer. I also like people to be aware of re-engineering concepts, so Hammer and Champy's Reengineering the Corporation is also good.


A good set.

Arthur Bloch's Murphy's Law and Other Reasons Why Things go ƃuoɹʍ is a compendium of engineering and organisational dictums. Of itself it provides relatively little insight, but many of the principles themselves trace to more substantive works, including Parkinson's Laws, the Peter Principle, and many, many others:

<https://archive.org/details/murphyslawotherr0000bloc>

Bloch himself was working off a number of earlier compilations, several already popular in the high-tech industry (programming, weapons design, biotech, etc.), as well as other fields. I'd tracked those down at one point but seem to have lost the references. Bloch cites some, though not all, of his sources in this and subsequent "Murphy's Law" books.

"Gamesmanship" and "Systemantics" cover similar ground:

<https://en.m.wikipedia.org/wiki/Gamesmanship>

<https://en.m.wikipedia.org/wiki/Systemantics>

See also Richard I. Cook's "How Complex Systems Fail": <http://web.mit.edu/2.75/resources/random/How%20Complex%20Sys...>

Charles Perrow explores the organisational foundations of failure in Normal Accidents and The Next Catastrophe. To an extent Joseph Tainter and Jared Diamond's books (particularly each author's independent Collapse titles) look into the dynamic at much greater depth.

For programming, MMM (Brooks), Peopleware (DeMarco & Lister), The Psychology of Computer Programming (Weinberg), Code Complete (McConnell), and a substantial literature on quality assessment and practices emerged in the 1970s -- 1990s. The bibliographies of the above books, as well as citations of them, should provide an ample set of references for further reading.


You should also read Parkinson's Law. It's more about bureaucratic systems, but isn't any org such a system?


Thanks for the reco!


Not only that, but there is a threshold where delivering features more quickly may indicate poorly thought-out, hard-to-maintain code. So you really need to have an experienced developer as a supervisor to assess quality of implementation, and not just look at the quantity of features delivered.


While Brooks's book is nominally about software processes, in my career of over 40 years I found it applicable to just about every engineering discipline in which I played a part. I read this book back in the '70s and then every 10 years or so, and I never failed to learn something new from it. I just wish the managers and companies I worked for had applied these lessons.


I'm a bit of a broken record on the idea, but I'm fairly convinced that much of what we think is unique to software is not that unique. Coordinating work with people is hard, pretty much period.


I think we forget how much software engineering should be influenced by other engineering disciplines, rather than the other way around – I think there are plenty of cautionary tales of "move fast, break things" physical things startups.


Systems in general have similarities.

Human organisations, whether governmental, commercial, educational, religious, military, charitable, social, or any other principal focus, tend to have and exhibit strongly similar patterns.

There is of course also domain-specific knowledge, but even much of that almost always proves more general on closer examination, with much of the distinction being of labeling and language rather than behaviour and phenomena.


Yup, this book is always in the must read list for software developers but I've barely met any product/project/program managers that have even heard of it. They're often the people that would gain the most insight from the topics in the book.


Every engineer would benefit from it too.


The Mythical Man Month is great. What makes it great is that it circles around how important everything but code is in any professional software development progression.

I would argue that it should be the bible for every manager who manages engineers. They should study it thoroughly, along with all the literature it touches and mentions.

The best book I have read on software development by far.


When I read it, I was saddened that most of the fundamental errors were still happening at my workplace decades after it was written.


It's always a surprise to know how far back good ideas actually go. Brooks figured out all this stuff in the 70s, pretty much as soon as it was possible for someone to have done this type of work and written a book about it.

Reminds me of Adam Smith's writings about the nascent factory economy, and perhaps ancient philosophers as well.


The article also points out that programming, even more than other professions, suffers from a lack of available "old people in the trenches" who can pass on these ideas to newcomers. This is because:

- the field has been growing so much that at any point in time the majority of devs will be relatively new. A field that doubles every three years will never have more than 50% of people with more than 3 years of experience, obviously.

- I have no numbers to back this up, but intuitively it feels like more devs "drop out" of software development than in other professions. Many become managers, others take up various non-development projects or retire altogether. This makes it so that even fewer senior people are available.
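The doubling arithmetic in the first point can be checked directly. A minimal sketch (hypothetical growth model, not from the article) shows why a field doubling every 3 years always sits at 50% with under 3 years of experience:

```python
# If the developer population doubles every 3 years, the developers with
# more than 3 years of experience today are exactly those who were already
# in the field 3 years ago -- i.e. half of today's population.
def population(t_years: float, pop0: float = 1.0, doubling_years: float = 3.0) -> float:
    return pop0 * 2 ** (t_years / doubling_years)

now = population(30)
experienced = population(30 - 3)  # everyone who joined at least 3 years ago
print(experienced / now)         # 0.5, regardless of the year chosen
```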


I think a big part of the "old people in the trenches" problem is that tech has historically had an "up or out" mentality. It seems like most of the older engineers I know either get pushed out or end up as principal engineers where they aren't routinely interacting with the younger people. I've encountered a few people who have managed to avoid this, but it's rare.

It doesn't help that our industry has been incredibly fast-moving compared to most industries, and our interview process is geared toward either being a fresh grad who's taken an algos class recently or knowing all the latest frameworks. Not a lot of places are interviewing on the sorts of experience you gain over a couple of decades in the industry -- which, to be fair, is often more intangible and harder to interview on.


> I know either get pushed out or end up as principal engineers where they aren't routinely interacting with the younger people.

It doesn’t really help that young people always feel like they know better either.

Even if there’s an experienced architect, it doesn’t help if they’re surrounded by 10 rookies. Nobody has the time to guide all of them.


In my experience rookies/interns/junior devs are a pleasure to work with: knowledge sponges. It's a small subset of those who become senior devs that can be difficult.


Yeah I also get this feeling that a lot of junior devs decide they don't really want to do it anymore. It's hard to get numbers on it because they will still have a related job title.


Almost 30 years ago I was a freshman CS student, and a prof rolled out data that said by the time we were 30 years old, most of us would have already left the field. I can tell you that almost no one I graduated with is still working in tech, and not because we retired early off stock. It's a challenging field for so many reasons.


Based on my own experience, I don't find it hard to believe that so many want out, but I do find it hard to believe that so many get out and so quickly. These days I find myself wishing I had gone the route of physician's assistant so I could do some tangible, socially-needed work that still pays the bills.

I've actually ended up in a tech job that hits those a bit, so I am doing a bit better these days at least.


It would be interesting to see how that compares with other professions. Do carpenters or dentists leave for other fields at the same rate? How about other types of engineers?


I can only speak from my experience having gone to school for Architecture. They told us that with each recession, the profession loses a generation of Architects. I was wrapping up grad school just before the Great Recession, and professors were surprised that a relatively greater percentage of graduates had stayed in the field, thanks to continued economic prosperity since the mid-90s; the dot-com bust hadn't impacted the Architecture field all that much.

Of course, once the Great Recession hit, myself and many others that I graduated with either left the profession for other employment or simply never found jobs to begin with (that was me). Luckily I was already programming in grad school, and what was something I was filling my time with while looking for an architecture job became a new profession and I never looked back. Given all the comparative benefits of Software Engineering over Architecture, it's been a good move for me.


Yeah, for my whole career in tech, except the first year, I've had that feeling. I don't want to do it, but I couldn't leave this golden cage of enterprise bullshit development.


I think the problem is that Computer Engineering is still not an ingrained concept in universities. Sure, there are courses where some concepts are taught, but it's difficult if not impossible to tailor your degree towards being a real engineer.


Side note: I suspect you mean software engineering? Computer engineering is absolutely a degree and concerns the engineering of a computer. It starts off like an EE and ends up taking classes like 'microprocessor design' in place of power systems and the like.


This is the thing. A degree is closed-ended. You build things that can be done within a deadline, and have been done within that deadline, by hundreds of groups before you. You study well known problems that are described in many ways in various sources.

When you work, you work on building things that don't have a specific end goal, without a specific deadline, where the success criteria are also not purely academic. To the extent your problem is technical, you may be exploring the cave for a heck of a long time with no light.

You end up fighting with organizational issues as well.


That's not the only problem. In most engineering you build a mathematical model first, play with it until it's right, and only build the product when you're fairly sure it's going to work.

With a bridge, the model tells you about the forces in the structure, the component tolerances, and the likely behaviour under various stressful and extreme conditions.

Same with EE. Commercial board design uses schematic simulation, automated layout, and loading/transient emulation. You can't do modern commercial PC motherboard design without modelling software. (Well - you can. But it'll take far longer and be far less reliable.)

Software dev is more a case of nailing things together until they probably mostly sort-of work.

There's some guild lore - which changes fairly regularly - but no formal modelling. Realistically it's somewhat informed guesswork based on the current lore, mostly tested by trial and error.


That kind of engineering is surely done for "big" projects but I'm pretty sure my contractor didn't do any FEA modeling when he put up my shed. He knows from experience that normal wooden beams and some steel bolts will be fine.

I think part of the problem is that most of our "raw materials" like nginx and postgres are so robust that you can build really quite large projects without having to do any modeling or other big planning. Things that have millions of users can still be more-or-less slapped together from default parts.


Then how do civil engineers get their degrees?


At university, like everyone else? Sit there, study structures, materials, etc. Get a job, get a charter...


So why can't we have a computer engineering degree as you say?


It's the softness of software that makes it hard.

Building a bridge is something where external participants cannot really tell you much about what it should look like. They know you can't just double the number of lanes or add a train track as you're doing it.

With a software project, people can ask for all sorts of stuff, and because it's soft, some engineer will say "ok let me see if I can look at that for you".

Not only that, if you write your software so that it's rigid, that's bad code! You need it to be flexible so that you can cater for future concerns.

Softness also means you have to keep up with trends. You can write your next website with a newer, fancier JS framework, and people can make these new frameworks just sitting at home at their desks. They aren't mixing new concretes.


Mainly because every bridge is more like every other bridge than not. Every skyscraper is more like others than not.

While some parts of new software are more similar than not, and those parts are more 'engineering-like', the majority is not.

Evidenced by: if you did a 'diff' between the plans of two bridges, you would get a limited number of differences. When you diff an OS, a browser, Slack, and the code that runs this site, you get a lot more differences. You need to consider each and every detail. This is not 'big-picture'. Big pictures do not construct things.

Sure, there are lots of similar projects and maybe you can apply a more traditional engineering approach to those.

It is interesting to note that LLMs are bringing the similarities out. You can ask for something, and it can code up something from something similar it has seen. But, as anyone who has used them will know, it's only partially correct in its interpretation of that similarity.


Computer engineering degrees already exist, but they're much closer to the hardware.


I'd call them eventualities more than ideas. Do X thing in Y way and certain things are going to result every time. Somebody finally puts that in a book. But most people are either ignorant of it, forget it, or ignore it.

We can't just assume the people we hire will avoid the eventualities. This is why we need process, to force people into working in ways that avoid as many of the problems as possible. But then the problem becomes getting people to do the process correctly.

I believe the one thing that could transform the industry most significantly is better management. Most managers and team leads I have worked with, even if they've heard of these books, do not act in ways to prevent their problems. They fall into a rut, because they are not following a process.

It gets even worse when they claim to be following a process but aren't. There's loads of business improvement processes out there, but most are paid lip service. Then people get jaded at the process rather than the person or leadership team who clearly wasn't doing it.


Since you bring up Smith, one of the absolutely glaring oversights of Wealth of Nations is that it utterly fails to grasp or note the role steam power would play in the next century of increasing automation, factory-system development, and transportation within England and the economies of Europe and North America especially.

This is all the more gobsmacking an oversight when you realise that Smith not only knew of James Watt and his steam engine, but was personally acquainted with Watt, personally arranged for him to have a position at the University of Glasgow, and that that position was specifically to work on and improve the University's own steam engine. Watt remained at that post for a decade or more, if memory serves, much of that prior to the publication of Wealth in 1776.


The concept of "conceptual integrity" is one of the most useful things I've ever learned. The tension between conceptual integrity and things like group-based communication and requirements-gathering (what one might call "representativeness") seems to me to be a foundational issue not just in software development but in human civilization as a whole.


This is nothing new. In 1913, Max Ringelmann measured the effort of individuals when working in a group. The results are impressive: https://gallica.bnf.fr/ark:/12148/bpt6k54409695.image.f14

A group of 8 persons provides the same amount of work as 4 individuals.


Yep, this is why small teams work best. It also helps in maintaining better overall "Conceptual Integrity".

Ringelmann effect - https://en.wikipedia.org/wiki/Ringelmann_effect

Social loafing - https://en.wikipedia.org/wiki/Social_loafing


> As more people are added to a project, the complexity of communication increases exponentially.

Doesn't it increase by n^2? as per the picture with the graphs?


Laypeople use "exponential" to mean "superlinear". It's fine, probably.


Laypeople use “exponential” to mean “fast”, with what “fast" means varying by speaker and context.


Brooks wrote that it was quadratic, so that seems to be an error in the summary.
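For the record, the channel count is one per pair of people, n(n-1)/2, which is quadratic. A quick sketch (hypothetical team sizes, not from the book) makes the gap with genuinely exponential growth obvious:

```python
# Pairwise communication channels among n people: one per pair, n*(n-1)/2.
def channels(n: int) -> int:
    return n * (n - 1) // 2

# Quadratic channel growth vs. true exponential growth (2^n) for comparison.
for n in (5, 10, 20, 50):
    print(n, channels(n), 2 ** n)
```

By n = 10 the exponential curve has already left the quadratic one far behind, which is why the distinction matters.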


I've always had a little bit of a gripe with how the "communications complexity" is presented here. As if the only way to communicate on a team is to have everyone stand in a circle and yell at everyone else.

In reality, there is very often opportunity to take 1 project with ~3 engineers, and break it into 2 smaller projects each with ~3 engineers and run them mostly in parallel. Do your best to isolate those projects, but have a point of contact (EM, PM, tech lead) between the two teams to coordinate whatever dependencies are unavoidable, etc.

You'll notice, that this is just a smaller microcosm of how every company is actually structured anyway. There's still diminishing returns, but most people on the team never need to communicate directly with people outside of their project.
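As a toy illustration of that structure (hypothetical numbers, not from the book): splitting six engineers into two three-person sub-projects with a single coordination link between them cuts the channel count by more than half:

```python
# Channels within a fully connected group of n people: n*(n-1)/2.
def channels(n: int) -> int:
    return n * (n - 1) // 2

one_big_team = channels(6)                       # 15 channels
two_small_teams = channels(3) + channels(3) + 1  # 3 + 3 + 1 coordination link = 7
print(one_big_team, two_small_teams)
```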


Is this not just a different manifestation of the third key observation?

"Division of Labor: There’s a limit to how effectively a task can be partitioned among multiple workers. Some tasks simply cannot be divided because of their sequential nature, and for those that can be divided, the division itself can introduce extra work, such as integration and testing of the different parts."

Just replace 'workers' with 'teams'.


Pretty sure it simply represents the upper bound of communication complexity. Any management of it can improve coordination. The conceptual lower bound is that each additional programmer adds 100% more programming speed.


The book only takes like an hour to read, you know.


When you try to read and discuss as a group, it takes much longer.


You can read 336 pages in an hour?


The pages are very sparse and there’s lots of obsolete sections you can skip. The meat of it is in like two chapters.


I read it in the 1970s as a youngster, now retired, brilliant then and now.


I'm so sorry but I read it as "the Mythical Moth Man" for a second


very Kafka. If only the industry had adopted your 'vision'...

Also, very Norm MacDonald (RIP).


Another aspect discussed in TMMM not present in this summary is the possible benefits of AI.

Brooks says there can be some gains, but no silver bullet :)

- As a testing agent that learns how the system behaves and how to test it as the developers interact with the agent.

- As a tutor: juniors can learn from the knowledge of experts by interacting with the AI.

- For "automatic" programming when the problem is characterized by few parameters, when there are many known solutions and good knowledge to select the correct solution.

So far I've read about tutoring and automatic programming, but I haven't read about how to use AI to learn about the system and generate tests.


Once, when complaining to a colleague about our workplace and their hiring and staffing idiosyncrasies, I quipped "I should give <manager> a copy of TMMM". My colleague, without missing a beat, said "You should give him two copies so he can read it faster".


>”give him 2 copies so he can read it faster”

Gold! My week is made, no matter how many deadlines I blow past.


I'm not sure if reading the book would help. If the manager has a technical background (e.g., they have worked as developers before) then they already know the main point of the book. If the manager does not have a tech background, then there's little that the book can do for them.


> they already know the main point of the book.

If the only important information in the book was its main point, it wouldn't have needed to be a book. It could have been a leaflet. Or a bumper sticker - those can be very catchy.

It's worth reading the book for all the other words it contains.


So this book is... not for people with a tech background, AND not for people without a tech background? Should I put the cat in the box now?


They might have been making that joke and nobody got it.


You believe every tech oriented person already knows everything there is in the book?


No, but most will at some point in their career hear about it in comment sections on HN and the like, or in many different forms hear the adage that adding more people to a project makes it later.


Many developers don't spend time on hacker news or even any of the similar forums. To be stereotypical, HN readers seem more likely to be young, working for FAANG in Silicon Valley, and not a middle aged sharepoint integration developer living in the Midwest (although obviously they are also on this site).

I've had developers who treated it like a job, did it, went home, and weren't interested in the Mythical Man Month (but intuitively knew some of these principles).


Everyone experiences things daily without fully appreciating the patterns and insights that could be derived from them (it would be exhausting if we did).

The book most definitely helps tech workers.



