I made this mistake once, albeit with Newton's second law instead. I was explaining my estimations for changing some key processes in a company. As a kind of mental aid, I used 'F' as the total effort needed to make the change, while 'm' represented the size of the organization, and 'a' would represent the speed we wanted to perform the change with. F = m * a. A simple way of illustrating the exponential relationship between these factors.
But it blew their mind. It took on 'mystical' properties, as if we had distilled change management into pure Newtonian motion. I started seeing not only F = ma in memos on other issues, but also other equations like E = mc^2, trying to wrestle management lingo into relativistic conservation of energy.
> I used 'F' as the total effort needed to make the change, while 'm' represented the size of the organization, and 'a' would represent the speed we wanted to perform the change with. F = m * a. A simple way of illustrating the exponential relationship between these factors.
Am I missing a certain insight or did you mean the linear relationship not exponential?
But using exponential to mean a linear equation is all sorts of wrong, even in a colloquial sense.
I would understand a colloquial meaning of "exponential" as "it grows faster than linear". Wrong, but it makes informal sense. But using it as a synonym for linear makes no sense whatsoever.
There were a bunch of answers, but this is the one that is the most handy in my toolkit.
Linear growth of A on B: For every new B, k new As emerge, for some k (could be and often is fractional).
Quadratic growth of A on B: A grows approximately linearly with the number of handshakes in B, so that in the above language, for every new B, k × B new As emerge, one k for every existing B that the new B could interact with.
On this basis one would say that the number of bugs is quadratic in the lines of source code. Each line has some small probability of interacting with some distant line in some non-negligible buggy way.
Exponential growth of A on B: A grows approximately linearly with the number of connections between A and B, so that in the above language, for every new B, k × A new As emerge, one k for every A that the new B could interact with.
So for “knowledge increases exponentially,” the precise technical claim is that it increases exponentially with time, and this means (somewhat dubiously for long scales) that in a given period of time, every fact you know has a constant probability of generating a new fact which you now also know. This holds when you don't know very much about a new scenario but tends to be tempered out rapidly, the phenomenon of “low hanging fruit” etc. A meme similarly has an approximately exponential region where the number of folks exposed to the meme drives further exposure to the meme, but this dries up as the probability of “already shared it!” rises and curtails that “I must share it with friends” impulse.
Personally I suspect that's (slightly) false and that it's actually something like quadratic-hyperbolic growth, which looks awfully similar to exponential growth if you don't zoom in.
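A rough sketch of those three rules as difference equations, if it helps. The constant k and the step count are arbitrary choices of mine, purely for illustration:

```python
# Sketch of the three growth rules above as difference equations.
# The constant k and the number of steps are arbitrary choices.

def grow(rule, k=0.1, steps=100):
    A = 1.0
    for B in range(1, steps + 1):
        if rule == "linear":         # every new B adds k new As
            A += k
        elif rule == "quadratic":    # every new B adds k new As per existing B
            A += k * B
        elif rule == "exponential":  # every new B adds k new As per existing A
            A += k * A
    return A

for rule in ("linear", "quadratic", "exponential"):
    print(rule, round(grow(rule), 1))
# after the same 100 steps: linear ~11, quadratic ~506, exponential ~13781
```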
humans learn words through contextual exposure to them.
exponential has two meanings depending on whether you studied and understood what exponentials are in a math class, or did not.
if you did not but you've learned the word through repeated verbal contexts, it means "bigly"
also, in the example given of F=ma, the "a" is characterized as "speed" (per second), which it is not; it is "acceleration" (per second per second), which is related to speed via an integration over time (seconds). but that's a f(unctional) relationship, as is exponentiation, so from context I think people pick up that exponential means something like biglyly
(When I read the Mysticization title I was prepared for a pun on the game Myst, but it didn't show up. Myst and SimCity date from the same period of time, and SimCity is a system dynamics game. I was disappoint.)
I hope one day to possess an explanation of why that behaviour is sometimes good. Since, like you, I am alarmed/concerned by it. But clearly, largely, it's what makes "a lot of the world go round".
This emotion, "wonder", is profoundly suspicious to me; and I'm often hostile to it. I am sure I'm missing something...
People are trying to copy the smartest person in the room, but don't themselves have a good way of evaluating how smart an idea is. When that strategy produces a miss, it produces a rather absurd one (like in this anecdote).
But the strategy itself is good, particularly if the group correctly identifies who has the best ideas. Copying smart people works if you can find them.
> Copying smart people works if you can find them.
Well, works more often than not (I hope your downside risks are low). While understanding what they are doing works almost every time.
Besides, it works more often than not in a simple setting where there is no antagonistic communicator. In a world with propaganda and politics, it fails almost every time.
> understanding what they are doing works almost every time.
To be fair, understanding what they are doing requires being smart yourself (usually not as smart as coming up with it in the first place, but only usually), and while smartness isn't always or completely unlearnable, in practice it's usually hard and/or impractical, especially if you're not smart to begin with.
Of course, distinguishing (honest) smart people from (possibly smart) con artists also generally requires being smart, so that doesn't help much.
You cannot factor the strategy from its implementation here. I think that was trying to be your point, but then you contradicted yourself. I must encourage you to commit. A dumb person who tries to find smart people will find only con artists. It's the law of lemons.
A strategy doesn't have to work every time to be good. "Dumb people" can and do find people smarter than themselves and put in an effort to copy them. Usually that is a good idea - better than trying to go it alone, anyway.
That is a slightly different situation though, isn't it? The US presidential elections are a choice between 2 options, usually both bad. The results suggest all 3 candidates in the last 2 US elections are borderline unelectable and both parties are engaged in a bizarre war to find the worst candidate people will vote for.
Which is not at all the same as the fairly typical group dynamics.
Wonder is what drives people to be interested in and want to learn about how the world works. It's a very important emotion to let thrive. But, like all good things, too much of it can become harmful. Wonder is best when tempered with rigor. It is a starting point, but it must not also be the ending point.
If you can't experience wonder, that must make you history's most recent example of someone who's basically figured everything out. That must be very nice for you, but you are assuredly wrong about many things that you know you know.
So you claimed "total effort equals org size times the speed you want the change at"? And that that was equivalent to F = m * a?
That's not mind-blowing so much as jaw-droppingly wrong. Org size is an exponential factor for things like that. And speed of change isn't the issue for orgs so much as changing direction under the current momentum, if we're using a physical model. No wonder it wasn't met with widespread acceptance.
It was a vast oversimplification that mapped badly to the situation but delivered, no doubt, with confidence, so of course it was adopted by copycats. Simplicity and overconfidence sells.
I created a fairly popular system dynamics modeling application back when I was a grad student [0].
When using modeling tools like system dynamics, it's useful to keep in mind George Box's quote that "All models are wrong, but some are useful".
When using a modeling tool to describe any form of social system you're creating an imperfect copy of it. This imperfect copy embeds what the modeler (rightly or wrongly) views to be important and how they believe the system to work.
The resulting model, though always wrong to some extent, may be useful. It may help you obtain a better understanding of a system and cause positive change in an organization. On the other hand, it may not be useful; it may even mislead you.
You can think of modeling a lot like various software engineering practices like Agile. Sometimes these help teams, sometimes they don't. At the end of the day though, it's really about the teams using them not the specific techniques.
This article is not a complaint about system dynamics. The complaint is about the "mysticizing" part.
If you look at the first example, it's a person that mimicked all the procedures and jargon of systems thinking, while ignoring all of the most fundamental knowledge of the area. In particular, the most fundamental insight about human systems - that the system reacts to the optimization - goes so far over the person's head that there isn't even a whoosh.
System dynamics is good, and very useful. But you have to understand it to apply it.
The article clearly lays out two complaints, the first is “mysticism”, the second is “mechanization”. Personally I don’t think there’s anything particularly problematic with the latter model.
I might be applying my own mysticism about the original authors of the posts that this article speaks of; but I think people are missing the point.
Yes. 100%. The map is not the territory here. The models presented in these examples are not perfect representations of the teams that people have built at all companies. It is easy and fun to poke holes in them as we're prone to do.
But for whatever it's worth, software delivery absolutely is a system. The level of understanding about how that system operates is all over the place. Some managers have absolutely zero idea about how their software goes from "issue open" to "its running in production" and they need to. Engineers need to as well. If it's informally occurring or informal knowledge, it should be made explicit. The part where we all get our back up is when a manager goes "improve this metric" without understanding system dynamics. The example about constraints stands out to me. If you're going to optimize a hot path in your software, you don't throw your hands up and go "All models are wrong!" You can actually instrument the way it works and see which method or areas require attention and thought. If we take these as analogies and apply them to how we eliminate variability then life is generally better for everyone.
I do tend to agree however, that these things are not leadership. But leadership requires situational awareness. Situational awareness is only acquired over time by "intuition" (which is often wrong) or by explicitly writing it down. Intuition is very difficult to explain to other executives, managers, and yes even software engineers. People like tangible things that reduce the resolution of the objects they represent and that's not going to change any time soon.
A system/organization with a sufficient number of members will create emergent behaviors that no individual leader (or engineering leader) can account for. That's the domain of culture. See religion for examples of successful ones. A key characteristic is that the individual is meaningless and at best only a channel through which the greater-than-individual interactions play out.
So, when an individual applies system modelling to the situation -- they will necessarily overemphasize the portions they understand and believe while FAILING to incorporate anything representing the behaviors they aren't aware of.
People like comfort blankets. System modeling is a particularly powerful one. The actual applicability of the practice is a question to be left to evolution.
A while back there was a post that said in essence: "queues grow to infinite size when the consumer is approaching 100% utilisation"
However, it went deep into lots of theory, which made it all sound very sciency and clever, and it failed to get across the simplicity, which even a 9-year-old can grasp:
Queues are like sinks: if you put water in at close to the rate that the drain lets it out, the water level of the sink doesn't drop, and any slight imperfection makes the level rise.
Add an animated gif, and be done.
Instead it dressed up queues as some complex and difficult to predict beast, that only big companies with very clever people can use.
I don't think your analogy is actually right. If the water pouring into the sink delivered more water in a second than the drain could handle, that'd be more than 100% utilization.
It's actually kind of surprising that you can end up with a queue going to infinity when you can process N events per second, and fewer than N events per second, on average, are occurring. I think it has something to do with gambler's ruin and the fact that continuing to bet $1 on a fair 50/50 game will eventually bankrupt you, every time, even though the game's "fair."
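A toy simulation of that effect, in case it helps. Everything here (the tick model, the probabilities) is an arbitrary sketch, not a claim about any real system:

```python
import random

def avg_backlog(rho, service_p=0.5, ticks=500_000, seed=42):
    """Toy single-server queue: each tick a job arrives with probability
    rho * service_p, and the job at the head (if any) finishes with
    probability service_p. Utilization rho stays below 1, so on average
    the server can keep up, yet the backlog still blows up as rho -> 1."""
    random.seed(seed)
    arrive_p = rho * service_p
    q = total = 0
    for _ in range(ticks):
        if random.random() < arrive_p:
            q += 1
        if q > 0 and random.random() < service_p:
            q -= 1
        total += q
    return total / ticks

for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"utilization {rho:.2f} -> average backlog ~{avg_backlog(rho):.1f}")
# The average backlog grows roughly like rho / (1 - rho):
# fine at 50% utilization, painful past 90%, absurd at 99%.
```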
> If the water pouring into the sink delivered more water in a second than the drain could handle, that'd be more than 100% utilization.
this is where the sink bit comes in. For small amounts, queues are great at absorbing load. For example, you can dump two or three whole glasses at once into a sink, and the drain will be fully utilised. (Try doing that by pouring the glasses concurrently into the drain pipe at once.)
> fewer than N events per second, on average, are occurring
average is doing a lot of work here! Unless you are very lucky, an average can hide a lot of variance. If we go back to the sink analogy, pure water flows away more or less at a constant rate (if we ignore whatever the spinny whirlpool effect is). The variance in processing time is very small.
However, if we start dumping washing-up water in there, lumps of food will start to clog the drain. On average, assuming no blockages, processing time increases a little bit. But when we get close to peak capacity, a blockage will cause an overflow that's very difficult to clear. This is because there is no extra capacity to get through the backlog.
the rule of thumb is that as you get close to 80-90% of your consuming capacity, your queues will begin to expand.
when you are designing your stuff two things might save you: expiring keys, and bounded queues.
The first one means data loss that you need to handle yourself. The worst part is, you might not get informed that your message is lost. So that needs careful thought.
the second one means that things pushing into the queue need to check the state of it before pushing. Back pressure is always good. At least the client knows _before_ the data is sent.
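A minimal sketch of the bounded-queue option with back pressure, using Python's standard library; the size limit and the rejection handling are just placeholders:

```python
import queue

jobs = queue.Queue(maxsize=100)   # bounded: the backlog can't grow without limit

def submit(job):
    """Producer side: refuse work up front instead of queueing it forever."""
    try:
        jobs.put_nowait(job)      # raises queue.Full once the queue is at capacity
        return True
    except queue.Full:
        return False              # back pressure: the client knows *before* sending

for i in range(150):
    if not submit(i):
        print(f"job {i} rejected: queue full, slow down or retry later")
        break
```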
I think it’s similar to that, with some critical point at which the behavior makes a discontinuous change — as seen from the examples:
- the gambler goes broke
- a sink carrying over 100% capacity can jam
In general, if you have some kind of system where there’s a “critical threshold” and a discontinuous (or even non-linear) result for crossing it, then approaching that boundary will be increasingly dominated by that effect, as the distribution of events starts to cross into that regime.
For the example of a queue, the odds of a packet failing due to delay suddenly spike as you approach "full capacity", because the variation in packet timings causes it to "cross over" that boundary for brief periods, the closer the mean of the distribution moves to the critical threshold.
Is it just me, or is the author making a lot of soup from very little meat?
E.g., the first bit:
> > For example, if you don’t have a backlog of ready commits, then speeding up your deploy rate may not be valuable.
> Does this mean speeding up deploys to 1 minute is valueless? That’s crazy talk!
That looks to me like a blatant straw man. The quoted author said "may", meaning there may well be other reasons that faster deploys are better. And there are, for a different model which the article's author suggests but doesn't bother to explain with the rigor that he's expecting from people talking system dynamics.
For any intellectual tool, some people apply it blindly and some put in the work to use it contextually. Can you use SD approaches blindly? Sure. But in my experience it's way less common than the person using an implicit model (e.g., the tiresome "software is a factory" analogies).
While their overall point is valid, they seem to be overly tolerant of straw in the examples they knock down. Eg:
> Hale's story misses that the slower pace at large organizations is less about the growing size of the team and more about the growing size and complexity of the codebase and the product.
In a Hale-like model, the growing size and complexity of the codebase and the product are communication and contention costs. Unless you wrote the entire codebase yourself (and probably even if you did), you are communicating and contending with the other authors of that codebase, using the codebase itself as a communication medium. Getting rid of other communication and contention costs provides a constant-factor improvement to a process that is still asymptotically quadratic. (I.e., O(a*N^2 + b*N^2 + ...) is still O(N^2) as long as any of a, b, ... are nonzero.)
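A made-up numeric version of that constant-factor point, if it helps:

```python
def coordination_cost(n_contributors, per_pair_cost):
    # Every pair of contributors (past or present) is a potential
    # communication/contention edge through the shared codebase.
    return per_pair_cost * n_contributors * (n_contributors - 1) / 2

for n in (10, 100, 1000):
    print(n, coordination_cost(n, 1.0), coordination_cost(n, 0.5))
# Halving the per-pair cost (better docs, smaller RACI, etc.) lowers the
# curve, but it still bends quadratically: O(a*N^2 + b*N^2) is O(N^2).
```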
That's pretty silly. What "shared resources" am I contending with previous developers for? It's not like "oops, I can't deploy to the QA environment now, it's in use by somebody who left in 2015"
Also, all the remedies he suggests (keeping your responsibility assignment matrix small) would make absolutely no sense if "communication costs" were dominated by the cost of communicating with developers in the past. By any reasonable reading, communication costs refer to developers simultaneously working on a task (in analogy to parallel processes).
> What "shared resources" am I contending with previous developers for?
FTA:
> CI takes 30 minutes to run when it used to take 3? Fire everybody, and it'll still take 30 minutes to run.
You are contending with them (specifically, the code (tests, etc) they wrote) for CI runtime. The code doesn't go away just because the developer who wrote it is gone.
> all the remedies he suggests
Are specific to the low-hanging fruit of scenarios that do not include firing every other developer - I wrote "In a Hale-like model" for a reason.
I think, if I had stopped at "hey look, Will Larson's model in this post wasn't realistic, this proves there's a bunch of people out there being mystical about system dynamics", this would certainly be a straw man -- but my argument went further. It's "hey look, Will Larson's model in this post wasn't realistic -- in fact, it's impossible to use stock and flow modeling to tell a realistic story about this subject matter, yet nevertheless here is system dynamics being peddled as useful in basically all contexts as a fundamental skill of leadership" which is not a straw man.
It seems to me that it's impossible to tell any realistic story. The point of stories is abstraction such that we can see something non-obvious. "All models are wrong, some models are useful." So to me the appropriate use of SD models is not "perfectly model the vastness we are confronted with", but "find a model we can use for a while to get insights that we can use to try out possible improvements". Stock-and-flow models can indeed do that with software development sometimes.
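For instance, here is a minimal stock-and-flow sketch of the deploy example, with every rate and number invented purely for illustration:

```python
# One stock (ready commits waiting to ship), one inflow (merges per day),
# one outflow (deploys per day, capped by deploy capacity).
# All rates are invented for illustration.

def backlog_over_time(merge_rate, deploy_capacity, days=30):
    ready_commits = 0.0                                  # the stock
    history = []
    for _ in range(days):
        ready_commits += merge_rate                      # inflow
        shipped = min(ready_commits, deploy_capacity)    # outflow is capped
        ready_commits -= shipped
        history.append(ready_commits)
    return history

# If deploy capacity exceeds the merge rate, the backlog hovers near zero --
# the "speeding up deploys may not be valuable" case. If it doesn't, the
# backlog (and with it lead time) grows without bound.
print(backlog_over_time(merge_rate=5, deploy_capacity=8)[-1])   # 0.0
print(backlog_over_time(merge_rate=5, deploy_capacity=3)[-1])   # 60.0 and climbing
```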
If people are saying that X is the One True Way to Think, yeah, that'd be wrong no matter the X. But from what you wrote, I'm not seeing that.
> Engineering leaders, in my view, are gardeners, not bottleneck-clearers. As I wrote above, the priority of leaders should be culture - psychological safety, pride of workmanship, compelling narratives, and such.
I've seen this sentiment go way too far at times. A leader's job is to make the organization more effective. Sometimes that means clearing a bottleneck, sometimes it is building a culture. If you think your job is that specific, you're one situation away from losing it.
> I understand the appeal. We’re engineers. We like building things. System dynamics lets you construct an argument by building a model. Provocative title aside, I do appreciate this style of argument. Some very thoughtful writing is done this way. It is explicit and engaging; it can even be visual or interactive.
No. You're not engineers, you are software developers.
While both may require a certain amount of skill and expertise, you're throwing around titles casually where it doesn't make sense. In most cases, engineers deal with the physical world and the laws around it, and the product of an engineering solution is normally physical/material.
Software development, particularly when it does not include a physical component, is intrinsically different because the fabrication aspect of the physical world applies a great deal less. For example, if you are at a critical design review of a jet engine, the refabrication and fixes of components may require huge amounts of time. In software development the output is code, and although person hours and work is involved in modifying/changing it, it has inherent plasticity.
This is why more iterative methodologies make a great deal more sense in software than do engineering oriented development processes.
As a fan of SD, I agree that modeling even just very specific human attributes is so incredibly difficult, and I'd say even impossible. Humans are rarely uniform.
There are some population patterns that can be loosely modelled for large sets, but those are also very loose approximations at best. And the smaller the set gets (e.g. the more individualism breaks through), the worse the model performs.
And software teams are generally too small, as any useful model would probably have to model individuals, or at least multiple stereotypes (but I'm reluctant to give those too much weight anyway).
Read anything about mysticism and see why that’s a lazy person's definition. In any case, the term is not appropriate as a description of “appeal to authority.”
The word's root just means "full of mystery", which is completely appropriate in this context -- describing people who accept the authority of fancy equations as a mystery, instead of delving deeper for a justification from first principles.
It's true that mysticism also has a more specific meaning i.e. describing particular strains of religious or philosophic thought and practice that are tied to the idea of "mystery" in a different way, but I see no reason this meaning should have a monopoly on the word, considering the etymology, see https://www.etymonline.com/word/mystic?ref=etymonline_crossr...
Words can have multiple meanings. And, some of those meanings can be based on ignorance, bias and sloppiness. It is fair to critique the use of words when they obscure actual meaning.
Ignorance of what? Bias against who? Religious mystics?
That's like saying it's wrong to use "charismatic" to mean "attractive" because that word needs to be reserved for religious charismatics, or that it's wrong to use the word "agnostic" to mean "noncommittal" because that word needs to be reserved for the religious gnostics and agnostics, or that it's wrong to have "developer evangelists" because that word needs to be reserved for religious evangelicals.
Also, an appeal to authority is "person X says Y, person X has a PhD, therefore Y". There is no "person X" here. The authority, if there is one, is a mysterious authority possessed by the form of the argument itself. Or, you might say, a "mystical" authority.
The author makes the point that it’s not more engineers (and the corresponding resource contention problems) causing slower delivery at large companies, but complexity.
However, one of the driving forces behind complexity is the number of engineers. Microservices largely exist as a way of enforcing boundaries to allow more people to work together.
I think the article has a number of good points, though suffers a bit from arguing against strawmen as other commenters point out.
I also disagree with the claim that "A system dynamics model, properly considered, is just an analogy dressed up in a bit of formalism." The main reason I use a system dynamics modeling tool is to reveal aspects of interactions in the system I was not aware of, and to provide some predictive power regarding how the system may change over time. This is not something a (mere) analogy is able to provide. You could counter-argue that a good system model is just a _really good analogy_ but an analogy nonetheless, but I think that is taking the concept of analogy too far. Why not just say that physical equations are also "just" analogies?
Hmm, medical scientists experiment on rats and then make medical predictions about humans based on analogy from rats to humans. At least, I classify this as an analogy, and wouldn't forbid analogies from having predictive power.
Re: "reveal aspects of interactions in the system I was not aware of" I think this is an interesting point. I definitely am willing to give persuasive force to e.g. theoretical microeconomics, which like system dynamics operates by analogy from a mathematical construct to humans. I think in principle I could believe a non-obvious prediction made by a system dynamics model where I felt that the mathematical constructs in the model were a good analogy to humans -- but the prediction would have to be robust, i.e. it shouldn't vanish if you change the parameters of the model or tweak the assumptions of the model.
Theoretical economists tackle this by making their results super general using abstract math and characterizing the solutions to systems of inequalities in the most general form that they can.
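Concretely, the robustness check I have in mind is just a parameter sweep: keep the model's structure fixed, vary the constants, and see whether the qualitative prediction survives. A sketch using a stand-in model (not anything from the article):

```python
# Hypothetical robustness check over a stand-in model: does the qualitative
# prediction ("the backlog eventually explodes") survive when the constants
# are jiggled? The model and numbers are made up for illustration only.

def backlog_after(days, merge_rate, deploy_capacity):
    backlog = 0.0
    for _ in range(days):
        backlog = max(0.0, backlog + merge_rate - deploy_capacity)
    return backlog

for merge_rate in (4.0, 5.0, 6.0):
    for deploy_capacity in (4.5, 5.5, 6.5):
        explodes = backlog_after(365, merge_rate, deploy_capacity) > 100
        print(f"merge={merge_rate}, deploy={deploy_capacity} -> explodes: {explodes}")

# If the answer flips with every small tweak of the constants, the "insight"
# was an artifact of the parameters, not of the system's structure.
```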
I wouldn't classify rats<->humans as an analogy in this context. Generalizing (too broadly, probably) premodern medical "science" was explicitly analogical in many ways, e.g. "we will use this herb that looks like an eye to cure your eye trouble" but that is obviously completely different than the experimental/genetic basis on which we use rats as a medical model for humans.
OTOH, since arguing over semantics is mostly a waste of time, if you want to call it an analogy, then fine, but I would say you need to drop your "mere" from the original article, which is doing a lot of the work in that argument, since clearly analogies in your view are quite powerful, have predictive power, etc. I hope this doesn't come across as overly pedantic, I just find thinking about these things very interesting.
Not sure I understand your point about such models needing to be robust in the face of different parameters. I was working on a model the other day which I discovered had very different behaviour based on the initial parameters in a way that will have direct impact on the software I'm writing/business process I'm developing, making it safer and more predictable. Maybe that outcome should have been obvious, but it wasn't to me (or at least, I suspected something like it, but wasn't able to reason through to the conclusion without the aid of a model). In this case, I consider the fact that different predictions were made based on altered initial parameters to be a strength, not a weakness. But going back to your point "you should only give it force if the stories it tells are plausible in light of your experience and knowledge," I agree and if the model were outputting very surprising and counter-intuitive predictions, I would start by assuming the model was wrong, not my intuitions.
While agreeing with the post, I wish SD software was more affordable. Major players (Stella, for example) charge $4k for a single license. With that entry barrier, it's no surprise that SD is still a niche thing for nerds that tend to fall in love with their models.
Imagine that in any government, non-profit, or commercial organization, people used some sort of well-known, extra user-friendly and smart SD tool for quickly building and visualizing their mental models (not just about software teams, but in any problem domain). Instead of clumsy drawings on whiteboards we'd have SD visualizations that anyone in the room can argue about and collaborate on.
The passage about constraints is technically incorrect due to Liebig's Law of the Minimum and the fact that every recursion has a fixed point.
Thankfully, this doesn't actually invalidate the point the same passage makes, but it sure does reframe it a lot.
I agree with the author that applying systems dynamics to software teams and organizations might be too rigid and inevitably fails when you treat humans as static and unchangeable. I personally despise single metric optimization.
However, there is value in applying systems dynamics to software teams and treating humans as probabilistic variables. And instead of maximizing a single deterministic metric, one can maximize the expectation, i.e. optimize for the average outcome. Modelling such a system is hard, and intuitions play more of a role here because our brains are well suited to probabilistic thinking (to a degree, for simple systems).
But I can see why leadership gurus don't prefer the probabilistic dynamics: it involves "luck", and asking leaders to do the optimal thing, i.e. stick to the maximum expected (average) outcome, isn't sexy and doesn't produce winners and losers. Instead, in a capitalistic system, all leaders are expected to engage in risky decision making, and every once in a while one leader wins, standing on the shoulders of a good software development team which also happened to be on a hot streak that quarter, whereas the other leaders are blamed for the poor performance of their equally good software development teams, which happened to be in a slump that quarter.
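A minimal Monte Carlo sketch of "optimize the expected outcome" versus "take the risky bet", with every distribution and constant invented for illustration:

```python
import random

def quarter_outcome(team_size, risky_bet, seed=None):
    """Toy model: each person's weekly output is noisy; a risky bet
    doubles the upside but sometimes wipes out most of the quarter.
    All distributions and constants are invented for illustration."""
    rng = random.Random(seed)
    output = sum(rng.gauss(1.0, 0.3) for _ in range(team_size)) * 13  # 13 weeks
    if risky_bet:
        return output * 2 if rng.random() < 0.4 else output * 0.2
    return output

def expected(team_size, risky_bet, trials=20_000):
    return sum(quarter_outcome(team_size, risky_bet, seed=i) for i in range(trials)) / trials

print("safe  :", round(expected(8, False), 1))
print("risky :", round(expected(8, True), 1))
# The risky bet produces the bigger wins (and the hot-streak "winners"),
# but the lower expectation -- which is the point about average outcomes.
```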
> But it blew their mind. It took on 'mystical' properties, as if we had distilled change management into pure Newtonian motion. I started seeing not only F = ma in memos on other issues, but also other equations like E = mc^2, trying to wrestle management lingo into relativistic conservation of energy.
Lesson learned, I guess.