
Leverage Points: Places to Intervene in a System - openfuture
http://donellameadows.org/archives/leverage-points-places-to-intervene-in-a-system/
======
dkarapetyan
I've recently started reading up on systems theory and cybernetic systems and,
contrary to what the names suggest, learning more about the theory does not
make you better at designing systems. You get much more attuned to how large
and complex systems fail, but there is no crank you can turn that will provide
insights and then let you implement changes that lead to positive progress.

No matter what you do, at the end of the day you still have to convince people
that what they're doing is probably wrong, and no one ever wants to hear that.
Cognitive dissonance and sunk-cost thinking almost always trump any kind of
analysis, and the system continues to operate as it always has until a large
enough shock shakes it and changes are made. Most often the changes are made
too late and great human misery is the result. The current climate crisis is
one prominent example that comes to mind; Uber and Zenefits are two others
where earlier interventions would have helped.

~~~
kevinr
I've been learning about systems theory for about five years now, and I've
been frustrated by the same things, but I've come through them to something
which I think provides a path forward.

I think changes which lead to positive progress are hard, and the bigger the
problem or the higher up Meadows's leverage list, the harder the changes are.
Cognitive dissonance and sunk cost thinking and other cognitive biases do
increase the challenge as well. But I do see examples where I and others were
able to make positive changes from our understanding of systems theory. I see
making these changes as a marathon, not a sprint, so I try to adopt an
attitude of celebrating every step of the way ("dayenu"), and I also look to
work like Nancy Leveson's[0], which tries to systematize the process of
turning a systems understanding into actionable feedback.

It's often perceived to be the case that earlier interventions would have led
to fewer losses, but it's important to adopt an understanding that the past is
past, and all we can do to move forward is to change our behavior now. As the
man says, the best time to plant a tree is twenty years ago, but the second-
best time is today.

[0]: [https://mitpress.mit.edu/books/engineering-safer-world](https://mitpress.mit.edu/books/engineering-safer-world)

~~~
dkarapetyan
I have that book. It is where I learned the term "socio-technical system". It
is a great book and I recommend it to people whenever I get the chance. It is
also much more than just safety engineering, and it is one of the clearer
expositions of cybernetic systems that I've read.

------
saosebastiao
One of the more interesting theory-driven rabbit holes I've dug into was the
bullwhip effect in inventory management, which was a failure mode model
popularized by Jay Forrester, who was Donella Meadows' mentor and colleague.
One of the interventions that can be taken to minimize susceptibility to the
bullwhip effect was to decrease information flow latency. In other words,
faster information about demand fluctuations improves the ability to respond
to them. This type of intervention is ranked #5 by Donella in this article.

The problem with that intervention is what happens _after_ you've improved
your information flows. Faster information flows about demand lead the
profit-seeking enterprise towards tighter inventory tolerances. They eliminate
"extra" safety stock that is no longer needed thanks to the faster
information flows. And it works out phenomenally well...for a while. But
strong demand fluctuations can still appear, and without equivalent
improvements to response mechanisms, the risks become more fat-tailed:
failures become less common but much worse in severity. In the context of
this article, I'm claiming that a leverage point ranked #1 (the profit
incentive) overpowered a leverage point ranked #5, undercutting its benefits.
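The latency effect is easy to demonstrate numerically. Here's a minimal sketch (my illustration, not from the original bullwhip literature) assuming i.i.d. Gaussian demand and the textbook order-up-to policy with a moving-average forecast; the forecast window `w` and lead time `L` are arbitrary parameters. Under this policy the order works out to `q_t = d_{t-1} + (L/w)(d_{t-1} - d_{t-w-1})`, so order variance grows with the lead time, i.e. with information latency:

```python
import random

random.seed(42)

def bullwhip_ratio(lead_time, window=10, periods=20000):
    """Ratio of order variance to demand variance under an
    order-up-to policy with a moving-average demand forecast.
    Theory predicts roughly 1 + 2L/w + 2L^2/w^2."""
    demand = [random.gauss(100, 10) for _ in range(periods)]
    orders = [
        demand[t - 1]
        + (lead_time / window) * (demand[t - 1] - demand[t - window - 1])
        for t in range(window + 1, periods)
    ]

    def var(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)

    return var(orders) / var(demand)

print(bullwhip_ratio(lead_time=1))  # ~1.2: fast information, mild whip
print(bullwhip_ratio(lead_time=5))  # ~2.5: slow information, big whip
```

Shrinking the lead time is exactly the "decrease information flow latency" intervention: the amplification ratio falls toward 1 as `L` goes to zero.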

The new topic du jour on HN seems to be self driving cars, and one of the many
claims is their ability to improve traffic, and one of the other claims is
their ability to improve safety. And interestingly, a systems model of these
claims would likely show situations that are suspiciously similar to a
bullwhip effect. In other words, it is a stateful model with latent
information flows (roadway conditions), physical responses to the information
flows (brakes), and buffers (space between vehicles) which protect from
failure due to information latency and reaction capability.

Self driving cars can improve upon baseline reaction times to changing road
conditions. They have more and better sensors and well known algorithms for
detecting dangerous situations. This fact isn't speculative at this point; we
already have some proof of it [0]. The question becomes: what do we do with
that improved information flow? Do we tighten buffer tolerances? If so, we
improve roadway capacity the majority of the time and possibly still reduce
the risk of accidents...but what happens to accident severity? Maybe traffic
throughput isn't the be-all objective we want it to be, and we should be
content to let that information-flow improvement result in increased safety
and traffic resilience instead.

[0] [http://www.nbcnews.com/tech/tech-news/tesla-autopilot-begins...](http://www.nbcnews.com/tech/tech-news/tesla-autopilot-begins-braking-wreck-driver-n700941)
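To make the buffer arithmetic concrete, here's a back-of-the-envelope sketch. All the numbers (1.5 s human reaction time, 0.2 s automated reaction time, 7 m/s² braking deceleration, 30 m/s cruising speed) are hypothetical round figures chosen for illustration, not measured values:

```python
def stopping_distance(speed_mps, reaction_s, decel_mps2=7.0):
    # reaction distance (constant speed) + braking distance (v^2 / 2a)
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

v = 30.0  # ~108 km/h
human = stopping_distance(v, reaction_s=1.5)
robot = stopping_distance(v, reaction_s=0.2)

# The freed-up margin is what the system can "spend": keep it as
# safety slack, or tighten following distance for throughput and
# accept that a failure of the information flow leaves no buffer.
print(round(human, 1), round(robot, 1), round(human - robot, 1))
# → 109.3 70.3 39.0
```

Under these assumptions the faster reaction frees about 39 m per vehicle at highway speed; the whole policy question is whether that margin becomes throughput or resilience.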

~~~
tlb
Society has a risk homeostasis, which partially compensates for improved
safety by taking more risks. Seat belts and air bags both resulted (over
generation time spans) in higher driving speeds. So the argument goes that
eventually self-driving cars will drive faster and closer, and overall traffic
deaths will remain constant.

The difference with self-driving cars is that the policies affecting risk (how
fast, how close) will be decided by vehicle makers, who will be very risk-
averse. Risk homeostasis is mainly driven by individuals thoughtlessly taking
risks. But when enterprises make deliberate decisions, like they do in
commercial air travel, safety can be improved indefinitely.
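A toy feedback loop (my sketch, not a model tlb proposes, with made-up coefficients) illustrates the homeostasis mechanism: if each driver nudges speed until perceived risk returns to a fixed set point, then a technology that halves risk-per-speed ends up roughly doubling the chosen speed while realized risk stays where it was:

```python
def settle(per_speed_risk, target_risk=1.0, steps=500):
    """Iterate a driver adjusting speed toward a fixed risk set point."""
    speed = 100.0
    for _ in range(steps):
        risk = per_speed_risk * speed / 100.0
        speed += 10.0 * (target_risk - risk)  # compensate toward set point
    return speed, per_speed_risk * speed / 100.0

baseline_speed, baseline_risk = settle(per_speed_risk=1.0)
airbag_speed, airbag_risk = settle(per_speed_risk=0.5)  # tech halves risk
print(round(baseline_speed), round(baseline_risk, 2))   # → 100 1.0
print(round(airbag_speed), round(airbag_risk, 2))       # → 200 1.0
```

The safety gain is fully absorbed by behavior. Breaking homeostasis requires changing who sets the set point, which is tlb's argument about risk-averse vehicle makers.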

~~~
Nomentatus
That, or the proliferation of ever larger numbers of large aircraft has kept
us in the usual homeostatic situation, but with an ever-growing pool, so that
the lives sacrificed in order to restore full attention to safety go much
further, spread over many more aircraft. Only once you've hit the maximum
number of airliners can you judge whether homeostasis has been eluded. Until
then, increasing safety is most likely just one more consequence of increased
economic scale (and its concomitant efficiency) rather than an independent
change. Same with the price of the tires.

------
cykod
There's a Wikipedia page on this that provides a nice overview -
[https://en.wikipedia.org/wiki/Twelve_leverage_points](https://en.wikipedia.org/wiki/Twelve_leverage_points)

