No matter what you do, at the end of the day you still have to convince people that what they're doing is probably wrong, and no one ever wants to hear that. Cognitive dissonance and sunk-cost thinking almost always trump any kind of analysis, and the system continues to operate as it always has until a large enough shock shakes it and changes are made. Most often the changes come too late, and great human misery is the result. The current climate crisis is one prominent example that comes to mind; Uber and Zenefits are other examples where earlier intervention would have helped.
I think changes which lead to positive progress are hard, and the bigger the problem, or the higher up Meadows's leverage list, the harder the changes are. Cognitive dissonance, sunk-cost thinking, and other cognitive biases increase the challenge as well. But I do see examples where I and others were able to make positive changes from our understanding of systems theory. I see making these changes as a marathon, not a sprint, so I try to adopt an attitude of celebrating every step of the way ("dayenu"), and I also look to work like Nancy Leveson's, which tries to systematize the process of turning a systems understanding into actionable feedback.
In hindsight it often seems that earlier interventions would have led to fewer losses, but it's important to accept that the past is past, and all we can do to move forward is change our behavior now. As the saying goes, the best time to plant a tree was twenty years ago, but the second-best time is today.
This book is an attempt to apply systems theory to our political system. I recommend it as a concise explanation of a perspective that highlights the faults in our current way of thinking. What is needed is a common frame of reference for productive political discussion, and I feel systems theory provides that, if only the idea were more widespread.
Plenty of things are simply too big for one person to do.
The problem with that intervention is what happens after you've improved your information flows. Faster information about demand leads a profit-seeking enterprise toward tighter inventory tolerances: it eliminates the "extra" safety stock that the faster information flows have made unnecessary. And it works out phenomenally well...for a while. But strong demand fluctuations can still appear, and without equivalent improvements to the response mechanisms, the risks become more fat-tailed: failures become less common but much worse in severity. And in the context of this article, I'm claiming that a leverage point ranked at #1 (the profit incentive) overpowered a leverage point ranked at #5, undercutting its benefits.
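To make that concrete, here's a toy Python sketch (every parameter is invented): demand follows a noisy, autocorrelated path with rare shocks, and each morning stock is topped up to a days-old demand reading plus a safety buffer. The intent is to compare a slow-information/fat-buffer regime against a fast-information/lean-buffer regime, where the lean regime should fail on far fewer days but with a much deeper typical shortfall.

    import random

    random.seed(1)
    MEAN, PHI, SIGMA = 100.0, 0.9, 10.0   # AR(1) daily demand; units made up

    def simulate(info_lag, safety_stock, days=50_000):
        """Each morning, stock is topped up to the demand observed
        `info_lag` days ago plus `safety_stock`; the day's demand is then
        served from it. Returns (stockout days, mean and worst shortfall)."""
        level = MEAN                        # underlying autocorrelated demand
        history = [MEAN] * (info_lag + 1)   # demand observations, newest last
        shortfalls = []
        for _ in range(days):
            target = history[-1 - info_lag] + safety_stock
            level = MEAN + PHI * (level - MEAN) + random.gauss(0.0, SIGMA)
            demand = level * (3.0 if random.random() < 0.002 else 1.0)  # rare shock
            if demand > target:
                shortfalls.append(demand - target)
            history.append(demand)
        if not shortfalls:
            return 0, 0.0, 0.0
        return len(shortfalls), sum(shortfalls) / len(shortfalls), max(shortfalls)

    # Slow demand information needs a fat buffer; fast information lets the
    # enterprise run lean. Compare failure frequency against failure severity.
    for label, lag, stock in [("slow info, fat buffer ", 4, 40.0),
                              ("fast info, lean buffer", 0, 30.0)]:
        n, mean_s, worst = simulate(lag, stock)
        print(f"{label}: {n} stockout days, "
              f"mean shortfall {mean_s:.0f}, worst {worst:.0f}")

The specific numbers don't matter; the point is that the information-flow improvement and the buffer cut interact, trading many small failures for a few severe ones.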
The topic du jour on HN seems to be self-driving cars, and among the many claims made for them are that they will improve traffic and that they will improve safety. Interestingly, a systems model of these claims would likely show situations suspiciously similar to a bullwhip effect. In other words, it is a stateful model with latent information flows (roadway conditions), physical responses to those flows (brakes), and buffers (space between vehicles) which protect against failure caused by information latency and limited reaction capability.
Self-driving cars can improve on baseline human reaction times to changing road conditions: they have more and better sensors, and well-known algorithms for detecting dangerous situations. That isn't speculative at this point; we already have some proof of it. The question becomes: what do we do with that improved information flow? Do we tighten buffer tolerances? If so, we improve roadway capacity most of the time and possibly still reduce the risk of accidents...but what happens to accident severity? Maybe traffic throughput isn't the be-all and end-all objective we want it to be, and we should be content to let that information-flow improvement buy increased safety and traffic resilience instead.
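Here's a back-of-the-envelope kinematic sketch of that choice, with every parameter invented: 30 m/s cruising speed, a lead vehicle forced into a hard 20 m/s² stop (say, debris), an ordinary 8 m/s² service brake, and a 1.5 s human versus 0.3 s automated reaction time.

    def follow(gap, reaction, speed=30.0, lead_decel=20.0,
               foll_decel=8.0, dt=0.001):
        """The lead vehicle is forced into a hard stop; the follower notices
        after `reaction` seconds, then brakes normally. Returns the closing
        speed at impact (None if no impact) and the lane throughput implied
        by the following gap. All numbers are toy values."""
        lead_v = foll_v = speed
        sep, t = gap, 0.0
        throughput = 3600.0 * speed / (gap + 4.5)   # 4.5 m ~ one car length
        while foll_v > 0.0:
            lead_v = max(0.0, lead_v - lead_decel * dt)
            if t >= reaction:                        # follower starts braking
                foll_v = max(0.0, foll_v - foll_decel * dt)
            sep -= (foll_v - lead_v) * dt
            if sep <= 0.0:
                return foll_v - lead_v, throughput
            t += dt
        return None, throughput

    for label, reaction, gap in [("human reactions, wide gap", 1.5, 50.0),
                                 ("automated, tightened gap ", 0.3, 10.0),
                                 ("automated, gap kept wide ", 0.3, 50.0)]:
        impact, vph = follow(gap=gap, reaction=reaction)
        outcome = "no impact" if impact is None else f"impact at {impact:.1f} m/s"
        print(f"{label}: {outcome}, ~{vph:.0f} vehicles/hour/lane")

With these made-up numbers, tightening the gap buys several times the lane throughput but keeps a violent failure mode on the table, while spending the same reaction-time improvement on an unchanged gap eliminates the impact entirely. The real system is far messier, but the shape of the tradeoff is the point.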
The difference with self-driving cars is that the policies affecting risk (how fast, how close) will be set by the vehicle makers, who will be very risk-averse. Risk homeostasis is mainly driven by individuals thoughtlessly taking risks. But when enterprises make deliberate decisions, as they do in commercial air travel, safety can be improved indefinitely.
But they aren't the only players in the system. Traffic is a political problem, and how politicians respond to traffic regularly becomes a campaign issue. I have no trouble imagining a world where a politician campaigns on requiring self-driving car manufacturers to standardize on rules that reduce congestion.