Whether it's about the economy at large, your own household, a supply chain, what have you - as soon as you optimize for efficiency by removing friction, you take all the slack/damping out of the system and become instantly more liable to catastrophic failure if some of your basic conditions change. Efficiency gives you a speed bonus, at the cost of increased risk / less resilience to unforeseen events.
Stewart Brand's concept of "Pace Layering" comes to mind for how to deal with this at a systemic level - https://jods.mitpress.mit.edu/pub/issue3-brand/release/2
In statistics, there is a slight variant of this thesis that is true in a precise formal sense: the tradeoff between efficiency and "robustness" (stability in a non-ideal situation).
For example, if you have a population sample, the most efficient way to estimate the population mean from your sample is the sample mean. But if some of the data are corrupted, you're better off with a robust estimator - in this case, a trimmed mean, where the extreme N% of high and low values are discarded.
The trimmed mean is less efficient in the sense that, if none of the data are corrupted, it discards information and is less accurate than the full mean. But it's more robust in the sense that it remains accurate even when a small-to-moderate % of the data are corrupted.
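To make that concrete, here's a quick sketch in plain Python (the numbers are made up for illustration): corrupt 5% of a sample and compare the plain mean with a 10% trimmed mean.

```python
import random
import statistics

def trimmed_mean(data, trim_frac=0.1):
    """Mean after discarding the most extreme trim_frac of values on each side."""
    data = sorted(data)
    k = int(len(data) * trim_frac)
    return statistics.mean(data[k:len(data) - k])

random.seed(0)
clean = [random.gauss(100, 10) for _ in range(1000)]  # true mean is 100

# Corrupt 5% of the sample with wildly wrong values
corrupted = clean[:]
for i in range(50):
    corrupted[i] = 1e6

print(statistics.mean(corrupted))    # dragged tens of thousands away from 100
print(trimmed_mean(corrupted, 0.1))  # stays close to 100
```

On clean data the trimmed mean is slightly worse (it throws away 20% of the sample); on corrupted data it's the plain mean that becomes useless.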
rather than robustness, i prefer to use the term resilience, a dynamic quality, since efficiency is also a dynamic quality. you can trade efficiency for resilience and vice versa (as the parent poster switched to later).
i should add that i don't entirely agree with the thesis of the article, which exhorts us to slow down, thereby trading efficiency away for resilience. there are a number of ways to add resilience (and trade away efficiency), and in some cases, slowing down might be the best, but it's certainly not the only, or best, option in most cases.
for housing, an example used in the article, we could add more housing to create resilience, which requires reducing friction, like increasing the throughput of permitting/inspections while generally reducing zoning/regulations.
Since the context here is that efficiency can remove layers of redundancy, thereby allowing disruptions to wreak more havoc - I believe that's what OP was getting at.
But inefficiency isn't necessarily more robust unless the extra bits serve some purpose.
It seems the 21st century is already seeing a more balanced emphasis on theory vs. real-world applications, though.
The kind of outlier-culling technique suggested by civilized is not recommended these days because it adds unprincipled choice points to what Andrew Gelman calls the 'Garden of Forking Paths' [1, 2]. Thus they are bad for hypothesis testing, which tends to be what most statisticians care about.
Additionally the technique obscures the relationship between the variance of the sample and the population variance if we do not have reliable knowledge of the population distribution; likewise for the mean if the mean is not close to the mode. These problems can be quite dramatic for long-tailed distributions.
1. Trimming for mean estimation, which removes extreme values in an algorithmic fashion
2. Subjective removal of outliers based on researcher judgment (this is the garden of forking paths Gelman talks about)
3. Estimating other distributional properties, such as the variance, with trimmed estimators
These are all different things and come with different theoretical and practical risks and benefits. Trimmed means are perfectly good statistical tools, although they have their limitations like anything else.
The choice of N used in cutting out the N% most extreme results is not determined by widely accepted statistical best practice. Hence it is a source of forks. The algorithm might be deterministic but the choice of this parameter isn't.
My discussion of distributional properties was another issue concerning this technique. You seem to have missed the point that dropping extreme points can also lead to biased estimates of the mean.
Ten years ago, dropping outliers was considered good practice in the social sciences. Today, it has become a reason for rejection in peer review. There are better techniques for dealing with noisy data, such as adding measurements to data points to measure "badness" that can then be adjusted for in a multi-level model.
Similarly, any estimator can be biased if its assumptions are violated, so I'm not sure why the potential bias of the trimmed mean in particular is an interesting point.
I'm sure that social science peer reviewers have their reasons for their methodological preferences, but trimmed means are great workhorses in other areas of science, like signal processing.
The critique strikes me as potentially valid in its subfield but a bit parochial if it is attempting generality.
I didn't reply to you, but to goodsector, who claimed that statisticians focus on efficiency at the expense of reliability. I dispute this.
> Essentially, a more efficient estimator, experiment, or test needs fewer observations than a less efficient one to achieve a given performance.
I think the comment is drawing a parallel to variance (better efficiency = lower variance). Still not exactly the same, I think, but pretty damn similar.
But, erring on the side of efficiency in this discussion is more like over-fitting, which implies an overly complex model. It's making your model too good for one situation, such that it fails to generalize. You'd rather pull back on accuracy and choose a simpler model, in the hopes that it's more resilient to novel observations.
There's another interesting aspect to this in that things that are failures from some perspectives may not be from others.
If stripping resiliency out of a company nets enough savings in the short term, it may still be profitable to the owners even if it's long-term fatal.
As a hypothetical example, let's say you take a company making $1M a year and trim $19M a year of costs out of it. The company lasts another 10 years and then collapses. You've netted an extra $190M out of that company, or nearly 200 years at their previous rate.
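Spelling out that (deliberately made-up) arithmetic:

```python
baseline_profit = 1_000_000    # the company's $1M/year as a going concern
cost_cuts = 19_000_000         # $19M/year of costs stripped out
years_until_collapse = 10

# Extra profit vs. leaving the company alone for those 10 years
extra = cost_cuts * years_until_collapse
print(extra)                        # $190M
print(extra // baseline_profit)     # years of profit at the old $1M/year rate
```

So the stripped company pays out the equivalent of 190 years of its former profits in a decade, and then dies.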
In that case, it's in your local interest to strip the company bare, even if it's not necessarily optimal for your partners, workers, society, or any other stakeholder in this wonderful interconnected world of ours. The benefits are concentrated, the costs are distributed, and there's no mechanism for connecting the two.
Even better, since you now don't actually even need to keep the company alive for 10 years since you've already got your profit, you can sell the company's assets now and increase your profit!
Except there's not a company in the world that this logic doesn't apply to (at some point). Asset-stripping has killed off a few old companies that were ready for it, sure. But it's also destroyed lives and communities that didn't deserve that.
There is more to life than money, and one of the things that this article is talking about is that we need to recognise that.
Does the owner of the company have a right to take risks with the business? It is a serious impairment of ownership if not, and will likely lead to more-stagnant societies. The entire engine of America’s superior prosperity (even at the individual level) has been based on risk taking, while stagnant systems have their own problems (Greece, Italy come to mind with pre-covid crises) and can be a vector of corruption as the principal-agent problem remains unsolved and those entrusted with the well being of the community often work to enrich and empower themselves instead.
This is not to say that we cannot say the community ought to have more of a say, this is merely to point out that there is a tradeoff that affects society in general. If we are to succeed in making this tradeoff it will be in part by better aligning the interests of business owners and the community, and we should be aware of how hard a problem that is when we go to attack it.
This is a story that America tells itself. It's not necessarily true.
And currently the USA is in enormous debt, partly because of the vast cost of bailing out its financial institutions and large corporations. "Risk-taking" is increasingly only being done by private individuals. Large American businesses are certainly not being exposed to the results of their risks - they're being bailed out, socialising the risk but privatising the reward.
In this case, if the community is shouldering the responsibility of bailing out companies that are in danger of collapsing, shouldn't there be some "impairment of ownership" as you put it? Aren't those communities entitled to ask that the company is run for their benefit too?
I'm not even sure it's necessarily an intrinsically bad pattern - there can be short-term opportunities and circumstances that make a strategy worthwhile for a number of years but not further. I think the issue broadly is twofold. First is the collateral damage - companies forming and dissolving is fine for the investors but murder for the employees, whose livelihoods and health insurance become precarious. Second is that we've applied this to the entire economy in a way that makes us incredibly vulnerable to systemic shocks - see America's toilet paper supply between the months of March and July. Again, on an individual company-wide basis, it might still have been more profitable for Procter & Gamble to do whatever the hell it is they did to make 2-ply an impossible technology to reproduce domestically in 2020, but on an economy-wide basis, the fact that Everyone did it was a goddamn disaster.
That's a problem with the social safety net, not a problem with companies failing.
A lot of companies are going to fail even without someone actively trying to drive them out of business.
Some industries just have collapses in demand and no longer do something that anyone wants. Some companies are just mismanaged.
We need to provide support for the employees who are victims of companies collapsing, not try to prevent any company from ever collapsing.
For an awful lot of people, it basically is - I mean, practically the entire field of finance and professional management would look at the example I gave and say, "that's exactly the right thing to do." Mitt Romney's entire professional career follows that principle, and it's largely seen as a net positive to his political career (as a Republican).
I agree with you, I think it's an enormously damaging philosophy, both to the bearer and to the rest of us, but again, it may be locally optimal.
1. Every person accrues $10
2. One person, "Bob", accrues $100, everyone else accrues $1
This is what I mean by local vs global interest - roughly, "in the interest of a given individual" vs "the best outcome across all individuals."
In some cases the old king of the conquered kingdom depended on his lords. 16th century France, or in other words France as it was at the time of writing of The Prince, is given by Machiavelli as an example of such a kingdom. These are easy to enter but difficult to hold.
When the kingdom revolves around the king, with everyone else his servant, then it is difficult to enter but easy to hold. The solution is to eliminate the old bloodline of the prince. Machiavelli used the Persian empire of Darius III, conquered by Alexander the Great, to illustrate this point and then noted that the Medici, if they think about it, will find this historical example similar to the "kingdom of the Turk" (Ottoman Empire) in their time – making this a potentially easier conquest to hold than France would be.
Machiavelli is saying the opposite of what you think he's saying.
He's saying that France's governmental structure makes it (relatively) easy to conquer. Because there are many quasi-independent, competing fiefs, an invader is not necessarily facing a unified front, and may in fact be able to recruit dissatisfied lords to their cause. But that doesn't make it the kind of place you'd want to rule, because once you've conquered it, it's (relatively) easy for another invader to conquer you for the same reasons.
In contrast, there were no fiefdoms in Persia. Unlike the lords in France, the regional rulers in Persia were chosen by the state, and picked for their loyalty. When invading Persia, you are far more likely to face a united front, making it (relatively) difficult to conquer. That said, once you've conquered it, it would be (relatively) easy to hold for the same reasons.
Otherwise, we are saying the same thing.
Systems theory has the concepts of "gain margin" and "phase margin" -- how much you can amplify feedback or delay feedback, respectively, before your self-adjusting feedback mechanism fails to find equilibrium and turns into an oscillator.
Even though most non-engineering systems don't fit the mathematical theory, the idea that only a finite amount of gain + delay is available, and that the two are somewhat inter-convertible, generalizes astoundingly well.
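A toy sketch of the idea (a discrete proportional feedback loop, nothing like the full continuous-time theory): pushing up either the gain or the delay tips the very same system from settling into oscillating.

```python
def settles(gain, delay, steps=200):
    """Simulate x[t+1] = x[t] - gain * x[t - delay]; True if it settles to 0."""
    history = [1.0] * (delay + 1)  # start displaced from the equilibrium at 0
    for _ in range(steps):
        nxt = history[-1] - gain * history[-1 - delay]
        if abs(nxt) > 1e6:
            return False  # diverged outright
        history.append(nxt)
    # Settled only if recent values are all near zero (catches sustained oscillation)
    return max(abs(v) for v in history[-10:]) < 1e-3

print(settles(gain=0.5, delay=0))  # True: modest gain, immediate feedback
print(settles(gain=2.5, delay=0))  # False: too much gain, it diverges
print(settles(gain=0.5, delay=5))  # False: same gain, but delayed feedback oscillates
```

The third case is the interesting one: a gain that was perfectly safe with immediate feedback becomes a growing oscillation once the correction arrives five steps late - the "only a finite amount of gain + delay is available" intuition.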
> Systems theory
I'm confused. Is it a law or a theory? And no, it can't be both. Laws are proved. Theories are unproven.
"Systems Theory" in above refers to a field of study, not a name for the idea they were discussing.
And your assertion about Laws and theories is incorrect.
It may be useful, also, for you to read these two pages:
The difference between Law and Theory (in scientific discourse) is not what you believe it to be.
1. A law gives you a relationship with no mechanism. It almost always appears as a mathematical formula. You know how the pieces change with respect to another, but not why.
2. A theory gives you a mechanism. It tells you why. On rare occasions, it will not provide quantifiable predictions, in which case it is a qualitative theory.
> Laws are proved. Theories are unproven.
This is not what is used to identify something as either a law or a theory in scientific discourse.
First, "laws" may be disproven, or falsified: Newton's Law of Gravity could actually be considered disproven as it is not accurate at all scales, but it's accurate enough within its scope to continue using it. It's considered a law in the sense that it matches empirical, observed data (within certain bounds). See  for details on that. So that, right there, is a flaw in your understanding.
Second, laws do not attempt to offer an explanation of the phenomenon they describe, they offer predictive value like "a ball dropped from 50 meters will, at time t, have velocity ...". Again, Newton's Laws do not explain why gravity works, only offering a model to calculate the effect of it. This brings us to theories.
Theories are, again, falsifiable via empirical evidence (like laws), but they offer an attempt at explanation. A Theory of Gravity would try to explain why objects are attracted to each other and why the mass affects the amount of attraction. The theory can be shown as false, but like a law it can only be shown to match empirical data. This is not the same as proven.
TLDR: both laws and theories attempt to predict, but only theories attempt to explain. Both are falsifiable, and neither is ever proven - each can only be shown to match empirical data (possibly within some constraints).
“Hypothesis. Theory. Law. These scientific words get bandied about regularly, yet the general public usually gets their meaning wrong.”
And the idea is also applicable to trajectories rather than singular points: if you change your starting point by an epsilon, would the trajectory be vastly similar or a bit different? or very different? cue in Lyapunov fun and Lipschitz continuity as a metric and, to a lesser extent, conditions for chaotic trajectories to emerge.
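A minimal illustration of that epsilon-perturbation idea, using the logistic map (a standard chaotic toy system, not anything specific from the thread):

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x), chaotic for r = 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-9)  # epsilon perturbation of the start

print(abs(a[5] - b[5]))                                 # still tiny early on
print(max(abs(x - y) for x, y in zip(a[30:], b[30:])))  # order 1: fully diverged
```

The gap between the two trajectories grows roughly exponentially (the positive Lyapunov exponent), so a billionth of a difference in the starting point swamps everything within a few dozen steps.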
In some systems you want to generate constant oscillations, so your system can be in stable perpetual dynamic equilibrium producing a nice sine wave.
Along these terms, I think equilibrium doesn't really make sense - it means stability in many definitions. From the top google result:
> Equilibrium is defined as a state of balance or a stable situation where opposing forces cancel each other out and where no changes are occurring. An example of equilibrium is in economics when supply and demand are equal. An example of equilibrium is when you are calm and steady.
Another take on this: (Stable vs unstable equilibrium)
sigh, someday we'll have auto-correct that just works. Doesn't even need to be AI, just use words in the current page. Heck just use contextual info such as the capitalization. Somebody, please?
In the list of anti-perfection patterns there's mechanical jitter.. a catastrophe avoiding relaxation.
If you make a production line that has perfect throughput with no buffers then you get fantastic efficiency and productivity right up until a single train hits a biter and is delayed by 30s.
Then you spend 3 hours trying to deal with every machine being out of sync with every other machine with constant start/stops :(
People often focus on one thing and overlook the sacrifice they're really making. Everything has a tradeoff.
Whether a cache/reserve helps or hurts efficiency is itself complicated and situational. Overfitting can make a system fragile, but it also depends on the relative costs that a blind pursuit of efficiency is chasing. If something is cheap enough, like data storage, there will be plentiful slack, because slimming it down is irrelevant to efficiency - why bother with custom 4KB chips when economies of scale make a 250MB one cheaper and better? It just isn't worth trying to cut corners there.
A laggard damped system would take longer to get into a "bad state" assuming the environment doesn't demand rapid changes as the baseline to survive. Bad state is relative as always - one can doom themselves both by leaping onto the "next big thing" which isn't and by sticking to the buggy whip and musket when others have cars and automatic rifles.
Topically, in the past week we've seen two giant companies, Adobe and Canon, lose unimaginable amounts of user data. If they had had backups, which are a form of redundancy, this would not have been a problem. But the backups were too expensive--too inefficient--and so now customer trust in their service is absolutely destroyed.
I dread the day when Google Maps, traffic stats, and Uber have actually delivered us the "perfect people transporter system", maximizing the heck out of existing infrastructure (cities, roads), and then the inevitable happens.
Systems become too big to fail.
I leave the eBay link because last I checked used copies on amazon were very pricey.
A system could be "more efficient at becoming stable," for example.
But if by "efficiency" we limit ourselves to mean "the time-cost of a set of actions," (as in the most efficient path is the one that takes the least time), we quickly encounter problems with maximizing usage of time and how that conflicts with unexpected work, which leads to the anti-stability you mentioned.
The way I think about it is that a 100% time-efficient process has zero time-flexibility. If you want to gain time-flexibility (e.g. the ability to pivot, or to work on different things too, or to introduce error bars to your calculations), you lose time-efficiency.
All of this efficiency increases financial stability. I suppose if we argue that I'm only referring to optimization and not efficiency, then perhaps it's not a great argument.
In this case, efficiency might be automating bill payments, but then you don't catch price changes, and depending on your other systems in place, you might miss overdrawing your account.
Buying in bulk is cool. If you buy a big package of paper towels, it's cheaper and you don't have to worry about running out. The fact that you have a cabinet full of paper towels isn't a big deal. But suppose you find a really great deal on a semi-trailer load of towels and stock up. Now you have your guest bedroom full of paper towels. The next week, your cousin Edgar's house burns down; you'd like to offer him and his wife a room to stay in temporarily, but you have all this paper in the way. You have lost some flexibility.
A bigger problem is, say, corporate supply chains. With just-in-time supply, you don't have to store inputs and can focus on producing outputs; it's very efficient. But then there's a pandemic or a fire in a factory somewhere, and the supply chain falls apart. Now your outputs are perhaps in greater demand, but you can't take advantage because you have no inputs. You're out of business for the duration. You can't flexibly respond.
The proper, though not easy, thing to do is have some primary sources of your materials and secondary sources to handle surges and supply issues with your primes. You may still have a production slowdown, but hopefully not a shutdown. And like with backups, if you don't use them you don't have them. You have to place orders with all sources, and use materials from all sources to ensure that they are in fact up to snuff. It'd suck to buy your RAM from Foo until a flood, and then find that BAR's RAM doesn't actually work (or doesn't work with your motherboards).
But, having a single source is more efficient than multiple sources---you can integrate tighter with their ordering system, packaging, product quirks. Until it stops working.
Having all of your sources in one country is more efficient. Until it doesn't work.
Managing all of your materials just-in-time is more efficient. Until a backhoe hits the gas main in the street outside and you can no longer get trucks into your factory and have to shut down the line for the duration.
A company with a weak (but not completely idiotic) supply chain will have a significant margin over a company paying extra for a strong supply chain. Until something goes wrong.
You do not have to live forever, so surviving an extinction-level event is more than the "right" level.
Being so brittle that the failure of one delivery kills you is less than the "right" level.
How long do you want to live? And when you die, how is it going to happen?
It is pleasing to see firms go out of business in an orderly fashion, such that they have the resources to shut up shop properly, and everyone in the business gets "onto the life-rafts". But it is more common to see businesses go down in a screaming heap, with huge debts and big messes for others to clean up.
When I was at business school I was taught that a firm should carry the correct amount of debt: enough to maximise the tax shield of interest payments, with no fat left over that a hostile acquirer could use to fund a takeover.
That was in 2007. I am quite sure the course teaches something different now!
"Sufficient" is dependent on a lot of factors, though. No one can tell you a definitive answer without knowing your system (your suppliers, your customers, your rates of production, your supply of capital to withstand a drop in production).
On the other hand, if you only buy your food day to day, that is certainly more like JIT logistics, prevents waste & storage space needs, etc., but it screws you if you can't leave your house and the stores get closed due to some... ahem... what might possibly happen that forces you to stay inside.
So it's always a matter of your frame of reference, I guess.
Depending on the cost of the insurance, that sounds to me like a drop in stability: you have infrequent periodic large payments to make instead of frequent smaller payments to make. If you had an unexpected expense arise near the time of the large insurance payment, your financial situation could get temporarily bad; if instead your insurance was small payments on a monthly basis, the unexpected expense would be easier to ride out.
[Note that I pay all of my large expenses in lump sums instead of in small trickles, but that's mostly psychology on my part not efficiency or optimization]
I would hope that the fragility of JIT supply chains was laid bare for everyone in the Covid crisis but I expect that lesson will soon be forgotten.
Add to that: if you optimize one of a set of orthogonal values, the others tend to decrease. And so you get all the people claiming there's an intrinsic relation between them, in the face of a world of evidence.
On the other hand, past a point, efficiency conflicts with robustness. To maximize efficiency, you become tightly coupled to some resource and again have a hard time if that resource dries up.
You could drive 5mph on back roads with less risk of course.
there's plenty examples of this kind of thing in engineering design situations. it's cheaper (i.e. more efficient usage of capital, at least in the short run) to not allocate resources for backups or allocate extra capacity in systems that isn't used 95% of the time. it's much more expensive to dig two spatially separated trenches to lay independent paths of fibre optic cable to a given building, but if you cough up the money for that inefficient redundant connection, your internet will have decreased risk of interruption by rogue backhoes. it's cheaper to not hire enough staff and get individuals in a team to over-specialise in their own areas of knowledge rather than having enough spare capacity and knowledge-sharing to be able to cover if people get sick, go on holiday or quit.
I think this concept misses capacity. In my opinion, it is crucial that you always leave some over-capacity to have stability (let's say, you run at most at 80% capacity). If you then increase your efficiency without sacrificing your buffer capacity, everything is fine. But as soon as you try to run at more than 80% capacity to be more efficient, the slightest problem can have devastating effects.
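The classic M/M/1 queueing formula makes that nonlinearity concrete (assuming random, independent arrivals and a single server - a big simplification, but the shape of the curve is the point):

```python
def mm1_wait(utilization, service_time=1.0):
    """Average queueing delay for an M/M/1 queue: service_time * rho / (1 - rho).
    Blows up as utilization approaches 100%."""
    rho = utilization
    return service_time * rho / (1 - rho)

for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"utilization {rho:.0%}: average wait = {mm1_wait(rho):.1f}x service time")
```

Going from 80% to 95% utilization roughly quintuples the average wait (4x to 19x the service time), and at 99% it's 99x - which is why the last 20% of "efficiency" is where the fragility lives.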
In general, learning how to do things better can produce efficiency and more stability too, if the way is better all around.
Sometimes you have to add a resistance (inefficiency!) in series with the output capacitor to achieve stability.
Yes, and those are the shitty things.
But when you want to improve them, make them non-shitty, you're facing a choice: stability, or high performance - pick one.
But efficiency itself is neither haste nor churn; in fact, the opposite.
There is little worse than a very efficient government.
To me, this very conversation is the approach we should be taking to the major problems du jour on the planet (treating it as an incredibly complex system with an infinite number of interacting variables, many of which we do not even know exist). But it seems as if once a system reaches a certain level of complexity, we lose the ability to even realize that it is in fact a complex system, and insist upon discussing it only in simplistic terms. Or maybe it's the fact that we are embedded within the system that makes it impossible to see.