I've found "efficiency as the opposite of stability" a very powerful concept to think about - even though it's fairly simple, it seems to be almost a fundamental law. Whether it's about the economy at large, your own household, a supply chain, what have you - as soon as you optimize for efficiency by removing friction, you take all the slack/damping out of the system and become instantly more liable to catastrophic failure if some of your basic conditions change. Efficiency gives you a speed bonus, at the cost of increased risk / less resilience to unforeseen events.

Stewart Brand's concept of "Pace Layering" comes to mind for how to deal with this at a systemic level - https://jods.mitpress.mit.edu/pub/issue3-brand/release/2

> efficiency as the opposite of stability

In statistics, there is a slight variant of this thesis that is true in a precise formal sense: the tradeoff between efficiency and "robustness" (stability in a non-ideal situation). For example, if you have a population sample, the most efficient way to estimate the population mean from your sample is the sample mean. But if some of the data are corrupted, you're better off with a robust estimator - in this case, a trimmed mean, where the extreme N% of high and low values are discarded.

The trimmed mean is less efficient in the sense that, if none of the data are corrupted, it discards information and is less accurate than the full mean. But it's more robust in the sense that it remains accurate even when a small-to-moderate % of the data are corrupted.
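A quick sketch of that tradeoff (my own toy example, with an illustrative 10% trim and made-up corruption, not any canonical dataset):

```python
import random
import statistics

def trimmed_mean(data, proportion=0.1):
    # Drop the lowest and highest `proportion` of values, average the rest
    n = len(data)
    k = int(n * proportion)
    return statistics.fmean(sorted(data)[k:n - k])

random.seed(42)
sample = [random.gauss(100, 10) for _ in range(1000)]

corrupted = sample[:]
for i in range(20):          # corrupt 2% of the data with wildly wrong values
    corrupted[i] = 10_000

print(statistics.fmean(sample))     # close to 100
print(statistics.fmean(corrupted))  # dragged far away from 100 by the bad values
print(trimmed_mean(corrupted))      # still close to 100
```

On clean data the trimmed mean is slightly worse (it throws away 20% of the information); on corrupted data it barely notices.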
I stumbled on "stability" too, because it's a static quality. Rather than robustness, I prefer to use the term resilience, a dynamic quality, since efficiency is also a dynamic quality. You can trade efficiency for resilience and vice versa (as the parent poster switched to later).

Edit: I should add that I don't entirely agree with the thesis of the article, which exhorts us to slow down, thereby trading efficiency away for resilience. There are a number of ways to add resilience (and trade away efficiency), and in some cases slowing down might be the best of them, but it's certainly not the only, or the best, option in most cases. For housing, an example used in the article, we could add more housing to create resilience, which requires reducing friction, like increasing the throughput of permitting/inspections while generally reducing zoning/regulations.
 I like resilience better as well - here it just happens that the technical terms of statistics match up fairly nicely with what we're trying to say.
 akendo on Aug 22, 2020 Do you have any reference or source of this statement? Just asking out of curiosity.
 I'm completely ignorant on this topic, so I apologize for asking what must be an extremely stupid question to you, but: what makes stability a static quality whereas resilience is a dynamic quality? Are these statistical definitions that I can look up somewhere?
Not OP, but my take is that stability is usually defined as a base state that will continue into perpetuity unless some outside force disrupts it. Resiliency is more closely defined as the ability to recover from disruptions back to the base state quickly.

Since the context here is that efficiency can remove layers of redundancy, therefore allowing disruptions to wreak more havoc - I believe that's what OP was getting at.
Another example would be forward error correction (adding parity bits to improve robustness at the expense of efficiency). But inefficiency isn't necessarily more robust unless the extra bits serve some purpose.
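To make that concrete - using a toy 3x repetition code rather than a real parity-based FEC scheme, since repetition is the simplest code that can actually correct (not just detect) a flipped bit:

```python
def encode(bits):
    # Triple every bit: 3x the bandwidth cost, i.e. deliberately inefficient
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(coded):
    # Majority vote over each triple survives any single flip per triple
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

message = [1, 0, 1, 1]
sent = encode(message)
sent[4] ^= 1                      # channel noise flips one bit in transit
assert decode(sent) == message    # the redundancy absorbs the error
```

The extra bits are "inefficiency with a purpose": pure padding would cost the same bandwidth and buy no robustness at all.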
I may be wrong, but it seems to me that 20th century (theoretical) statistics research overemphasized efficiency at the expense of robustness. My guess is that this has to do with the (over-)mathematization of statistics in the past century, as opposed to a more empirical/engineering viewpoint. Efficiency typically only holds under extremely narrow (and often impossible to check) assumptions, which is great for mathematicians proving theorems and creating theories of efficiency. On the other hand, robustness is ideally about unknown unknowns and weak assumptions, which is hard to deal with mathematically.

It seems the 21st century is already seeing a more balanced emphasis on theory vs. real-world applications, though.
Not my impression. The kind of outlier-culling technique suggested by civilized is not recommended these days because it adds unprincipled choice points to what Andrew Gelman calls the 'Garden of Forking Paths' [1, 2]. Thus they are bad for hypothesis testing, which tends to be what most statisticians care about.

Additionally, the technique obscures the relationship between the variance of the sample and the population variance if we do not have reliable knowledge of the population distribution; likewise for the mean if the mean is not close to the mode. These problems can be quite dramatic for long-tailed distributions.
This post seems to be conflating a few different things:

1. Trimming for mean estimation, which removes extreme values in an algorithmic fashion

2. Subjective removal of outliers based on researcher judgment (this is the garden of forking paths Gelman talks about)

3. Estimating other distributional properties, such as the variance, with trimmed estimators

These are all different things and come with different theoretical and practical risks and benefits. Trimmed means are perfectly good statistical tools, although they have their limitations like anything else.
I made two separate points.

The choice of N used in cutting out the N% most extreme results is not determined by widely accepted statistical best practice. Hence it is a source of forks. The algorithm might be deterministic, but the choice of this parameter isn't.

My discussion of distributional properties was another issue concerning this technique. You seem to have missed the point that dropping extreme points can also lead to biased estimates of the mean.

Ten years ago, dropping outliers was considered good practice in the social sciences. Today, it has become a reason for rejection in peer review. There are better techniques for dealing with noisy data, such as adding measurements to data points to measure "badness" that can then be adjusted for in a multi-level model.
All but the simplest statistical estimators have researcher degrees of freedom (certainly including multilevel models), so it seems arbitrary to criticize the trimmed mean in particular for that "fault".

Similarly, any estimator can be biased if its assumptions are violated, so I'm not sure why the potential bias of the trimmed mean in particular is an interesting point. I'm sure that social science peer reviewers have their reasons for their methodological preferences, but trimmed means are great workhorses in other areas of science, like signal processing. The critique strikes me as potentially valid in its subfield but a bit parochial if it is attempting generality.
I don't deny the technique has its uses. The point is that it is a poor technique to use if your goal is hypothesis testing, which, as I said, is what most statisticians care about.

I didn't reply to you, but to goodsector, who claimed that statisticians focus on efficiency at the expense of robustness. I dispute this.
 I agree! Statisticians have largely led the development of robust methods, so I don't see how they can be characterized as ignoring the concern.
 This also exists in control theory as the tradeoff between performance and robustness!
 Wouldn't this be better described as a tradeoff between accuracy and robustness?Interesting concept.
It's a different aspect of the same thing. If B can perform better than A given the same data, it usually means B can perform equal to A with less data. From Wikipedia:

> Essentially, a more efficient estimator, experiment, or test needs fewer observations than a less efficient one to achieve a given performance.
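A small simulation of that definition (my own sketch; on Gaussian data the sample median is a less efficient estimator of the center than the sample mean, needing roughly 57% more observations for the same precision):

```python
import random
import statistics

random.seed(1)

def estimator_spread(estimator, n, trials=2000):
    # Standard deviation of the estimate across many repeated samples:
    # smaller spread = more precision from the same amount of data
    estimates = [estimator([random.gauss(0, 1) for _ in range(n)])
                 for _ in range(trials)]
    return statistics.pstdev(estimates)

spread_mean = estimator_spread(statistics.fmean, n=100)
spread_median = estimator_spread(statistics.median, n=100)
# Same data, but the median's estimates scatter more than the mean's
print(spread_mean, spread_median)
```

The flip side, per the thread, is that the "inefficient" median barely moves if you corrupt a few samples, while the mean can be dragged arbitrarily far.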
 Aha. That makes sense.
 it's called bias vs. variance tradeoff, or over-fitting, in stats/machine learning lingo.
No, it isn't. The b/v tradeoff is not the same thing. The efficiency of an estimator is different from its bias.
> The efficiency of an estimator is different from its bias.

I think the comment is drawing a parallel to variance (better efficiency = lower variance). Still not exactly the same, I think, but pretty damn similar.
 They're related in that less complex models will degrade more gracefully when making predictions on novel anomalies, and that in general model complexity drives the bias-variance trade-off.But, erring on the side of efficiency in this discussion is more like over-fitting, which implies an overly complex model. It's making your model too good for one situation, such that it fails to generalize. You'd rather pull back on accuracy and choose a simpler model, in the hopes that it's more resilient to novel observations.
That's a nice way to think about it, and it reminds me of Nassim Taleb's "antifragile" thesis [1]. Basically, the world is more random than you think, and to operate rationally under uncertainty, you need to be open-minded about opportunities and risks with huge asymmetries. Fragile systems are often very successful for a long time because they ignore hidden risks and then collapse due to the unexpected.
> Fragile systems are often very successful for a long time because they ignore hidden risks and then collapse due to the unexpected.

There's another interesting aspect to this, in that things that are failures from some perspectives may not be from others. If stripping resiliency out of a company nets enough savings in the short term, it may still be profitable to the owners even if it's long-term fatal.

As a hypothetical example, let's say you take a company making $1M a year and trim $19M a year of costs out of it. The company lasts another 10 years and then collapses. You've netted an extra $190M out of that company, or nearly 200 years of profit at their previous rate. In that case, it's in your local interest to strip the company bare, even if it's not necessarily optimal for your partners, workers, society, or any other stakeholder in this wonderful interconnected world of ours. The benefits are concentrated, the costs are distributed, and there's no mechanism for connecting the two.
Even better, why not get the company to borrow against those 10 years of income, and pay you your profit now? Even better: since you now don't actually even need to keep the company alive for 10 years, having already got your profit, you can sell the company's assets now and increase your profit!

Except there's not a company in the world that this logic doesn't apply to (at some point). Asset-stripping has killed off a few old companies that were ready for it, sure. But it's also destroyed lives and communities that didn't deserve that. There is more to life than money, and one of the things this article is talking about is that we need to recognise that.
The key difference here is that the “communities” are not recognized with any particular ownership interest.

Does the owner of the company have a right to take risks with the business? It is a serious impairment of ownership if not, and will likely lead to more-stagnant societies. The entire engine of America’s superior prosperity (even at the individual level) has been based on risk taking, while stagnant systems have their own problems (Greece and Italy come to mind, with pre-covid crises) and can be a vector of corruption, as the principal-agent problem remains unsolved and those entrusted with the well-being of the community often work to enrich and empower themselves instead.

This is not to say that we cannot say the community ought to have more of a say; this is merely to point out that there is a tradeoff that affects society in general. If we are to succeed in making this tradeoff it will be in part by better aligning the interests of business owners and the community, and we should be aware of how hard a problem that is when we go to attack it.
> The entire engine of America’s superior prosperity (even at the individual level) has been based on risk taking

This is a story that America tells itself. It's not necessarily true. And currently the USA is in enormous debt, partly because of the vast cost of bailing out its financial institutions and large corporations. "Risk-taking" is increasingly only being done by private individuals. Large American businesses are certainly not being exposed to the results of their risks - they're being bailed out, socialising the risk but privatising the reward.

In this case, if the community is shouldering the responsibility of bailing out companies that are in danger of collapsing, shouldn't there be some "impairment of ownership" as you put it? Aren't those communities entitled to ask that the company is run for their benefit too?
rini17 on Aug 22, 2020 The option to take risks with the business is not a right but a privilege. It brings prosperity only if exercised carefully. It is exercised less and less carefully, with more and more arrogant excuses, and sadly, it will be the USA's undoing.
 indigochill on Aug 21, 2020 Although I lean towards seeing killing the company as short-sighted, suppose you then invest that money back into a new venture. You have far more capital than you would have had you let the business remain healthy. This may give you even better value in the long-run (say 200 years) if you let this new venture live. But if you kill this venture too and invest its earnings into yet a third venture...
Right - and I think that's generally what you'd see people doing.

I'm not even sure it's necessarily an intrinsically bad pattern - there can be short-term opportunities and circumstances that make a strategy worthwhile for a number of years but not further. I think the issue broadly is twofold. First is the collateral damage - companies forming and dissolving is fine for the investors but murder for the employees, whose livelihoods and health insurance become precarious. Second is that we've applied this to the entire economy in a way that makes us incredibly vulnerable to systemic shocks - see America's toilet paper supply between the months of March and July. Again, on an individual company basis, it might still have been more profitable for Procter & Gamble to do whatever the hell it is they did to make 2-ply an impossible technology to reproduce domestically in 2020, but on an economy-wide basis, the fact that Everyone did it was a goddamn disaster.
> companies forming and dissolving is fine for the investors but murder for the employees, whose livelihoods and health insurance become precarious

That's a problem with the social safety net, not a problem with companies failing. A lot of companies are going to fail even without someone actively trying to drive them out of business. Some industries just have collapses in demand and no longer do something that anyone wants. Some companies are just mismanaged.

We need to provide support for the employees who are victims of companies collapsing, not try to prevent any company from ever collapsing.
 eecc on Aug 22, 2020 Yup, but that’s achieved via taxation, currently one of the costs that are being trimmed in the 19M/y example above... It’s a case of having your cake and eating it too
 roughly on Aug 22, 2020 Completely agree with this.
dasudasu on Aug 21, 2020 What number of companies still operate in the same form for several decades? A company dying isn't necessarily a negative. The assets don't get set on fire; the people working there don't disappear into a black hole. Investors might get fleeced, but overall the assets and people are put to other ends, from which the investors in those ends benefit. The worst outcome is the zombie companies that only exist for the sake of existing.
stainforth on Aug 22, 2020 Yes, invest in a mega yacht, or Juicero, or Theranos, etc.
 Only if you define local interest as purely financially motivated. I am not even talking about the ethics, but even social status is not only dependent on wealth. We are much more than a bank account number, thinking that we are is more or less a pathological disease, because rationally it makes no sense.
> Only if you define local interest as purely financially motivated <...> social status is not only dependent on wealth

For an awful lot of people, it basically is - I mean, practically the entire field of finance and professional management would look at the example I gave and say, "that's exactly the right thing to do." Mitt Romney's entire professional career follows that principle, and it's largely seen as a net positive to his political career (as a Republican).

I agree with you - I think it's an enormously damaging philosophy, both to the bearer and to the rest of us - but again, it may be locally optimal.
I don't know; I am not a fan of Mitt Romney, but his social status seemed to be more dependent on being a successful governor and managing the Olympics. Both are net positives for society; his business record was actually his weakness.
 xncl on Aug 22, 2020 Could you help me understand this a little better and define what you mean by local interest?
I'm not using a rigorous definition, but essentially, consider two outcomes, each affecting 100 people:

1. Every person accrues $10

2. One person, "Bob", accrues $100, everyone else accrues $1

Generally speaking, outcome 1 has higher overall benefits, but outcome 2 is better for Bob. If Bob is the decision-maker, it's in Bob's interest to pick outcome 2, even if it's globally worse.

This is what I mean by local vs global interest - roughly, "in the interest of a given individual" vs "the best outcome across all individuals."
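Spelling out the arithmetic (toy numbers from the comment):

```python
bob = 0                          # Bob is person 0
outcome_1 = [10] * 100           # everyone accrues $10
outcome_2 = [100] + [1] * 99     # Bob accrues $100, everyone else $1

print(sum(outcome_1), sum(outcome_2))   # global totals: 1000 vs 199
print(outcome_1[bob], outcome_2[bob])   # Bob's take: 10 vs 100
# Globally, outcome 1 dominates; locally (for the decision-maker), outcome 2 does.
```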
rytill on Aug 22, 2020 With respect to your 200 years figure, $190M also nets $3.8M/yr investing very conservatively. Even $50M once would usually be valued higher than $1M/yr forever, in a time-value-of-money sense.
 Behold! Private Equity
I wrote a post very much on this subject: https://wearenotsaved.com/2020/03/27/the-fragility-of-effici...
 That'd be a good post for HN, especially since the topic of efficiency/robustness has come up a few times recently.
 One of my main takeaways from Antifragile was that the people often involved with making processes efficient have no business being in that position. He was right to label management science as quackery and practitioners as charlatans.
Reminds me of Machiavelli's comments on the French state (many small warlords) vs. the Ottoman state (single supreme leader). The French state was less efficient, but more resilient and difficult to conquer, while the Ottoman state had more efficiency but was highly fragile.

In some cases the old king of the conquered kingdom depended on his lords. 16th century France, or in other words France as it was at the time of the writing of The Prince, is given by Machiavelli as an example of such a kingdom. These are easy to enter but difficult to hold. When the kingdom revolves around the king, with everyone else his servant, then it is difficult to enter but easy to hold; the solution is to eliminate the old bloodline of the prince. Machiavelli used the Persian empire of Darius III, conquered by Alexander the Great, to illustrate this point, and then noted that the Medici, if they think about it, will find this historical example similar to the "kingdom of the Turk" (Ottoman Empire) in their time – making this a potentially easier conquest to hold than France would be.
> The French state was less efficient, but more resilient and difficult to conquer, while the Ottoman state had more efficiency but was highly fragile.

Machiavelli is saying the opposite of what you think he's saying.

He's saying that France's governmental structure makes it (relatively) easy to conquer. Because there are many quasi-independent, competing fiefs, an invader is not necessarily facing a unified front, and may in fact be able to recruit dissatisfied lords to their cause. But that doesn't make it the kind of place you'd want to rule, because once you've conquered it, it's (relatively) easy for another invader to conquer you for the same reasons.

In contrast, there were no fiefdoms in Persia. Unlike the lords in France, the regional rulers in Persia were chosen by the state, and picked for their loyalty. When invading Persia, you are far more likely to face a united front, making it (relatively) difficult to conquer. That said, once you've conquered it, it would be (relatively) easy to hold for the same reasons.
 French states are not easy to conquer, they are easy to enter, for the reasons you’ve stated.Otherwise, we are saying the same thing.
 So France was its own Vietnam. Fitting.
It is a fundamental law.

Systems theory has the concepts of "gain margin" and "phase margin" -- how much you can amplify feedback or delay feedback, respectively, before your self-adjusting feedback mechanism fails to find equilibrium and turns into an oscillator.

Even though most non-engineering systems don't fit the mathematical theory, the idea that only a finite amount of gain + delay is available, and that the two are somewhat inter-convertible, generalizes astoundingly well.
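A toy illustration of that gain/delay budget (my own minimal discrete-time model, not a textbook system): a loop that corrects a fraction (the gain) of an error it measured `delay` steps ago.

```python
def simulate(gain, delay, steps=60):
    # x[t+1] = x[t] - gain * x[t - delay]: feedback based on a stale measurement
    x = [1.0] * (delay + 1)           # start off-target, with a flat history
    for t in range(delay, delay + steps):
        x.append(x[t] - gain * x[t - delay])
    return x

calm = simulate(gain=0.5, delay=0)    # no delay: converges smoothly toward 0
wild = simulate(gain=0.9, delay=3)    # same idea with delay: growing oscillation
print(abs(calm[-1]), max(abs(v) for v in wild))
```

With zero delay, a modest gain quietly settles; add a few steps of delay and the same kind of loop overshoots, reverses, and rings itself apart - the gain "budget" shrinks as delay grows.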
Do you happen to know a good introduction to that? I'd love to read up more on it!
 Steve Brunton, a University of Washington professor has a sweet lecture series called the Control Bootcamp that dives deep into control fundamentals while still being approachable. Great production values too: https://www.youtube.com/watch?v=Pi7l8mMjYVE&list=PLMrJAkhIeN...
p-funk on Aug 21, 2020 This youtube channel has (seemingly good quality) Khan Academy style lecture videos on control systems, if you just want a quick introduction to the concepts: https://www.youtube.com/watch?v=ThoA4amCAX4
 dboreham on Aug 21, 2020
 Yeah, as a controls engineer I’ve worked on systems where the main requirement was not gain or phase margin, but a root sum square of the two. Nichols plots drive home the idea that it’s not really the margin at two discrete points that matter, but the close approach to the critical point.
 > It is a fundamental law.> Systems theoryI'm confused. Is it a law or a theory? And no, it can't be both. Laws are proved. Theories are unproven.
ska on Aug 21, 2020 You are confused, twofold:

"Systems Theory" above refers to a field of study, not a name for the idea they were discussing.

And your assertion about laws and theories is incorrect.
Jtsummers on Aug 21, 2020 Well, those two statements aren't referring to the same thing. The law they reference is something found in Systems Theory.

It may be useful, also, for you to read these two pages:

https://en.wikipedia.org/wiki/Scientific_law
https://en.wikipedia.org/wiki/Scientific_theory

The difference between Law and Theory (in scientific discourse) is not what you believe it to be.
 > The difference between Law and Theory (in scientific discourse) is not what you believe it to be.ELI5?
Jtsummers' explanation is correct, but I wanted to weigh in with an easy rule of thumb, which you may find easier to remember:

1. A law gives you a relationship with no mechanism. It almost always appears as a mathematical formula. You know how the pieces change with respect to one another, but not why.

2. A theory gives you a mechanism. It tells you why. On rare occasions, it will not provide quantifiable predictions, in which case it is a qualitative theory.
Jtsummers on Aug 22, 2020 Well, I'm not interested in explaining this to a five-year-old, but I'll treat you as an adult and use words over two syllables.

> Laws are proved. Theories are unproven.

This is not what is used to identify something as either a law or a theory in scientific discourse.

First, "laws" may be disproven, or falsified: Newton's Law of Gravity could actually be considered disproven, as it is not accurate at all scales, but it's accurate enough within its scope to continue using it. It's considered a law in the sense that it matches empirical, observed data (within certain bounds). See [0] for details on that. So that, right there, is a flaw in your understanding.

Second, laws do not attempt to offer an explanation of the phenomenon they describe; they offer predictive value, like "a ball dropped from 50 meters will, at time t, have velocity ...". Again, Newton's Laws do not explain why gravity works, only offering a model to calculate its effect. This brings us to theories.

Theories are, again, falsifiable via empirical evidence (like laws), but they offer an attempt at explanation. A Theory of Gravity would try to explain why objects are attracted to each other and why mass affects the amount of attraction. The theory can be shown to be false, but like a law it can only be shown to match empirical data. This is not the same as proven.

TLDR: The distinguishing characteristic is that both attempt to predict, but theories attempt to explain. Both are falsifiable, and neither is considered proven, only to match empirical data (possibly within some constraints).
igravious on Aug 21, 2020 Your confusion is understandable. This misunderstanding surfaces every now and then on HN.

“Hypothesis. Theory. Law. These scientific words get bandied about regularly, yet the general public usually gets their meaning wrong.”

https://www.scientificamerican.com/article/just-a-theory-7-m...
 I think this is probably better characterized as "efficiency erodes resilience". You can have stability if there are no perturbations. However, if there are, and you have optimized for a regime where there are not, you are very exposed to risk. This is pretty much Table's notion of antifragility as well as the study of resilience engineering.
 There's an engineering version of "stable" that might be useful here to draw lines between the three or so different concepts being discussed. A "stable" system is one that will return to the same resting state when perturbed. One can have "equilibrium" in an unstable system, e.g. balancing a broomstick on one's hand, but the system will not return to that equilibrium if perturbed.
This idea also helps when thinking about local/global optimization (I'd argue most of what we do is local optimization toward minima, and what challenges us is when the assumptions about the locality of the phenomenon are challenged enough that we can't guarantee that stability).

And the idea is also applicable to trajectories rather than singular points: if you change your starting point by an epsilon, would the trajectory be vastly similar, or a bit different, or very different? Cue in Lyapunov fun and Lipschitz continuity [0] as a metric and, to a lesser extent, conditions for chaotic trajectories to emerge.
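A minimal illustration of that epsilon-sensitivity (using the logistic map at r=4, a standard chaotic example of my own choosing, not something from the linked reference):

```python
def logistic(x, r=4.0):
    # At r=4 the logistic map is chaotic: nearby trajectories separate exponentially
    return r * x * (1 - x)

a, b = 0.2, 0.2 + 1e-9   # two starting points an epsilon apart
max_gap = 0.0
for step in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

print(max_gap)   # the 1e-9 initial gap grows to a macroscopic separation
```

For a non-chaotic parameter (say r=2.5) the same epsilon shrinks instead of growing, which is the trajectory-level version of "stable vs. unstable".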
 It's not a binary opposition. In control theory you can make a system unconditionally stable, or you can allow a little oscillation around the ideal, or you can make it unconditionally unstable so the tiniest hint of noise drives it to an extreme.In some systems you want to generate constant oscillations, so your system can be in stable perpetual dynamic equilibrium producing a nice sine wave.
SubuSS on Aug 21, 2020 I think you mean resilient when you say stable: A stable system NEED not get back to resting state by itself. A resilient one does.

Along these terms, I think equilibrium doesn't really make sense - it means stability in many definitions. From the top google result:

> Equilibrium is defined as a state of balance or a stable situation where opposing forces cancel each other out and where no changes are occurring. An example of equilibrium is in economics when supply and demand are equal. An example of equilibrium is when you are calm and steady.

Another take on this (stable vs unstable equilibrium): https://web.ma.utexas.edu/users/davis/375/popecol/lec9/equil....
 > Table's notion of antifragilitysigh, someday we'll have auto-correct that just works. Doesn't even need to be AI, just use words in the current page. Heck just use contextual info such as the capitalization. Somebody, please?
 shoo on Aug 21, 2020 "efficiency erodes resilience" is a good line, at a loss of symmetry. resilience also erodes efficiency.
Yes, it's basically a myopic under-appreciation of the current system. I was guilty of that; many are, I believe. We all strive for better, but sometimes our perspective is off. (Not to bring him up on every topic, but Alan Kay said that perspective is worth a lot of IQ points.)

In the list of anti-perfection patterns there's mechanical jitter: a catastrophe-avoiding relaxation.
 Relieved to learn that other people dwell on this as well. My anxiety stems from the idea that modern corporations are incentivized to ruthlessly optimize for efficiency--short-term gains--thereby outcompeting corporations that are structured for longer-term outlooks by engineering redundancy (which you call stability) into their processes. I don't know how to begin to incentivize the idea that efficiency is not the end-all.
See also "Slack", a book about this very issue: https://www.penguinrandomhouse.com/books/39276/slack-by-tom-...
It's funny, I've been sucked into Factorio after the 1.0 release announcement and this really rings true. If you make a production line that has perfect throughput with no buffers, then you get fantastic efficiency and productivity right up until a single train hits a biter and is delayed by 30s.

Then you spend 3 hours trying to deal with every machine being out of sync with every other machine, with constant starts/stops :(
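The effect is easy to reproduce in a toy simulation (my own sketch, with made-up rates: a consumer fed at 1 item/tick, a 30-tick supply outage, and a buffer that starts full):

```python
def idle_ticks(buffer_capacity, outage_start=50, outage_len=30, ticks=200):
    stock = buffer_capacity          # the buffer starts full
    idle = 0
    for t in range(ticks):
        # supply delivers one item per tick, except during the outage
        arriving = 0 if outage_start <= t < outage_start + outage_len else 1
        available = stock + arriving
        if available > 0:
            available -= 1           # the consumer processes one item
        else:
            idle += 1                # the consumer starves this tick
        stock = min(available, buffer_capacity)
    return idle

print(idle_ticks(0))    # no buffer: every outage tick stalls the line
print(idle_ticks(10))   # a small buffer absorbs part of the outage
print(idle_ticks(30))   # buffer sized to the outage: no stall at all
```

The zero-buffer line is the most "efficient" (no idle inventory sitting around), and also the one where a single delayed train propagates instantly to every downstream machine.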
There's a similar relationship between efficiency (convenience) and security (safety). Examples are everywhere. A centralized system (efficient) is a SPOF (not safe). Aggressive caching can be fast but unreliable. Adding a lot of features in a short time makes the code unstable, etc, etc.

People often focus on one thing and overlook the sacrifice they're really making. Everything has a tradeoff.
That concept of efficiency as the opposite of stability seems a bit fallacious in the strong case - inefficiency itself can and has caused systems to collapse, which is the exact opposite of stability.

Whether a cache/reserve helps or hurts efficiency is itself complicated and situational. Overfitting could make a system fragile, but it also depends upon relative costs and what a blind pursuit of efficiency goes after. If something is cheap enough, like data storage, there will be plentiful slack, because slimming it down is irrelevant to efficiency - why bother going with custom 4KB chips when economy of scale means a 250MB one is cheaper and better? It just isn't worth trying to cut corners there.

A laggard, damped system would take longer to get into a "bad state", assuming the environment doesn't demand rapid changes as the baseline to survive. "Bad state" is relative, as always - one can doom themselves both by leaping onto the "next big thing" which isn't, and by sticking to the buggy whip and musket when others have cars and automatic rifles.
I think you're both talking about different ideas here. Efficiency is good. But redundancy is also good (necessary, even, for a resilient system), and the problem is that you can always increase efficiency by removing redundancy, so it does get removed by short-sighted efficiency-optimizers.

Topically, in the past week we've seen two giant companies, Adobe and Canon, lose unimaginable amounts of user data. If they had had backups, which are a form of redundancy, this would not have been a problem. But the backups were too expensive--too inefficient--and so now customer trust in their service is absolutely destroyed.
If you are strong everywhere, you are weak everywhere. The article is just plain wrong. The problem is the lack of a strategic reserve, not efficiency in itself.
Similar to your argument, but another way to think about it is that efficiency adds dependency on a higher form of skill (the ability to multi-task) or a complicated system.

I dread the day when Google Maps, traffic stats, and Uber have actually delivered us the "perfect people transporter system", maximizing the heck out of existing infrastructure (cities, roads), and then the inevitable happens. Systems become too big to fail.
That’s one of the points of “The Goal”. As in, you need slack to be more productive over time. https://www.ebay.com/p/117280427

I leave the eBay link because last I checked used copies on Amazon were very pricey.
 It's still in print so you can just get a new copy instead.
I think of this as an optimization kind of problem. The word "efficiency" itself is only meaningful in the context of what's being made more efficient. A system could be "more efficient at becoming stable," for example.

But if by "efficiency" we limit ourselves to mean "the time-cost of a set of actions" (as in, the most efficient path is the one that takes the least time), we quickly encounter problems with maximizing usage of time and how that conflicts with unexpected work, which leads to the anti-stability you mentioned.

The way I think about it is that a 100% time-efficient process has zero time-flexibility. If you want to gain time-flexibility (e.g. the ability to pivot, or to work on different things too, or to introduce error bars to your calculations), you lose time-efficiency.
 Dr. Richard Hamming said in his lecture series “Life Long Learning” given at the Naval Postgraduate School: a perfectly engineered bridge would collapse if an extra pound was driven across it. I used to think this was a joke, or at least said in jest. Nope.
When I think about personal finance, I often think about efficiency. The obvious examples include buying in bulk, avoiding finance charges to optimize what you get for your money, paying insurance up front when the payment options charge extra, and just plain having fewer subscriptions to keep monthly expenses lower and easier to track.

All of this efficiency increases financial stability. I suppose if we argue that I'm only referring to optimization and not efficiency, then perhaps it's not a great argument.
The two are certainly related, but it feels like optimization is a bit different. Maybe the efficiency equivalent for personal finance would be keeping your bank balance at a minimum and immediately transferring any spare cash to paying down a mortgage, or into a fairly illiquid investment, because that's where the best returns are. But now if you have an unexpected expense you have to scramble to come up with funds.
 Ah yes, I think that makes it more clear.In this case, efficiency might be automating bill payments, but then you don't catch price changes, and depending on your other systems in place, you might miss overdrawing your account.
 The colloquial words may not be the best. It seems like it's a distinction between optimizing things for which there is essentially no meaningful downside except maybe a bit of thought and time. Versus setting things up so that they're the most efficient system in the current environment but may not handle unexpected events well and therefore might not be the best choice.
Rather than stability, think of it as robustness or flexibility. Past a certain point, efficiency is an enemy of robustness.

Buying in bulk is cool. If you buy a big package of paper towels, it's cheaper and you don't have to worry about running out. The fact that you have a cabinet full of paper towels isn't a big deal. But suppose you find a really great deal on a semi-trailer load of towels and stock up. Now you have your guest bedroom full of paper towels. The next week, your cousin Edgar's house burns down; you'd like to offer him and his wife a room to stay in temporarily, but you have all this paper in the way. You have lost some flexibility.

A bigger problem is, say, corporate supply chains. With just-in-time supply, you don't have to store inputs and can focus on producing outputs; it's very efficient. But then there's a pandemic or a fire in a factory somewhere, and the supply chain falls apart. Now your outputs are perhaps in greater demand, but you can't take advantage because you have no inputs. You're out of business for the duration. You can't flexibly respond.
JIT isn't the problem in your last example; it's a weak supply chain. If you have a singular source for some critical or necessary thing, you have a risk whether you're using JIT concepts or not - though a higher one if you take JIT to an extreme. Your sources need to be that, sources, and ideally geographically separated. See the various industry crises following natural disasters in Taiwan for why.

The proper, though not easy, thing to do is to have some primary sources for your materials and secondary sources to handle surges and supply issues with your primes. You may still have a production slowdown, but hopefully not a shutdown. And as with backups, if you don't use them you don't have them. You have to place orders with all sources, and use materials from all sources, to ensure that they are in fact up to snuff. It'd suck to buy your RAM from Foo until a flood, and then find that Bar's RAM doesn't actually work (or doesn't work with your motherboards).
Right. But having a single source is more efficient than multiple sources - you can integrate tighter with their ordering system, packaging, product quirks. Until it stops working.

Having all of your sources in one country is more efficient. Until it doesn't work.

Managing all of your materials just-in-time is more efficient. Until a backhoe hits the gas main in the street outside and you can no longer get trucks into your factory and have to shut down the line for the duration.

A company with a weak (but not completely idiotic) supply chain will have a significant margin over a company paying extra for a strong supply chain. Until something goes wrong.
 I guess my point is that, in everything I've read and experienced, JIT isn't about efficiency to an extreme (drop to 1 supplier, have 1 uberefficient factory, have 1 expert). You still maintain a buffer, and if possible multiple suppliers. The source of JIT (in manufacturing) is primarily Lean or the Toyota Production System. Those do not endorse zero-buffers or sole-sourcing for the purposes of efficiency. The purpose of reducing buffers is to expose inefficiencies elsewhere. And you find the right level of buffering for you based on your suppliers and risk tolerance.
But it is not simple to find the "right" level.

You do not have to live forever, so surviving an extinction-level event is more than the "right" level. Being so brittle that the failure of one delivery kills you is less than the "right" level. How long do you want to live? And when you die, how is it going to happen?

It is pleasing to see firms go out of business in an orderly fashion, such that they have the resources to shut up shop properly and everyone in the business gets "onto the life-rafts". But it is more common to see businesses go down in a screaming heap, with huge debts and big messes for others to clean up.

When I was at business school I was taught that a firm should have the correct amount of debt such that it maximised the tax shield of interest payments and had no fat that a hostile acquirer could use to do a hostile takeover. That was in 2007. I am quite sure the course teaches something different now!
 rdtwo on Aug 21, 2020 Just in time is synonymous with always late. Might work well with simple fungible goods but breaks down on complex stuff
 complex stuff like building cars?
I think so, but you’d have to ask insiders whether it works for them or in spite of it. In my industry it’s the latter: product arrives late, and there are enormous recovery costs that are rarely mentioned or properly accounted for; everybody up top just sort of pretends it’s not routine when it actually is. Typically these sorts of things don’t make it out to the general public in press releases, though. Holding a bit more product and some spares would have been far cheaper than the constant recovery effort.
A quality of most JIT systems I've experienced (and texts on the topic) is maintaining a sufficient buffer (alluded to in your last sentence). Taken to its extreme (that is, essentially zero inventory buffer), JIT only works if everything else works perfectly. That is an extreme position not endorsed by anyone or any text I have ever encountered. But it does remind me of the shitty takes on Agile I've seen (no planning, in particular), so seeing it in the wild would not surprise me.

"Sufficient" is dependent on a lot of factors, though. No one can tell you a definitive answer without knowing your system (your suppliers, your customers, your rates of production, your supply of capital to withstand a drop in production).
Funny that you mention shitty Agile - we do that too.
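The buffer-sizing point a few comments up can be made concrete with a toy simulation (all the numbers here are invented, not from the thread): the same stream of random supplier failures hits a line with no buffer, a one-day buffer, and a five-day buffer, and we count the days production halts.

```python
import random

def stockout_days(buffer_units, days=365, daily_use=10, seed=42):
    """Simulate a line consuming `daily_use` units/day. One day's worth
    is delivered each day, but ~5% of deliveries fail (supplier hiccup).
    Returns how many days production halted for lack of inventory."""
    rng = random.Random(seed)
    inventory = buffer_units
    halted = 0
    for _ in range(days):
        if rng.random() > 0.05:          # delivery arrived today
            inventory += daily_use
        if inventory >= daily_use:       # enough stock to run today?
            inventory -= daily_use
        else:
            halted += 1
    return halted

for buf in (0, 10, 50):                  # zero, one-day, five-day buffer
    print(f"buffer {buf:3d} units -> {stockout_days(buf)} halted days")
```

Because delivery exactly matches consumption here, each extra day of buffer absorbs exactly one more missed delivery and is never rebuilt - which is the point of the comment: the buffer is pure "inefficiency" right up until the day it isn't.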
It increases your financial stability under the assumption that your life situation evolves in a predictable way. Say you need to cover unforeseen medical expenses, or you develop an allergy to one of the foods you bought in bulk (sorry, that one was a bit contrived), etc. - well, then you'd suddenly wish you'd held off on buying that pallet of canned soup.

On the other hand, if you only buy your food day to day, that is certainly more like JIT logistics, prevents waste & storage space needs, etc., but it screws you if you can't leave your house and the stores get closed due to some... ahem... what might possibly happen that forces you to stay inside.

So it's always a matter of your frame of reference, I guess.
 > paying insurance up front when the payment options charge extraDepending on the cost of the insurance, that sounds to me like a drop in stability: you have infrequent periodic large payments to make instead of frequent smaller payments to make. If you had an unexpected expense arise near the time of the large insurance payment, your financial situation could get temporarily bad; if instead your insurance was small payments on a monthly basis, the unexpected expense would be easier to ride out.[Note that I pay all of my large expenses in lump sums instead of in small trickles, but that's mostly psychology on my part not efficiency or optimization]
> Whether it's about the economy at large, your own household, a supply chain, what have you - as soon as you optimize for efficiency by removing friction, you take all the slack/damping out of the system and become instantly more liable to catastrophic failure if some of your basic conditions change

I would hope that the fragility of JIT supply chains was laid bare for everyone in the Covid crisis, but I expect that lesson will soon be forgotten.
 Reminds me of something six sigma manufacturing types call "building monuments." This refers to over-investing in optimization and automation to the point that there is too much sunk cost. As soon as something changes you're in trouble and all that sunk cost is gone, or even worse you can be stuck and unable to produce until you've retooled significantly.
 Maybe the problem is that efficiency and robustness are orthogonal?
If you think about the two-dimensional efficiency/robustness space you can pretty easily see how it's isomorphic to the return/volatility space by way of simple transformations. If you allow yourself to be convinced that such a bridge exists, you can also bring tools from the return/volatility space back into the efficiency/robustness space by applying the appropriate inverse transformations. Maybe the conclusions are common sense, but I'd still read a blog post framing that process by way of analogy.
 Interesting idea, thanks for sharing.
 aaron-santos on Aug 21, 2020 EDIT: If you allow yourself.. :/
 They're orthogonal for non-trivial decisions. Once you hit the efficient frontier of anything, you need to start making trade-offs.
That's a much more reasonable statement.

Add to that: if you optimize one of a set of orthogonal values, the others tend to decrease. And so you get all the people claiming there's an intrinsic relation between them, in the face of a world of evidence.
 They at least aren't in direct tension.
Up to a point, being inefficient conflicts with robustness. If you're profligately wasting a resource, you are in for a hard time if that resource dries up.

On the other hand, past a point, efficiency conflicts with robustness. To maximize efficiency, you become tightly coupled to some resource and again have a hard time if that resource dries up.
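The statistics example from the top of the thread (sample mean vs trimmed mean) illustrates this "past a point" tradeoff concretely. Here is a stdlib-Python sketch with invented data: on clean data the full mean uses all the information, but once a few readings are corrupted it loses badly to the trimmed mean.

```python
import random
import statistics

def trimmed_mean(xs, trim=0.1):
    """Drop the lowest and highest `trim` fraction of values, average the rest."""
    xs = sorted(xs)
    k = int(len(xs) * trim)
    return statistics.fmean(xs[k:len(xs) - k])

rng = random.Random(0)
clean = [rng.gauss(0, 1) for _ in range(1000)]   # true mean is 0
corrupt = clean[:]
for i in range(50):                               # 5% of readings corrupted
    corrupt[i] = 1000.0

print(statistics.fmean(clean), trimmed_mean(clean))       # both near 0
print(statistics.fmean(corrupt), trimmed_mean(corrupt))   # full mean is ruined
```

The full mean of the corrupted sample lands near 50, while the trimmed mean discards the corrupted tail and stays close to 0 - robustness bought by throwing information away.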
 I think this requires more thorough definition. Obviously you can be inefficient without being stable. E.g. I could drive my car at 5 miles an hour to work which would neither be efficient nor stable.
 It certainly would be stable. A very moderate increase in speed would allow you to make good on a substantial delay for any kind of reason.
 It would not be stable as it would significantly increase the risk of accidents, road rage, or being pulled over for going too slow.
 That instability is not because of the 5mph but because you're driving 5mph in a 60mph zone.You could drive 5mph on back roads with less risk of course.
shoo on Aug 21, 2020 agreed. the two objectives of efficiency and robustness are not necessarily in conflict in all situations, but if you start trying to optimise a given system focusing on only one objective, the other one may degrade arbitrarily. better to define what tradeoff would be a good deal: how much efficiency would you be willing to lose to gain 1 unit of robustness, etc. - then optimise both objectives taking into account your preferred exchange rate.

there's plenty of examples of this kind of thing in engineering design situations. it's cheaper (i.e. more efficient usage of capital, at least in the short run) not to allocate resources for backups, or for extra capacity in systems that isn't used 95% of the time. it's much more expensive to dig two spatially separated trenches to lay independent paths of fibre optic cable to a given building, but if you cough up the money for that inefficient redundant connection, your internet will have decreased risk of interruption by rogue backhoes. it's cheaper to not hire enough staff and to let individuals in a team over-specialise in their own areas of knowledge, rather than having enough spare capacity and knowledge-sharing to cover when people get sick, go on holiday or quit.
 Right, there has to be some boundaries defined for what's in "reason".
> I've found "efficiency as the opposite of stability" a very powerful concept to think about

I think this concept misses capacity. In my opinion, it is crucial that you always leave some over-capacity to have stability (let's say you run at most at 80% capacity). If you then increase your efficiency without sacrificing your buffer capacity, everything is fine. But as soon as you try to run at more than 80% capacity to be more efficient, the slightest problem can have devastating effects.
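The 80% intuition above has a classic formalization in queueing theory: for an M/M/1 queue, the mean time a job spends in the system is W = 1/(μ − λ), so the delay penalty explodes nonlinearly as utilization ρ = λ/μ approaches 100%. A minimal sketch (unit service rate assumed for illustration):

```python
def time_in_system(utilization, service_rate=1.0):
    """Mean time a job spends in an M/M/1 queue (waiting + service).
    W = 1 / (mu - lambda), with lambda = utilization * mu."""
    mu = service_rate
    lam = utilization * mu
    return 1.0 / (mu - lam)

for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
    # 50% -> 2x, 80% -> 5x, 99% -> 100x the bare service time
    print(f"{rho:.0%} utilization -> {time_in_system(rho):6.1f}x service time")
```

Going from 80% to 99% utilization buys roughly 24% more throughput but makes every job take about 20 times longer - which is exactly the "slightest problem has devastating effects" regime.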
 Efficiency is a broad concept. When it comes to energy, I don't consider stability incongruous with efficiency at all: the less you waste over time, the more stable your electricity demand growth, which saves money and improves QoL for citizens.In general, learning how to do things better can produce efficiency and more stability too, if the way is better all around.
 This is also the primary topic of “blue ocean strategy” — with a difference there that running without slack causes the system to grind to a halt and or run in “emergency rush order mode” much of the time. In such a case efficiency and resilience/stability have some linear dependence. Sometimes these “opposites” are actually working together.
 Apparently also the reason why plants forgo green light (i.e. the most abundant part of the solar spectrum): https://www.quantamagazine.org/why-are-plants-green-to-reduc...
 This efficiency vs. resilience trade off seems to be a general pattern, also for organic systems.https://www.quantamagazine.org/why-are-plants-green-to-reduc...
That makes sense. If efficiency is achieving maximum ROI, then the best-ROI activities are generally the highest-risk ones, which have the most chance of failing and are thus least stable.
The rule holds true for switchmode power supplies. Sometimes you have to add a resistance (inefficiency!) in series with the output capacitor to achieve stability.
But it's even more effective and efficient to use a transfer resistor.
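A back-of-the-envelope version of the damping point above, modelling the converter's output stage as a plain series RLC circuit (component values invented for illustration): the series resistance is what sets the damping ratio ζ = (R/2)·√(C/L), so with zero resistance the filter is undamped and rings indefinitely.

```python
import math

def damping_ratio(R, L, C):
    """Damping ratio of a series RLC circuit: zeta = (R/2) * sqrt(C/L).
    zeta = 0 -> undamped oscillation; zeta >= 1 -> no ringing at all."""
    return (R / 2.0) * math.sqrt(C / L)

L_out, C_out = 10e-6, 100e-6           # 10 uH, 100 uF output filter
for R in (0.0, 0.1, 0.32, 0.64):       # series resistance in ohms
    print(f"R = {R:4.2f} ohm -> zeta = {damping_ratio(R, L_out, C_out):.2f}")
```

Every ohm of that resistance dissipates power (inefficiency), but it is precisely what moves ζ away from zero and tames the loop.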
 "stability" strikes me as the wrong word there; maybe "resilience" or "flexibility"?
 I'd say resilience instead of stability, but I agree that it's worth thinking about in many contexts.
 Very interesting thought! Although I’d probably call it anti-fragility or resilience instead of stability.
 Another way I like to state this is "bottlenecks are beautiful design devices".
 Following this mindset, blockchain can be described as low efficiency and high stability?
 There's always someone making weird correlations between Blockchain/Cryptocurrency and some random idea.
This isn’t really correct. A lot of things are neither stable nor efficient.
> A lot of things are neither stable nor efficient.

Yes, and those are the shitty things. But when you want to improve them, make them non-shitty, you're facing a choice: stability or high performance - pick one.
 xwdv on Aug 22, 2020 Life is short and we’re not building generational ships so let’s just go fast and efficient and if it crashes it crashes, but by then the lion’s share of profits will have been made and maybe even spent.
 I can't help but think of government (or management).
The best efficiency arises from simplicity. Yes, haste makes waste, and a system that's constantly changing will be unstable. But efficiency itself is neither haste nor churn - in fact, the opposite.
 But isn't this a subtle diss against capitalism or a purely market based system? A free market results in _efficient_ allocation of capital, resulting in a _fragile_ allocation.
Also in government. It’s really good that they are slow and inefficient (although it would be nicer if they were less wasteful).

There is little worse than a very efficient government.
A very efficient singular planetary government (or excessive international homogenization of approaches) seems like one plausible example.

To me, this very conversation is the approach we should be taking to the major problems du jour on the planet (treating it as an incredibly complex system with an infinite number of interacting variables, many of which we do not even know exist). But it seems as if once a system reaches a certain level of complexity, we lose the ability even to realize that it is in fact a complex system, and insist upon discussing it only in simplistic terms. Or maybe it's the fact that we are embedded within the system that makes it impossible to see.
