Whether it's about the economy at large, your own household, a supply chain, what have you - as soon as you optimize for efficiency by removing friction, you take all the slack/damping out of the system and become instantly more liable to catastrophic failure if some of your basic conditions change. Efficiency gives you a speed bonus, at the cost of increased risk / less resilience to unforeseen events.
Stewart Brand's concept of "Pace Layering" comes to mind for how to deal with this at a systemic level - https://jods.mitpress.mit.edu/pub/issue3-brand/release/2
In statistics, there is a slight variant of this thesis that is true in a precise formal sense: the tradeoff between efficiency and "robustness" (stability in a non-ideal situation).
For example, if you have a population sample, the most efficient way to estimate the population mean from your sample is the sample mean. But if some of the data are corrupted, you're better off with a robust estimator - in this case, a trimmed mean, where the extreme N% of high and low values are discarded.
The trimmed mean is less efficient in the sense that, if none of the data are corrupted, it discards information and is less accurate than the full mean. But it's more robust in the sense that it remains accurate even when a small-to-moderate % of the data are corrupted.
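As a quick illustration of that robustness/efficiency tradeoff, here's a Python sketch (the synthetic data, the 5% corruption, and the 10% trim level are all arbitrary choices for the demo):

```python
import random

def trimmed_mean(xs, trim_frac=0.1):
    """Mean after discarding the lowest and highest trim_frac of values."""
    xs = sorted(xs)
    k = int(len(xs) * trim_frac)
    kept = xs[k:len(xs) - k] if k else xs
    return sum(kept) / len(kept)

random.seed(0)
# A clean sample whose population mean is 50.
sample = [random.gauss(50, 5) for _ in range(1000)]
# Corrupt 5% of the data with wild values.
corrupted = [10_000] * 50 + sample[50:]

plain = sum(corrupted) / len(corrupted)  # dragged far away from 50
robust = trimmed_mean(corrupted)         # stays close to 50
```

On the clean sample the plain mean is the more accurate (efficient) estimator; on the corrupted one the trimmed mean wins, which is exactly the tradeoff described above.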
Rather than robustness, I prefer the term resilience, a dynamic quality, since efficiency is also a dynamic quality: you can trade efficiency for resilience and vice versa (the term the parent poster switched to later).
I should add that I don't entirely agree with the thesis of the article, which exhorts us to slow down, thereby trading efficiency away for resilience. There are a number of ways to add resilience (and trade away efficiency); slowing down might sometimes be the best of them, but it's certainly not the only option, and in most cases not the best one.
for housing, an example used in the article, we could add more housing to create resilience, which requires reducing friction, like increasing the throughput of permitting/inspections while generally reducing zoning/regulations.
The context here is that efficiency can remove layers of redundancy, allowing disruptions to wreak more havoc; I believe that's what OP was getting at.
But inefficiency isn't necessarily more robust unless the extra bits serve some purpose.
It seems already the 21st century is seeing a more balanced emphasis on theory vs. real world applications though.
The kind of outlier-culling technique suggested by civilized is not recommended these days because it adds unprincipled choice points to what Andrew Gelman calls the 'Garden of Forking Paths' [1, 2]. Thus it is bad for hypothesis testing, which tends to be what most statisticians care about.
Additionally the technique obscures the relationship between the variance of the sample and the population variance if we do not have reliable knowledge of the population distribution; likewise for the mean if the mean is not close to the mode. These problems can be quite dramatic for long-tailed distributions.
1. Trimming for mean estimation, which removes extreme values in an algorithmic fashion
2. Subjective removal of outliers based on researcher judgment (this is the garden of forking paths Gelman talks about)
3. Estimating other distributional properties, such as the variance, with trimmed estimators
These are all different things and come with different theoretical and practical risks and benefits. Trimmed means are perfectly good statistical tools, although they have their limitations like anything else.
The choice of N used in cutting out the N% most extreme results is not determined by widely accepted statistical best practice. Hence it is a source of forks. The algorithm might be deterministic but the choice of this parameter isn't.
My discussion of distributional properties was another issue concerning this technique. You seem to have missed the point that dropping extreme points can also lead to biased estimates of the mean.
Ten years ago, dropping outliers was considered good practice in the social sciences. Today, it has become a reason for rejection in peer review. There are better techniques for dealing with noisy data, such as adding measurements to data points to measure "badness" that can then be adjusted for in a multi-level model.
Similarly, any estimator can be biased if its assumptions are violated, so I'm not sure why the potential bias of the trimmed mean in particular is an interesting point.
I'm sure that social science peer reviewers have their reasons for their methodological preferences, but trimmed means are great workhorses in other areas of science, like signal processing.
The critique strikes me as potentially valid in its subfield but a bit parochial if it is attempting generality.
I didn't reply to you, but to goodsector, who claimed that statisticians focus on efficiency at the expense of reliability. I dispute this.
> Essentially, a more efficient estimator, experiment, or test needs fewer observations than a less efficient one to achieve a given performance.
I think the comment is drawing a parallel to variance (better efficiency = lower variance). Still not exactly the same, I think, but pretty damn similar.
But, erring on the side of efficiency in this discussion is more like over-fitting, which implies an overly complex model. It's making your model too good for one situation, such that it fails to generalize. You'd rather pull back on accuracy and choose a simpler model, in the hopes that it's more resilient to novel observations.
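A toy Python sketch of that over-fitting intuition; everything here is invented for illustration, with "memorize every point" standing in for an overly complex model:

```python
import random

random.seed(1)

def noisy(x):
    """Ground truth y = 2x plus observation noise."""
    return 2 * x + random.gauss(0, 1)

train = [(x, noisy(x)) for x in range(20)]
test = [(x, noisy(x)) for x in range(20)]  # same xs, fresh noise

# Overly complex "model": memorize every training point exactly.
memo = dict(train)

# Simple model: least-squares line through the training data.
n = len(train)
mx = sum(x for x, _ in train) / n
my = sum(y for _, y in train) / n
slope = sum((x - mx) * (y - my) for x, y in train) / sum((x - mx) ** 2 for x, _ in train)
intercept = my - slope * mx

def mse(predict, data):
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

train_err_memo = mse(lambda x: memo[x], train)  # exactly 0: a "perfect" fit
test_err_memo = mse(lambda x: memo[x], test)    # pays for it on fresh noise
test_err_line = mse(lambda x: slope * x + intercept, test)
```

The memorizer scores perfectly on the data it was tuned to, and typically worse than the simple line on fresh draws from the same process; that is the "too good for one situation" failure mode.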
There's another interesting aspect to this in that things that are failures from some perspectives may not be from others.
If stripping resiliency out of a company nets enough savings in the short term, it may still be profitable to the owners even if it's long-term fatal.
As a hypothetical example, let's say you take a company making $1M a year and trim $19M a year of costs out of it. The company lasts another 10 years and then collapses. You've netted an extra $190M out of that company, or nearly 200 years at their previous rate.
In that case, it's in your local interest to strip the company bare, even if it's not necessarily optimal for your partners, workers, society, or any other stakeholder in this wonderful interconnected world of ours. The benefits are concentrated, the costs are distributed, and there's no mechanism for connecting the two.
Even better, since you now don't actually even need to keep the company alive for 10 years since you've already got your profit, you can sell the company's assets now and increase your profit!
Except there's not a company in the world that this logic doesn't apply to (at some point). Asset-stripping has killed off a few old companies that were ready for it, sure. But it's also destroyed lives and communities that didn't deserve that.
There is more to life than money, and one of the things that this article is talking about is that we need to recognise that.
Does the owner of the company have a right to take risks with the business? It is a serious impairment of ownership if not, and will likely lead to more-stagnant societies. The entire engine of America’s superior prosperity (even at the individual level) has been based on risk taking, while stagnant systems have their own problems (Greece, Italy come to mind with pre-covid crises) and can be a vector of corruption as the principal-agent problem remains unsolved and those entrusted with the well being of the community often work to enrich and empower themselves instead.
This is not to say that we cannot say the community ought to have more of a say, this is merely to point out that there is a tradeoff that affects society in general. If we are to succeed in making this tradeoff it will be in part by better aligning the interests of business owners and the community, and we should be aware of how hard a problem that is when we go to attack it.
This is a story that America tells itself. It's not necessarily true.
And currently the USA is in enormous debt, partly because of the vast cost of bailing out its financial institutions and large corporations. "Risk-taking" is increasingly only being done by private individuals. Large American businesses are certainly not being exposed to the results of their risks - they're being bailed out, socialising the risk but privatising the reward.
In this case, if the community is shouldering the responsibility of bailing out companies that are in danger of collapsing, shouldn't there be some "impairment of ownership" as you put it? Aren't those communities entitled to ask that the company is run for their benefit too?
I'm not even sure it's necessarily an intrinsically bad pattern - there can be short-term opportunities and circumstances that make a strategy worthwhile for a number of years but not further. I think the issue broadly is twofold. First is the collateral damage - companies forming and dissolving is fine for the investors but murder for the employees, whose livelihoods and health insurance become precarious. Second is that we've applied this to the entire economy in a way that makes us incredibly vulnerable to systemic shocks - see America's toilet paper supply between the months of March and July. Again, on an individual company-wide basis, it might still have been more profitable for Procter & Gamble to do whatever the hell it is they did to make 2-ply an impossible technology to reproduce domestically in 2020, but on an economy-wide basis, the fact that Everyone did it was a goddamn disaster.
That's a problem with the social safety net, not a problem with companies failing.
A lot of companies are going to fail even without someone actively trying to drive them out of business.
Some industries just have collapses in demand and no longer do something that anyone wants. Some companies are just mismanaged.
We need to provide support for the employees who are victims of companies collapsing, not try to prevent any company from ever collapsing.
For an awful lot of people, it basically is - I mean, practically the entire field of finance and professional management would look at the example I gave and say, "that's exactly the right thing to do." Mitt Romney's entire professional career follows that principle, and it's largely seen as a net positive to his political career (as a Republican).
I agree with you, I think it's an enormously damaging philosophy, both to the bearer and to the rest of us, but again, it may be locally optimal.
1. Every person accrues $10
2. One person, "Bob", accrues $100, everyone else accrues $1
This is what I mean by local vs global interest - roughly, "in the interest of a given individual" vs "the best outcome across all individuals."
In some cases the old king of the conquered kingdom depended on his lords. 16th century France, or in other words France as it was at the time of writing of The Prince, is given by Machiavelli as an example of such a kingdom. These are easy to enter but difficult to hold.
When the kingdom revolves around the king, with everyone else his servant, then it is difficult to enter but easy to hold. The solution is to eliminate the old bloodline of the prince. Machiavelli used the Persian empire of Darius III, conquered by Alexander the Great, to illustrate this point and then noted that the Medici, if they think about it, will find this historical example similar to the "kingdom of the Turk" (Ottoman Empire) in their time – making this a potentially easier conquest to hold than France would be.
Machiavelli is saying the opposite of what you think he's saying.
He's saying that France's governmental structure makes it (relatively) easy to conquer. Because there are many quasi-independent, competing fiefs, an invader is not necessarily facing a unified front, and may in fact be able to recruit dissatisfied lords to their cause. But that doesn't make it the kind of place you'd want to rule, because once you've conquered it, it's (relatively) easy for another invader to conquer you for the same reasons.
In contrast, there were no fiefdoms in Persia. Unlike the lords in France, the regional rulers in Persia were chosen by the state, and picked for their loyalty. When invading Persia, you are far more likely to face a united front, making it (relatively) difficult to conquer. That said, once you've conquered it, it would be (relatively) easy to hold for the same reasons.
Otherwise, we are saying the same thing.
Systems theory has the concepts of "gain margin" and "phase margin" -- how much you can amplify feedback or delay feedback, respectively, before your self-adjusting feedback mechanism fails to find equilibrium and turns into an oscillator.
Even though most non-engineering systems don't fit the mathematical theory, the idea that only a finite amount of gain + delay is available, and that the two are somewhat inter-convertible, generalizes astoundingly well.
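A minimal discrete-time sketch of that inter-convertibility, using a toy correction loop of my own choosing (an illustration of the idea, not the actual gain/phase margin math):

```python
def settles(gain, delay, steps=400):
    """Simulate x[t+1] = x[t] - gain * x[t-delay], starting from x = 1,
    and report whether the feedback loop converges toward zero."""
    hist = [1.0] * (delay + 1)
    for _ in range(steps):
        nxt = hist[-1] - gain * hist[-1 - delay]
        hist.append(nxt)
        if abs(nxt) > 1e6:  # blown up: oscillation growing without bound
            return False
    return abs(hist[-1]) < 1e-3

print(settles(gain=1.5, delay=0))  # True: high gain with no delay still settles
print(settles(gain=1.5, delay=1))  # False: same gain plus one step of delay oscillates
print(settles(gain=0.5, delay=1))  # True: the delay is affordable at lower gain
```

The same loop tolerates high gain or delay, but not both at once, which is the finite, somewhat inter-convertible budget described above.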
> Systems theory
I'm confused. Is it a law or a theory? And no, it can't be both. Laws are proved. Theories are unproven.
"Systems Theory" in above refers to a field of study, not a name for the idea they were discussing.
And your assertion about Laws and theories is incorrect.
It may be useful, also, for you to read these two pages:
The difference between Law and Theory (in scientific discourse) is not what you believe it to be.
1. A law gives you a relationship with no mechanism. It almost always appears as a mathematical formula. You know how the pieces change with respect to another, but not why.
2. A theory gives you a mechanism. It tells you why. On rare occasions, it will not provide quantifiable predictions, in which case it is a qualitative theory.
> Laws are proved. Theories are unproven.
This is not what is used to identify something as either a law or a theory in scientific discourse.
First, "laws" may be disproven, or falsified: Newton's Law of Gravity could actually be considered disproven as it is not accurate at all scales, but it's accurate enough within its scope to continue using it. It's considered a law in the sense that it matches empirical, observed data (within certain bounds). See  for details on that. So that, right there, is a flaw in your understanding.
Second, laws do not attempt to offer an explanation of the phenomenon they describe, they offer predictive value like "a ball dropped from 50 meters will, at time t, have velocity ...". Again, Newton's Laws do not explain why gravity works, only offering a model to calculate the effect of it. This brings us to theories.
Theories are, again, falsifiable via empirical evidence (like laws), but they offer an attempt at explanation. A theory of gravity would try to explain why objects are attracted to each other and why mass affects the amount of attraction. The theory can be shown to be false, but like a law it can only be shown to match empirical data. This is not the same as proven.
TLDR: The distinguishing characteristic is that both attempt to predict, but theories attempt to explain. Both are falsifiable, and neither is considered proven; they are only shown to match empirical data (possibly within some constraints).
“Hypothesis. Theory. Law. These scientific words get bandied about regularly, yet the general public usually gets their meaning wrong.”
And the idea is also applicable to trajectories rather than singular points: if you change your starting point by an epsilon, would the trajectory be vastly similar or a bit different? or very different? cue in Lyapunov fun and Lipschitz continuity as a metric and, to a lesser extent, conditions for chaotic trajectories to emerge.
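The classic toy model for this is the logistic map; a quick sketch (r = 3.9 picked to sit in the chaotic regime, perturbation size arbitrary):

```python
def logistic_trajectory(x0, r=3.9, steps=50):
    """Iterate the logistic map x -> r * x * (1 - x) from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)  # start an epsilon away

early_gap = abs(a[5] - b[5])  # trajectories still agree closely
late_gap = max(abs(a[t] - b[t]) for t in range(40, 51))  # no longer related
```

With r below roughly 3.57 the two trajectories would instead stay epsilon-close forever, which is the "a bit different vs very different" distinction the parent draws.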
In some systems you want to generate constant oscillations, so your system can be in stable perpetual dynamic equilibrium producing a nice sine wave.
Along these terms, I think equilibrium doesn't really make sense - it means stability in many definitions. From the top google result:
> Equilibrium is defined as a state of balance or a stable situation where opposing forces cancel each other out and where no changes are occurring. An example of equilibrium is in economics when supply and demand are equal. An example of equilibrium is when you are calm and steady.
Another take on this: (Stable vs unstable equilibrium)
sigh, someday we'll have auto-correct that just works. Doesn't even need to be AI, just use words in the current page. Heck just use contextual info such as the capitalization. Somebody, please?
In the list of anti-perfection patterns there's mechanical jitter: a catastrophe-avoiding relaxation.
If you make a production line that has perfect throughput with no buffers then you get fantastic efficiency and productivity right up until a single train hits a biter and is delayed by 30s.
Then you spend 3 hours trying to deal with every machine being out of sync with every other machine with constant start/stops :(
People often focus on one thing and overlook the sacrifice they're really making. Everything has a tradeoff.
Whether a cache/reserve helps or hurts efficiency is itself complicated and situational. Overfitting could make a system fragile, but it also depends on the relative costs a blind pursuit of efficiency is chasing. If something is cheap enough, like data storage, there will be plentiful slack, because slimming it down is irrelevant to efficiency: why bother with custom 4KB chips when economies of scale make a 250 MB one cheaper and better? It just isn't worth trying to cut corners there.
A laggard, damped system would take longer to get into a "bad state", assuming the environment doesn't demand rapid change as the baseline for survival. "Bad state" is relative, as always: one can doom oneself both by leaping onto the "next big thing" which isn't, and by sticking with the buggy whip and musket when others have cars and automatic rifles.
Topically, in the past week we've seen two giant companies, Adobe and Canon, lose unimaginable amounts of user data. If they had had backups, which are a form of redundancy, this would not have been a problem. But the backups were too expensive--too inefficient--and so now customer trust in their service is absolutely destroyed.
I dread the day when Google Maps, traffic stats, and Uber have actually delivered us the "perfect people transporter system", maximizing the heck out of existing infrastructure (cities, roads), and then the inevitable happens.
Systems become too big to fail.
I leave the eBay link because last I checked used copies on amazon were very pricey.
A system could be "more efficient at becoming stable," for example.
But if by "efficiency" we limit ourselves to mean "the time-cost of a set of actions," (as in the most efficient path is the one that takes the least time), we quickly encounter problems with maximizing usage of time and how that conflicts with unexpected work, which leads to the anti-stability you mentioned.
The way I think about it is that a 100% time-efficient process has zero time-flexibility. If you want to gain time-flexibility (e.g. the ability to pivot, or to work on different things too, or to introduce error bars to your calculations), you lose time-efficiency.
All of this efficiency increases financial stability. I suppose if we argue that I'm only referring to optimization and not efficiency, then perhaps it's not a great argument.
In this case, efficiency might be automating bill payments, but then you don't catch price changes, and depending on your other systems in place, you might miss overdrawing your account.
Buying in bulk is cool. If you buy a big package of paper towels, it's cheaper and you don't have to worry about running out. The fact that you have a cabinet full of paper towels isn't a big deal. But suppose you find a really great deal on a semi-trailer load of towels and stock up. Now you have your guest bedroom full of paper towels. The next week, your cousin Edgar's house burns down; you'd like to offer him and his wife a room to stay in temporarily, but you have all this paper in the way. You have lost some flexibility.
A bigger problem is, say, corporate supply chains. With just-in-time supply, you don't have to store inputs and can focus on producing outputs; it's very efficient. But then there's a pandemic or a fire in a factory somewhere, and the supply chain falls apart. Now your outputs are perhaps in greater demand, but you can't take advantage because you have no inputs. You're out of business for the duration. You can't flexibly respond.
The proper, though not easy, thing to do is have some primary sources of your materials and secondary sources to handle surges and supply issues with your primes. You may still have a production slowdown, but hopefully not a shutdown. And like with backup, if you don't use them you don't have them. You have to place orders with all sources, and use materials from all sources to ensure that they are in fact up to snuff. It'd suck to buy your RAM from Foo until a flood, and then find that BAR's RAM doesn't actually work (or doesn't work with your motherboards).
But, having a single source is more efficient than multiple sources---you can integrate tighter with their ordering system, packaging, product quirks. Until it stops working.
Having all of your sources in one country is more efficient. Until it doesn't work.
Managing all of your materials just-in-time is more efficient. Until a backhoe hits the gas main in the street outside and you can no longer get trucks into your factory and have to shut down the line for the duration.
A company with a weak (but not completely idiotic) supply chain will have a significant margin over a company paying extra for a strong supply chain. Until something goes wrong.
You do not have to live forever, so surviving an extinction-level event is more than the "right" level.
Being so brittle that the failure of one delivery kills you is less than the "right" level.
How long do you want to live? And when you die, how is it going to happen?
It is pleasing to see firms go out of business in an orderly fashion, such that they have the resources to shut up shop properly and everyone in the business gets "onto the life-rafts". But it is more common to see businesses go down in a screaming heap with huge debts and big messes for others to clean up.
When I was at business school I was taught that a firm should carry the amount of debt that maximised the tax shield of interest payments while leaving no fat that an acquirer could use to fund a hostile takeover.
That was in 2007. I am quite sure the course teaches something different now!
"Sufficient" is dependent on a lot of factors, though. No one can tell you a definitive answer without knowing your system (your suppliers, your customers, your rates of production, your supply of capital to withstand a drop in production).
On the other hand, if you only buy your food day to day, that is certainly more like JIT logistics, prevents waste & storage space needs, etc., but it screws you if you can't leave your house and the stores get closed due to some... ahem... what might possibly happen that forces you to stay inside.
So it's always a matter of your frame of reference, I guess.
Depending on the cost of the insurance, that sounds to me like a drop in stability: you have infrequent periodic large payments to make instead of frequent smaller payments to make. If you had an unexpected expense arise near the time of the large insurance payment, your financial situation could get temporarily bad; if instead your insurance was small payments on a monthly basis, the unexpected expense would be easier to ride out.
[Note that I pay all of my large expenses in lump sums instead of in small trickles, but that's mostly psychology on my part not efficiency or optimization]
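The buffer point above can be sketched numerically; every figure below is invented for illustration:

```python
def min_balance(payments, income=2400, spending=2200, start=1000,
                surprise_month=6, surprise=900):
    """Lowest month-end balance over a year, given a bill schedule,
    fixed monthly income/spending, and one surprise expense."""
    balance = low = start
    for month, bill in enumerate(payments):
        balance += income - spending - bill
        if month == surprise_month:
            balance -= surprise
        low = min(low, balance)
    return low

annual_lump = [0] * 6 + [2400] + [0] * 5  # one big premium, same total
monthly = [200] * 12                      # spread into small payments

print(min_balance(annual_lump))  # -900: the lump plus the surprise overdraws
print(min_balance(monthly))      # 100: the spread schedule absorbs the surprise
```

Same total paid either way; the lump schedule just concentrates the draw-down at one point, so an ill-timed surprise pushes the account negative.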
I would hope that the fragility of JIT supply chains was laid bare for everyone in the Covid crisis but I expect that lesson will soon be forgotten.
Add to that: if you optimize one of a set of orthogonal values, the others tend to decrease. And so you get all the people claiming there's an intrinsic relation between them, in the face of a world of evidence.
On the other hand, past a point, efficiency conflicts with robustness. To maximize efficiency, you become tightly coupled to some resource and again have a hard time if that resource dries up.
You could drive 5mph on back roads with less risk of course.
There are plenty of examples of this kind of thing in engineering design. It's cheaper (i.e. a more efficient use of capital, at least in the short run) not to allocate resources for backups, or for extra capacity that goes unused 95% of the time. It's much more expensive to dig two spatially separated trenches to lay independent paths of fibre-optic cable to a given building, but if you cough up the money for that inefficient redundant connection, your internet has a decreased risk of interruption by rogue backhoes. It's cheaper to understaff and let individuals in a team over-specialise in their own areas of knowledge than to maintain enough spare capacity and knowledge-sharing to cover when people get sick, go on holiday or quit.
I think this concept misses capacity. In my opinion, it is crucial to always leave some over-capacity to have stability (let's say you run at most at 80% capacity). If you then increase your efficiency without sacrificing your buffer capacity, everything is fine. But as soon as you try to run above 80% capacity to be more efficient, the slightest problem can have devastating effects.
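That ~80% figure lines up with basic queueing theory: in an M/M/1 queue (Poisson arrivals, exponential service; idealized assumptions that real systems only approximate), the expected time in the system is service_time / (1 - utilization), which blows up as utilization approaches 100%:

```python
def avg_time_in_system(utilization, service_time=1.0):
    """Expected time in an M/M/1 queue: service_time / (1 - utilization)."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1 - utilization)

for u in (0.5, 0.8, 0.95, 0.99):
    print(f"{u:.0%} utilized -> {avg_time_in_system(u):.0f}x the service time")
```

Going from 80% to 95% utilization quadruples the average wait, so the last slice of "unused" capacity is exactly what keeps response times sane under small perturbations.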
In general, learning how to do things better can produce efficiency and more stability too, if the way is better all around.
Sometimes you have to add a resistance (inefficiency!) in series with the output capacitor to achieve stability.
Yes, and those are the shitty things.
But when you want to improve them, make them non-shitty, you're facing a choice: stability, or high performance - pick one.
But efficiency itself is neither haste nor churn; in fact, the opposite.
There is little worse than a very efficient government.
To me, this very conversation is the approach we should be taking to the major problems du jour on the planet (treating it as an incredibly complex system with an infinite number of interacting variables, many of which we do not even know exist). But it seems as if once a system reaches a certain level of complexity, we lose the ability to even realize that it is in fact a complex system, and insist upon discussing it only in simplistic terms. Or maybe it's the fact that we are embedded within the system that makes it impossible to see.
One makes a tradeoff by reducing overheads and buffers, and the other doesn't have any tradeoffs, it's just a better way of doing things based on novel techniques.
Perhaps also Chesterton's fence.
Maybe also the whole premature-optimization thing.
And of course the too-clever-by-half coyotes.
Really maybe it just comes down to "be wary of making changes that reduce resiliency."
I was hoping to come up with something cohesive with this comment, but really I guess I just agree with what you say.
And I think there are a bunch of people sort of circling around the same idea, which I don't think we've really quite landed on a precise definition of, just as you say.
Thank you for this link. I'm halfway through that article and will probably read every single one on that website.
He uses the analogy of 'too clever by half' to exemplify his idea that 'financial innovation is always wrong'.
Nothing could be further from the truth. Insurance products have changed the world just as much as any technical innovation.
Mortgage-backed securities are not a bad thing, far from it; they allow more efficient use of capital by having 'saving Germans and Japanese' invest their money where they otherwise would not be able to.
The problem in the 2008 crash was soft systemic corruption and, un-ironically, a lack of fragility (one bank goes down and takes the rest down like dominoes) - not necessarily the securities themselves, for which he didn't actually provide any basis for his negative assertion.
Efficiency is usually how we gain productivity and it's borderline absurd to say there is inherently something wrong with it on the whole. Like anything 'it depends'.
If you can have a software algorithm outperform 100 analysts on weather predictions for your fleet of drivers ... that's probably efficient. But cutting down operating margins so that any bump in the economy will leave you flat is maybe 'over optimisation'.
I don't really have a horse in this race, but I think you're misreading the article.
He says: "Every truly disruptive discovery or innovation in history is the work of coyotes. It’s always the non-domesticated schemers who come up with the Idea That Changes Things. We all know the type. Many of the readers of this note ARE the type."
That's not a criticism, that's a point of praise.
He then follows it up immediately by saying: "Financial innovation is no exception. And this is Reason #1 why financial innovation ALWAYS ends in tears, because coyotes are too clever by half. They figure out a brilliant way to win at the mini-game that they’re immersed in, and they ignore the meta-game. Eventually the meta-game blows up on them, and they’re toast."
That isn't saying it's a bad thing, it's saying that the people who come up with the new ideas lose sight of the broader picture and get taken out by "the thieving raccoons" and the State.
He's saying "the coyotes" lose sight of the broader picture, just like in the famed XKCD where the "too-clever-by-half" computer person encrypts all their data, and forgets that the thug who is going to come looking for it will just beat the piss out of them with a wrench until they turn over the key.
The core nugget of the article, in my opinion, is exactly the "meta-game is what always gets you" aspect.
It's the same thing that NN Taleb refers to as "2nd order effects."
Could be a Chesterton’s Fence scenario
Instead of bemoaning efficiency, it’d be interesting to reward/value redundancy and antifragility, at least at the system level.
I think this could mean trust busting, regulation, and general cultural shifts.
There was zero redundancy versus leaving the directory open so they could open the next file (or using the application's "Open File" dialog).
That is a perfect example of wasteful motion (in their case due to a poor mental model of how computers worked, as I learned through later discussions) that could be simplified significantly without loss of quality or redundancy in the system.
Contrast this with: The surgical office called me this morning and stated, "The surgery is for a ganglion cyst on your left wrist." Which I confirmed. When I go in on Tuesday for the surgery this will be repeated, and a mark will be made on the area to be cut open (though in this case it'd be really hard to screw up and open the right wrist, as there is no, quite visible, cyst there). That is useful redundancy of the sort you describe. Removing any step (the initial visit a week ago, the call today, the check when I arrive, the mark on the wrist) and you increase the risk of error.
Punch line - Sausages coming from a new modern factory didn't taste the same. The new, more efficient building removed a long transportation step where the partially finished sausages picked up flavors and scents coming from different parts of the factory. They had to create a new process to manually add those flavors that they were accidentally getting for free from the old factory layout.
It would be good to have proof of a Chesterton's Fence analysis to say "yes, we know the Y & Z purposes and have analyzed the cost/benefit of removing them and the populations/systems impacted" - would this be an impact analysis?
In other scenarios, the process fails if any of the steps fail. In that case, redundancy is less stable, and you can improve both stability and efficiency by eliminating unnecessary steps.
In either case, there may be other considerations involved as well (flexibility, visibility, recoverability...) but sometimes we just didn't see a better way to do something.
This type of example exists in all industries. For example, finding a new alloy that has strictly superior properties across all dimensions for a specific use case. Or upgrading mail delivery routes using better pathfinding algorithms. Etc.
Hopefully not nitpicking too much: it's a win without _many_ tradeoffs. E.g., in the usual places where you'd accidentally get an O(n^2) operation rather than O(n log(n)), the O(n^2) operation is constant-space. In a sufficiently anomalous computing environment with a low enough priority on fast results you might still consciously opt for the O(n^2) solution.
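To make that space/time tradeoff concrete, here is a toy sketch (my own example, not from the thread): duplicate detection done with an O(n^2) nested loop that needs no extra memory, versus an O(n log n) sort that allocates a copy.

```python
def has_duplicates_quadratic(xs):
    # O(n^2) comparisons, but O(1) extra space.
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] == xs[j]:
                return True
    return False

def has_duplicates_nlogn(xs):
    # O(n log n) time, but allocates an O(n) sorted copy.
    s = sorted(xs)
    return any(a == b for a, b in zip(s, s[1:]))
```

On a memory-starved device with small inputs, the quadratic version can be the conscious choice.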
At least in the case of code, this isn't true. The variability comes in terms of change to the system, rather than the running of the system. i.e., if I simplify a process to be less modular and more monolithic, making it more efficient, that also makes it more purpose-built and less flexible. The "risk" increases of running up against a change that needs to be made but is intractably onerous. There's always a tradeoff.
> Fisher's Fundamental Theorem: The better adapted a system is to a particular environment, the less adaptable it is to new environments. -- Gerald Weinberg, "The Psychology of Computer Programming"
It's something everyone should consider in making critical design decisions. Your adaptable, modular system has some risks (particularly in terms of meeting performance targets, increased cost due to increased complexity), but the monolithic system has its own risks (less adaptable to changing requirements, potentially more fragile against attack or damage). Which you choose depends on many variables including your risk profile and anticipated need for change in the future.
If we consider any possible solution, we can obviously imagine adding a completely spurious detail.
A pure trade-off between efficiency and stability would imply that, were I already running the efficient version, we could buy stability by switching to the less efficient code.
* In game theory, "7 hospital beds" weakly dominates "8 hospital beds". But (x') strictly dominates (x, y, z). This is exactly what Pareto Optimality is about. Though perhaps a more colloquial term would be useful here.
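The dominance relations above can be written down directly; a minimal sketch, assuming each option is a tuple of scores where higher is better on every dimension:

```python
def weakly_dominates(a, b):
    # a is at least as good as b on every dimension.
    return all(x >= y for x, y in zip(a, b))

def strictly_dominates(a, b):
    # Weakly dominates, and strictly better on at least one dimension.
    return weakly_dominates(a, b) and any(x > y for x, y in zip(a, b))
```

An option is Pareto-optimal when no other option strictly dominates it.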
Efficiency produces the same output with less input.
Productivity produces more output with the same input.
So efficiency is a measure of input to target output, and productivity is a measure of output to target input.
To make a process more efficient, you figure out how to get to some X output while using as little input as possible.
To make a process more productive, you figure out how given some Y input, you can maximize your output.
So if you fix some goal, say, "we want to be over capacity 1% of the time," then the most efficient way of doing that is probably to have the minimum number of beds that you need according to your predictions about utilization. But you can't really talk about efficiency when you're deciding what your goal is, e.g. whether you're okay being over capacity 10% of the time versus 1% of the time.
For example, how can I make all of today's deliveries with fewer delivery trucks?
Productivity, on the other hand, requires a fixed input goal, because you want to maximize output while sustaining that desired input.
For example, how can I deliver more products per day without increasing the size of my delivery truck fleet?
Oftentimes, improving one can improve the other, but not always. For example, someone could ask: how can we grow profit? Okay, one way is to be more efficient, and thus spend less money to make the same revenue. Alright, maybe we use cheaper materials, so we now produce the same amount and sell the same, but our margin has increased and we make more profit. Someone else could say: we need to be more productive. Okay, so you invest in better marketing and scale production to meet increased demand. You are as efficient as before, but more productive.
So I feel like, reducing the process from X,Y,Z to only X is about productivity. You still have the same number of employees, but since they don't need to waste time doing Y and Z anymore, they can produce more output. That said, you could choose to apply the gains to efficiency as well, for example, hey, because I eliminated Y and Z, I can now cut my workforce in half and deliver the same output.
Having ways to avoid an unanticipated repetition of a process, which would otherwise bung up the works for dependent parts of the system, can make the entire flow more efficient. See also 'drum-buffer-rope' from the Theory of Constraints.
For example, if your first statement was 'We only have seven beds because we tightened up our discharge workflow and that's all we need 99% of the time' and your second statement was 'We only have seven admins because we replaced steps X,Y,Z with just step X and that's all we need 99% of the time' they start to line up.
The other kind, the bad "efficiency", I would just call "cost-cutting measures", not efficiency improvements.
With kaizen you try to work with what you have. So if Bob is slow, the cost-cutting measure would be to fire him. The efficiency-improvement way would be to observe Bob and see what can be changed so you get more value without messing him up.
Consider the paradox of finding that a factory crew has no inputs—they are playing cards waiting for an order to come in—and yelling at them to go do other things around the shop like clean and assist other operations, rather than loafing. Or, for another solution to the problem, you might pre-order all the stuff and make sure that the team is always 100% loaded and never has the free capacity to play cards.
At first blush these improve superlativity, no? We are accomplishing everything that card-playing does but we are “faster, more accurate, and cheaper” if we are measuring, say, labor cost per part and the technician time averaged over the parts they worked on. Have we not just found a “novel technique” which is “just a better way of doing things?”
But staring at it for longer you may find yourself less sure. That’s what I mean when I say complex systems morph into each other. There are more subtle tradeoffs here. For example, when people feel free to loaf when they have no work, you can walk into the shop, ask who’s loafing and why, and see how you can improve their situation so that they again have proper work to do. There is an increase in latency when that shipment finally comes in and all the workers need to be summoned from across the floor to handle it again. There may be mental fatigue from having to context-switch too much, or from having to constantly work on just one thing with no breaks. Or maybe the teams that need whatever this team is producing cannot finish their work fast enough, so the inventory produced by this team slowly grows until it fills 50% of your factory floor, the floor you sized to what you need on 99% of days.
The point is that the greedy algorithm may fail. In a linear circuit, if you short out some resistor with a wire, you know that more current will flow afterwards. But in a nonlinear circuit, you no longer know this. In the absolute simplest case, the increase in current rapidly breaks a fuse and everything grinds to a halt. In more complicated cases you have a feedback loop, and the increased voltage from the short-circuit feeds back to the earlier stages to throttle the current coming through.
Same with weight loss. People think that they will eat fewer calories and they will therefore lose such-and-so amount of weight. Well, probably. But this is a complex system we are talking about. One of the first things that happens when you start burning the fat is that your body burns your muscle too. This is the same reason that you can't burn fat on your stomach by doing crunches, your system is sending the call out to your entire body that it needs to digest surplus material. The loss in muscle mass appears to be the primary culprit which kicks down your basal metabolic rate and you hit what weight-loss folks call a “wall” where you are literally cold all the time and wearing sweaters and feeling too cranky to exercise and all that, feedback mechanisms which will mean that if you keep eating that restricted amount of calories you won’t be losing any more weight unless you can “break through” it by keeping warm through exercising and thereby increasing your muscle mass back up to where it needs to be and so forth. It’s just that it’s a complex system and the greedy algorithm does not always work for such systems.
In your latter example, it could very well be the case that steps Y and Z had purposes you didn't take into account that makes the new process less efficient in some cases with respect to the target metric.
Either way, overoptimization and focus on specific metrics to the exclusion of others is a real problem. Circumstances change over time and high levels of optimization make processes more brittle and likely to fail when circumstances change.
[Y]ou would never say of him that he "broke out a sweat": but everything was allotted its own time and thought, as by a man of leisure - his way was unhurried, organized, vigorous, consistent in all.
I feel like I spend a lot of time rushing from one thing to the next, constantly questioning whether I'm spending time wisely. And then I end up accomplishing less because I lack focus in one area. I've instead been trying to relax, slow down, and take tasks one at a time until completion. I'd also recommend Cal Newport's book, Deep Work, on this.
As an extreme example; watching Schumacher at his peak perform during a qualifying lap or during race, in treacherous rainy conditions, while everyone was absolutely struggling, and him out front, half a lap ahead of everyone was like watching poetry in motion. You could tell he was very relaxed just by the way his hands operated the steering wheel, hitting the apex every time in a single motion, no twitching or tossing around the car.
It seemed he just had more time, as in the time had just slowed down for him compared to everyone else.
Edit: Typo; damn you Mac OS auto-correct!
We see it in obviously exaggerated forms in film, like the Matrix, but that’s based on real shit. The best baseball hitters describe seeing the pitch the same way.
Aurelius' adopted father was a consul three times, which is certainly not a stress-free job! But he apparently was able to keep cool by the way he approached his work.
Actually we can't discuss efficiency without making it clear what parameter we are optimizing, at the cost of what (if any) other parameters.
However, usually, if we reduce the time something takes not by cleverly eliminating, rearranging, or otherwise streamlining the steps, but rather by some brute-force method that requires more resources (more people, more equipment, more energy), it is hard to frame that as efficiency.
Neither is multitasking using three different devices at the same time efficient. Again in this example the author seems to be confusing rush with efficiency. I didn't make it past those first few incoherent lines, so I don't know whether this confusion persists into the rest of the article.
Calvin's Dad, sitting at his desk: "It used to be that if a client wanted something done in a week, it was considered a rush job and he'd be lucky to get it."
CD: "Now, with modems, faxes, and car phones, everyone wants everything instantly! Improved technology just increases expectations."
CD: "These machines don't make life easier - they make life more harassed."
Calvin, in the background: "Six minutes to microwave this?? Who's got that kind of time?!"
CD: "If we wanted more leisure, we'd invent machines that do things less efficiently."
Risk is made up of at least three components: what is the probability that something will happen? If it does happen, what is the impact? And how will you mitigate that impact?
For example, you may believe that a change to a website you are deploying has a low probability of taking the website offline. If it is taken offline, it may cost £X per hour in lost revenue, but you’ll leave the old version running on a standby server, so it only takes a few minutes to switch back. That’s a much more thorough understanding of one aspect of risk than “this rollout is low risk”. Once you have that understanding, it’s reasonable to discuss how to reduce the probability of an outage (better testing?), as well as how to reduce the impact (staged rollout?) or to speed up the fix if it were to happen (practise?).
In COVID terms, we should be discussing the impact of decisions in the light of future pandemics. Could we invest now in reusable PPE, so that next time we don’t have a global rush on the disposable stuff? Do we need to educate the public more readily about reducing disease transmission to reduce the likelihood of a pandemic in the first place? I’m not a doctor, so I have no idea on the specifics, but the likelihood of any given pandemic will always be low, so what is the impact of the decision if there is one and does that impact need to be mitigated? (even if it is less efficient to do so...)
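The website-rollout example above can be sketched as a quick expected-cost comparison; every number below is invented for illustration:

```python
def expected_outage_cost(p_outage, cost_per_hour, hours_down):
    # Expected cost = P(outage) * impact if it happens.
    return p_outage * cost_per_hour * hours_down

# Hypothetical figures: a standby server cuts recovery from 4 hours to ~6 minutes.
no_standby   = expected_outage_cost(0.05, 1000, 4.0)
with_standby = expected_outage_cost(0.05, 1000, 0.1)
```

Better testing attacks the probability term; the standby server attacks the impact term. Writing it out makes explicit which lever each mitigation pulls.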
Efficiency is a dimensionless ratio of useful energy out to energy in. Economics, as formulated, has little to say about efficiency and lots to say about preference relations on utility functions (the texts do tend to hand-wavingly waffle about how markets attain "efficiency" through hypothetically rational actors maximizing utility functions etc.; if you want a giggle, check out the "fundamental theorems of welfare").
I actually read the fine article, and didn't give up after the first couple of paragraphs.
And in paragraph the third is introduced friction. I dunno, maybe it's because I actually study science, but friction is a well defined thing and the coefficient of same is another _dimensionless_ variable. It seems every time economists want to incorporate a notion from science proper they go for the dimensionless stuff because that way they don't have to go through the whole tiresome rigmarole of ... dimensional analysis. It makes me feel like Tantacrul criticising UI's (check him out on youtube, he's both funny and informative).
Anyway, efficiency is not dangerous, efficiency will actually allow the survivors of the Anthropocene Disaster make it through the coming disaster. Slowing down is not a bad notion because it means most humans can spend more time thinking (quite efficient actually) and less time haring around the planet distracting themselves from the vapidity of their vanity. However life is not better because it is slower, it's better because humans appreciate what an extraordinary (literally and figuratively) opportunity it is to be alive.
As insurance policy against disaster stop listening to economists because they observably don't have a clue (they can't make accurate and meaningful predictions), instead study science, especially physics, because these meanings are measured against reality and the resulting predictions are highly reliable (TANSTAAFL < 2TD).
Is this a US thing? I've lived in three different European countries and nobody thinks this way. Efficiency and productivity are things I mostly just read about on HN.
I've worked for companies that saw overburdening people with responsibilities as "efficient". On paper it was.
They would rather bootstrap and grow with stability than pull a SoftBank and use/give massive capital injections in the hopes of efficient market domination.
The fact Palace Economies all died out doesn't mean a barter economy never existed, just that it wasn't stable enough to leave any isolated "time capsule cultures". Granted palace economy commerce isn't on a personal level unless you count the ruler who allocates everything.
Most of what the article talks about is making large systems more efficient for the benefit of the system at large. I agree, this makes individual components of the system more stressed and prone to catastrophic failure at any one point in the system. People within that kind of efficient system are pushed to their limits and breaking points.
However, I've found that making individual processes efficient reduces stress on the individuals involved in that process and allows for that slowing-down time.
The examples I can think of to back my points up:
Previously I worked at a job where the actual process was incredibly inefficient. I ended up working long hours and twice as hard as I needed to. By the time I left that job, I'd increased the efficiency of the process to the point where I was working reasonable hours, had some good downtime to relax or take care of other things I'd had to neglect before.
The overall system at the place though was fairly inefficient, which was a good thing. It meant we were usually a little ahead and could account for things that went wrong.
Another recent example at my current job, we were working with a person who tried to overhaul our inefficient, yet working system. Our processes were efficient enough that we always got what we needed to do done. The person we were working with tried to over engineer an 'efficient' schedule and system for us that in the end caused far too much friction and they ended up losing money and the business relationship between the company I work for and them ended.
The first word of the essay is "we," and it goes downhill from there.
>Seen in this light, at least some inefficiency is like an insurance policy. Think about your own situation. Every year that you don’t get into a car accident and your house doesn’t burn down and you stay healthy, you could think to yourself that you have ‘wasted’ your money on various pointless insurance products, and that you’d be financially better off without all those insurance premiums to pay.
This is a faulty claim. Traditional economic efficiency would say we _should_ stockpile medical supplies if it were more efficient for the markets in the long-term, which it would have been. The issue here is that _governments_ and experts didn't work together effectively (despite experts regularly noting the possibility of a pandemic, for example, Gates et al.) and the fact that our government generally isn't Keynesian. A quote (source: IMF https://www.imf.org/external/pubs/ft/fandd/2014/09/basics.ht...):
>Keynes argued that governments should solve problems in the short run rather than wait for market forces to fix things over the long run, because, as he wrote, “In the long run, we are all dead.”
As it relates to Global Warming, for example, we have various options. We could solve it by being "less efficient" (extracting less from the earth — and in fact, a Keynesian approach to that would be taxation to slow growth in harmful industries) but zoomed out, if we deal with global warming, we are more efficient over the long run. Moreover, we want to be _efficient_ in our development of green technologies.
The trouble isn't so much efficiency, it's zeroing in on making particular processes efficient to the detriment of the whole across time and in the present moment.
Pace and efficiency are definitely linked, but I don't necessarily think faster = more efficient. And there are folks mentioning stability, also definitely linked here, but I think max efficiency would lean more into stability than pace.
Linking pace with efficiency also seems to create the idea that "faster will be the winner" which, you know, that's a whole thing, introducing competition in an arguably healthy way.
I tend to find that emotion usually comes before logic. You build something that could be considered efficient, but a feeling says "This could be more efficient", and then logic jumps in and does the work.
Just some stream of consciousness writing here, food for thought maybe, or extremely poor quality ideas!
The other problem with prizing efficiency is that we often optimize for efficiency under an incorrect model of the situation (model-reality-mismatch) — underweighting the likelihood of upsetting possibilities. That’s essentially what the idea of “black swan”/“fat tails” is about. It’s not really about statistics, unless you’re using a flawed and over-simplified statistical model to ground your metric of efficiency.
IMHO the same problem underlies the approach/framework of behavioral economics. In many situations, observed human behavior might be “irrational” only because your model of reality is naively simplistic. It shouldn’t be surprising that a satisficing approach works better in reality than an optimizing approach; if it does sound surprising, consider that your intuitions might be biased by an incorrect model of reality!
(Portfolio theory is why I am in favour of social safety nets. A worker who wishes for a relatively safe risk/return profile will voluntarily choose a much higher-risk job when they can combine it with a low-risk backstop.)
Then I had a medical condition that required me to slow down, changing my lifestyle completely.
Only in retrospect do I realize my situation was extremely fragile.
And the more machines do laundry, the more effective and creative people can choose to be. Or they can choose to watch TV.
That's a myth propagated by people with a stake in the pie. Nature shows us that you only need to be hyper-efficient and in a constant arms race if you're competing for the same resources.
Otherwise, you can adopt an opportunistic survival strategy that trades efficiency for variety and a diversity of useable resources.
Of course, efficiency is the only option when there are companies trying to grab all the resources available, as in our current business setting. But that's definitely not a healthy environment.
I have a teenage stepdaughter doing online schooling. I had been working remote for years now. It took a long time to work out how things work with my wife -- she is a stay-at-home mom. Work and family life intrude on each other, and even more so this year with the pandemic.
I used to burn out big time; now I burn out in smaller doses. In the last burnout cycle, my wife miscarried, and that sent both of us spinning. So I am not entirely sure at this point how much of this is like when I functioned at peak.
The things that I have been doing:
- I try to have a few tasks I think I can accomplish and try to do them. But because the family life can be so disruptive, I've learned that I'm not really going to be as productive as I am at the peak, and just be ok with it.
- I try not to be a blocker for either my teammates or for my family.
- I have a garden. I enjoy it. I mainly work in it in the morning and evening. Sometimes I am exhausted (especially in Phoenix summer hell season). In general, though, it is recharging.
- I practice neigong, and I finally got to the point where I can reliably cycle something (which I will not get into technical details about unless you're also a practitioner. It is a rabbit hole). But suffice it to say, it rebalances the vital energy being distributed among my physical, emotional, and mental states. Part of the burnout was exhausting everything mentally, repeatedly, until there is just nothing left.
- When I push through something, it is for small things. Those small things might chain together. Past a certain point, it is better to go for a walk.
- I can tell when my brain is just exhausted. It is better to take a nap. I'll warn my wife that I'm not really present during grocery shopping. She doesn't always like that (it is one of the times we go out to do something together that is not in the house). I might take MCT with some mixed nuts.
- I don't use caffeine -- no coffee or tea. The closest I get is rooibos, and even its 1mg of caffeine can affect me. Fortunately, the caffeine content of chocolate does not affect me as much.
The part I am working out is how to live in a way that follows the permaculture ethics -- care of earth, care of people, and fair share. It is the last of these that has made me realize that the ambition of unbounded growth, whether for society, or for myself, is simply impractical. It may be strange to say it on a forum that was created for people who were or are interested in doing a Ycombinator startup and getting rich from winning the startup lottery ... but I've come to realize that it is not how I want to live or relate to the world.
I used to practice minimalism ... but now I realize that is just a stepping stone. It's investing in regenerative and resilient systems. The garden is part of my long-term effort to create a perennial food forest on my property. There is a lot of tech and "shiny" that I realize I don't really need to get.
And while I know that recently, I would keep comparing other people's cars as status / wealth symbols, it is ultimately meaningless. And fortunately, I know what I need to do to get my mind to cease doing that.
This is a very unusual way for me to approach this subject. I usually start out with a radical position: inequality and the wealth gap is _intrinsic_ to modern civilizations, and it rests upon the notion that wealth is something to be extracted from the earth, and access controlled. "Efficiency" is how you maximize profit, as if that is the only way to optimize things. Therefore, there will never be any system of economics and free market (or command economy) that will ever take the well-being of the earth and the people into account.
From this perspective, I think it is insanity. Why would anyone want to participate in a system where there will be guaranteed losers? (Because the few exceptions give the false hope that you might be the exception).
It therefore follows that, if I want to participate in a different kind of a "game", then I will have to live my life by those other principles. And so, I'm trying that with my current work. To give an receive my fair share. To reinvest capital gained from extractive wealth and convert it to regenerative wealth. To not tie my personal sense of self-worth into status or wealth symbols.
There was a major one in the 20th century. It did not go well.
Making something inefficient doesn't magically increase preparedness.
Many of the arguments are non sequiturs. The last example is particularly bad. On an icy road, it doesn't matter if a car gets 100 mpg or 10 mpg - speed and traction are far more important factors.
Take the aluminum can. Beverage companies heavily innovated, reduced aluminum use, saved resources, emissions, and consumers get a win with lighter cans and less environmental pollution and carbon emission.
This is what matters right now.
when making decisions:
"we should be asking which option will give us good-enough results under the widest range of future states of the world"
doesn't that lead you right back to the struggle for the most efficient way to determine the optimal "good-enough" answer?
Also, the article talks about efficiency but efficiency of what exactly? What if you're very efficient at a maintaining a relaxed and happy lifestyle?
* "Upgrade Attack" is more efficient. It allows you to beat the game faster. But since your HP is low, you can't afford to make mistakes (get hit).
* "Upgrade HP" is more robust. It increases your damage-buffer, which allows you to absorb more mistakes without dying. But if you make zero mistakes, you won't beat the game as quickly as if you'd upgraded Attack.
IRL the dichotomy is often "income vs wealth", "velocity vs displacement", "throughput vs latency", "strength vs endurance", etc.
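A toy model of that dichotomy (all numbers invented), assuming the player takes a fixed 10 damage per mistake:

```python
def turns_to_win(boss_hp, attack):
    # Ceiling division: turns needed to deplete the boss's HP.
    return -(-boss_hp // attack)

def survives(player_hp, mistakes, damage_per_mistake=10):
    # You only lose if accumulated mistake damage reaches your HP.
    return mistakes * damage_per_mistake < player_hp

# Attack build: 30 HP, 20 attack. HP build: 60 HP, 10 attack.
# Against a 100 HP boss, the attack build wins in 5 turns but dies on
# the 3rd mistake; the HP build needs 10 turns but absorbs 5 mistakes.
```

The efficient build minimizes time-to-goal; the robust build maximizes the error budget.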
My problem solving algorithm is to look up to 7 steps ahead, along up to 7 different branches in the tree of possibilities (I'd like to examine more choices but my brain can't hold them all). In practice this is always at least 2 and 2, so this task or its alternative, and then looking at the next step along each branch. Then I prune the tree, selecting for the steps that work across as many branches as possible, without rewrites or duplication. I'm often working on a common dependency several steps out that other people aren't aware of yet. This could be thought of as a breadth-first search or parallel search.
Unfortunately on an individual level, the tech industry seems to be going the opposite direction, exploring deeply down very long and linear branches of the tree. Coders are expected to implement an idea and be ready to scrap it without hesitation, iterating over and over again until finished, rather than simulating the outcome in their minds. More of a depth-first, serial search.
I think the split is due to programming moving from an individual endeavor towards teams. We can't see inside each other's heads, so we plot the course and divvy up steps among the members. The cost of this is that serial searches will almost always turn up suboptimal results, because the resources to do broader searches get withheld so we miss elegant solutions and even perceive them as more costly.
To answer your question, the serial approach works best for individual tasks, but misses the big-picture view that might have let us see that the task didn't need to be done in the first place. So my vote would be to see more quiet reflection and less spinning and churning. But I'm mostly outvoted, so it probably comes down to personality, with junior developers being more sought-after than senior developers like me who might be perceived as reserved/overcautious/conservative.
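The look-ahead described above is roughly a beam search: expand the frontier, keep only the best few branches, repeat to a fixed depth. A minimal sketch (the example tree, scoring function, and widths are placeholders):

```python
def beam_search(root, children, score, width=7, depth=7):
    # Keep up to `width` candidate paths; extend each up to `depth` plies.
    frontier = [[root]]
    for _ in range(depth):
        expanded = [path + [c] for path in frontier for c in children(path[-1])]
        if not expanded:
            break  # every surviving path hit a leaf
        frontier = sorted(expanded, key=score, reverse=True)[:width]
    return frontier[0]  # best path found within the beam

# Example: a small binary tree where node n has children 2n and 2n+1 until n >= 8.
tree = lambda n: [2 * n, 2 * n + 1] if n < 8 else []
best = beam_search(1, tree, sum)
```

This is the breadth-leaning compromise: wider than a greedy walk, far cheaper than exhaustive search, though it can still miss the optimum when the beam prunes a branch that would have paid off later.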
Compare junk food: you get full quickly, but the nutrition is usually bad. You can instead choose the experience of slowly cooking a nutritious meal. Eating slow-cooked food with your family and friends is often better. Stress usually comes with running around too fast, so you tend to consume unhealthy food, and after a while you gain weight. I am not saying consuming junk food, or running fast, is bad every now and then, but doing so all the time, you might miss slower, better experiences...
Same goes for buying local vs buying cheap stuff which does not last long. Often buying local is better for the environment.
In my book, you are efficient if you did 8 hours worth of work in 4, did it right on the first try, went home early, and had enough mental capacity left for hobbies and family.
60-hour weeks are not efficient. Busywork is not efficient. So-called „productivity” software, more often than not, is not either.
"All things being equal" efficiency is always good. (e.g. if an identical task can be done for 1/2 the energy - good)
"All things being equal" slowing down does not necessarily make life better. (Cleaning up after an oil spill is something we probably want fast.)
All things are not equal, and you have to make a series of tradeoffs.
How much risk are you willing to take?
How much enjoyment do you get out of a task?
The author is of course correct that optimizing efficiency at the sake of all else is typically a problem. But to misrepresent efficiency (the key to our modern age) is not quite fair to the term.
But if you really knew each and every component, and all the criteria that may be applied to them, you might improve efficiency. Good luck with that.
(Not the chat app)
„in the name of taking it slow“
There’s a big difference between being productive and being busy.
The premise is total bullshit. No one thinks multitasking is good.
Barry Schwartz consistently doesn't get it, but he's good at presenting "unpopular opinion"-style articles.
People say multi-tasking is great! But it's not! Upvote if you agree.
Efficiency is why your baby doesn't get stuck in the womb killing two people.
Bot sniping on GrubHub is scary; this is an efficiency that actually worries me. But it's not within this article's scope.
If you have extraordinary "good luck", you may come out ahead in the lottery. And if you have extraordinary "bad luck" you may come out ahead in insurance.
Insurance of course has a more practical value, and a much better return on investment (even if it averages negative returns)
Only if you put zero value on the peace of mind that comes with having insurance against risks that, if they happened, would bankrupt you. But if you put zero value on that, you wouldn't buy insurance.
Only a fraction of the consumers come out ahead if we only consider monetary value. But the whole reason insurance and the lottery exist in the first place is that there are other kinds of value besides monetary value. In the case of insurance, it's peace of mind. In the case of the lottery, it's whatever entertainment value comes from being able to visualize yourself winning, even if your chances of actually doing so are tiny.
Only if you put zero value into seeing your savings go up with the amount you saved on insurance.
the decision of purchasing insurance or not depends more on how many resources or alternatives you have, and if you can easily smooth over an uninsured bad event so that it doesn't wipe you out. for individuals with limited resources who will be wiped out if uninsured, it is very rational to buy insurance, including paying a profit margin to the insurer on top.
another way of thinking about insurance is a way for groups of people to pool and share risk -- if they don't get hit by correlated bad luck then they all get to smooth out their bad outcomes.
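The pooling idea above can be sketched with a toy simulation (the loss probability and loss size below are made-up assumptions, purely for illustration): any one uninsured individual can be hit with the full loss, while each member of a large pool pays only a small, stable share.

```python
import random

random.seed(0)

# Made-up parameters: each of 10,000 people independently suffers a
# $50,000 loss with 2% probability in a given year.
people, p_loss, loss = 10_000, 0.02, 50_000

losses = [loss if random.random() < p_loss else 0 for _ in range(people)]

# Uninsured: the unlucky individuals bear the full $50,000 themselves.
worst_individual = max(losses)

# Pooled: everyone pays an equal share of the group's total losses.
pooled_share = sum(losses) / people

print(worst_individual)        # the worst uninsured outcome: 50000
print(round(pooled_share))     # close to the expected 1000 per person
```

With a large pool and uncorrelated losses, the per-person share hovers near the expected loss; correlated bad luck (a hurricane, a pandemic) is exactly the case where this smoothing breaks down.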
Usually, individuals with limited resources can't really be wiped out: there is not much to wipe out.
Here is my perspective:
I consider myself a typical individual with limited resources: I mostly live on my salary (software dev), I don't really have major assets (a $20k car bought for cash, renting a house), and I have some savings (~$30k cash). And I only have minimal car liability insurance, as required by law.
So, what kind of "wipe out" event can happen to me / my family? And what kind of insurance do I need to get for that?
Let's say I have an auto accident and I'm liable for a million dollars. The likelihood of such an event is low. It would probably result in bankruptcy, but that's not much worse than constantly paying extra for insurance. So there is no "wiping out" in this seemingly catastrophic event.
So what I'm saying is that "individuals with limited resources" often don't have much to wipe out, so they don't have to worry about insurance.
I find that ironic, because we actually have medical insurance and unemployment insurance here in the USA, yet we often see "individuals with limited resources" wiped out for the reasons you mentioned.
Consider the same accident where you fracture your spine, leaving you paralyzed from the neck down. Insurance (at least in the US) affords you access to medical care, and access to follow-up therapy and other resources to (hopefully) recover completely.
If you have limited resources and no insurance, the consequences are a lot worse than bankruptcy.
If you're interested in equality and justice, you should be in favor of insurance (and insurance-like government schemes like social security) and against lotteries.
On the other hand, the good thing about lotteries is that they can be used to raise money for a good cause with entirely voluntary contributions, and even for most people who don't win, it can still be considered pretty cheap entertainment.
I don't (generally) play the lottery, and I don't (generally) participate in involuntary insurance schemes.
I don't buy pet insurance, I don't buy extended warranties; I generally try to avoid paying a third party's payroll, G&A, real estate, taxes, and profits, all on top of the actual expenses I incur that they (hopefully) pay for, after much paperwork.
That said, I sure don't carry just the bare minimums in auto or health or homeowners insurance, because I cannot afford a multi-hundred-thousand-dollar unexpected expense.
Right, which demonstrates that the expected value of insurance can be positive in terms of utility even if it's negative in dollars. For most individuals, a $100k loss is more than 100 times as bad as a $1020 premium. Meanwhile the insurance provider can absorb individual losses and still come out ahead overall.
If the loss from not having insurance is bearable, you're better off not buying the insurance. If it's not, it's worth the money even if you'd lose out on average. That's why phone or car insurance (collision and comprehensive) is a waste of money for the average person, but health or homeowners insurance isn't.
Oh, yeah, and then there are legal requirements that you have car insurance. So no, I don't buy the idea that car insurance is a waste of money.
Phone insurance? There I agree with you.
(If there's an outstanding loan on the car, the lender requires collision and comprehensive to cover their asset. Something to consider when financing a car - it's not just the car payment, but also the additional insurance you have to carry)
Used cars usually are not "tens of thousands of dollars". If one can't afford even a few thousand dollars in a pinch to buy a used car to replace one's current ride, then one probably needs collision and comprehensive too.
Why is it an "unbearable loss"?
There is a saying about that:
If you owe the bank $100, that's your problem. If you owe the bank $100 million, that's the bank's problem.
That may seem unethical, but if you think about it, it's the rich people living in our society who need to be careful to take care of their risks.
Of course, if you have assets to lose, you need to protect them (by buying insurance), but if you are an "individual with limited resources," it's not much different with or without insurance.
What I often hear is that poor people (with limited resources) need to buy extra insurance to protect rich people from their risks. I disagree.
You have a good point, though, that if you have essentially no assets to go after, liability insurance isn't worth much to you. This is why it is mandated for auto insurance (and why many places have underinsured-motorist coverage, too).
It is akin to loans, essentially: the one taking them may be giving profits to others, but that doesn't mean it isn't cost-effective.
But for the few who don't lose, it's worth it. That applies to both insurance and the lottery.
The difference, as has already been pointed out, is that insurance prevents a bad scenario, whereas the lottery sometimes results in a good scenario.