
Getting Ahead by Being Inefficient - yarapavan
https://fs.blog/2019/01/getting-ahead-inefficient/
======
wenc
Most of the commenters here get it.

Efficiency is an outcome of optimization, and optimization is a form of
specialization with respect to the environment/assumptions. With any kind of
specialization there's a trade-off: if the environment or set of assumptions
changes, one can be worse off than if one had not optimized at all.

Also, optimization at the wrong level of detail/abstraction can be costly. One
example is in auto manufacturing.

U.S. car manufacturers have traditionally tended to spec tight tolerances at
the component level, with the assumption that everything will fit when
assembled. This can be very expensive to get right, and the assumption of
perfect final fit is not always borne out.

Japanese auto manufacturers, however, despite their reputation for
perfectionism, have tended to be looser with component-level tolerances but
paid more attention to assembly tolerances (functional build [1]). They
understood which tolerances needed to be tight and which mattered less in the
final build. They embraced natural imperfections and made sure the rest of
the system accepted the looser part tolerances. It turns out this led to
higher overall quality at lower cost. (Detroit has since embraced functional
build.)

(The Japanese approach is analogous to focusing on integration testing as
opposed to exhaustive unit testing)
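
To make the statistics concrete, here is a minimal Monte Carlo tolerance
stack-up sketch (all numbers are hypothetical, not taken from the
functional-build literature): loose parts plus assembly-level control can beat
tight parts assembled blindly.

```python
import random

# Hypothetical stack-up: an aperture built from 4 stamped panels.
def assembly_gap(part_tol_mm, fixture_absorbs=0.0, n_parts=4):
    # Sum of independent uniform part errors; the assembly fixture
    # compensates for some fraction of the total (a stand-in for
    # measuring and adjusting at the assembly level).
    total = sum(random.uniform(-part_tol_mm, part_tol_mm)
                for _ in range(n_parts))
    return total * (1.0 - fixture_absorbs)

trials = 100_000
tight = [abs(assembly_gap(0.1)) for _ in range(trials)]
loose = [abs(assembly_gap(0.3, fixture_absorbs=0.8)) for _ in range(trials)]

print(f"tight parts, no assembly control:   {sum(tight)/trials:.3f} mm mean error")
print(f"loose parts, assembly-level control: {sum(loose)/trials:.3f} mm mean error")
```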

[1] Functional Build [https://www.adandp.media/articles/building-better-
vehicles-v...](https://www.adandp.media/articles/building-better-vehicles-via-
functional-build)

~~~
hathawsh
Extending your example: I've heard from mechanics that looser tolerances also
lead to cars that age much better as the components change in shape due to
wear and corrosion. Expensive cars have expensively machined parts to avoid
the need for gaskets, while ordinary cars use gaskets to avoid the need for
expensively machined parts. This trade-off is counter-intuitive for most car
buyers.

~~~
castle-bravo
I too watch Scotty Kilmer.

[https://m.youtube.com/user/scottykilmer](https://m.youtube.com/user/scottykilmer)

~~~
lsc
That guy is way more entertaining than he has any right to be. His voice, the
way he waves his hands around when he's talking, his love for ancient Toyotas:
he just has a great personality for that kind of thing.

------
kareemm
Tom DeMarco wrote Slack: Getting Past Burnout, Busywork, and the Myth of Total
Efficiency [1] back in 2002. It talks about how keeping slack in human systems
makes them more resilient and humane. I first heard about it from Joel
Spolsky, who started Trello and Stack Overflow [2].

Slack is a great read for those responsible for managing software teams, and
it illustrates that there are no new ideas under the sun, just repackaged
ones.

[1] [https://www.amazon.com/Slack-Getting-Burnout-Busywork-
Effici...](https://www.amazon.com/Slack-Getting-Burnout-Busywork-
Efficiency/dp/0767907698)

[2] [https://www.joelonsoftware.com/2005/11/22/reading-list-
fog-c...](https://www.joelonsoftware.com/2005/11/22/reading-list-fog-creek-
software-management-training-program/)

~~~
etaerc
Oh boy, the rabbit hole. Check this out:
[https://en.wikipedia.org/wiki/Slackware](https://en.wikipedia.org/wiki/Slackware)

Yep, a Linux distro based on the idea of Slack, developed in '93.

And just as there's the Spaghetti Monster, there's even a religion around
Slack:
[https://en.wikipedia.org/wiki/Church_of_the_SubGenius](https://en.wikipedia.org/wiki/Church_of_the_SubGenius)

Founded in '79. A short quote from Wikipedia: "the group holds that the
quality of "Slack" is of utmost importance—it is never clearly defined"

~~~
yellowapple
I've been using Slackware as my primary OS for a few years now (even my work
computer runs it). In retrospect, Slackware fits the article's mentality
remarkably well: instead of the approach taken by most Linux distros (where
everything is neatly and tightly integrated with a dependency-resolving
package manager, a dependency-resolving init system, and all that jazz), I
work with a system where, sure, maybe some of the pieces don't fit together
perfectly, but they're readily adaptable to all sorts of different
situations. It's a less fragile system precisely because it's built around
accepting the components for what they are instead of trying to patch them to
"perfection". And of course, the conservative component choices certainly
help, too.

------
bluGill
Efficiency is a sub-goal of getting things done. If you cannot get the job
done, you are out of the running no matter how efficient you are at your
failure.

Henry Ford got more and more efficient at making the Model T. At first his
customers appreciated the lower prices he was able to deliver. However, as
time went on, competitors who were less efficient at building any one car
were able to build new cars with features like electric start that were worth
paying extra for. The Model T could never get those, because the assembly
line was too efficient to allow the extra steps for new parts; the jigs had
been optimized to the point where they couldn't be changed without a large
set of other changes that his lines couldn't handle.

~~~
zaphirplane
Is that a fact or an analysis? Why is it harder to add a step to the line
than to train _all_ the car assemblers to install the electric starter?
That's the whole assembly-line innovation, and it's still going strong today.

~~~
bluGill
Various books report it, but I don't know how true it is (given the politics
at Ford, it is probably impossible to say).

Ford optimized his assembly line. To add a step, you would need to physically
move all the stations on the line, and these were bolted (or even welded) in
position. You would also have to move over all the sublines feeding the line
after that step. Before you could even start, you would need to expand the
building, because everything was designed to fit exactly what the Model T
needed. Sure, you could add a step, but only at great expense (and you would
have to deal with the politics: CEO Henry Ford was against changes to his
car).

Today all manufacturers know their assembly lines need to last longer than
the cars they build. They design slack in from the beginning so that an
additional station can be added somewhere, and they make sure operations can
be reconfigured easily. That isn't to say an assembly line can produce
anything, just that the line will be built with enough room that minor
changes can be made from year to year. Every few years they will tear an
assembly line down and rebuild it over a month to make a whole new type of
vehicle.

Note that Honda runs one line for everything, while Ford has assembly lines
dedicated to just the F-150. There are trade-offs to flexibility as well.
Honda cannot make large trucks because they wouldn't fit in its stations, and
Honda is limited in how many models it makes because eventually it runs out
of space for the jigs at each station. By contrast, Ford doesn't have to pay
for space to store jigs that aren't being used at the moment, or for the
complexity of getting the right one into place as needed. These are complex
trade-offs, both companies have made their bets, and both have been
successful.

------
ergothus
I remember in the late 90s when the place I worked at was all about JIT. Not
compiling, but physical stock: having the items they needed arrive just as
they were needed. This was the physical plant for a large university campus,
so considerable money was involved in storing and shipping goods.

The execs were delighted with the idea of saving money.

The grizzled workers just raised an eyebrow, made a few passive-aggressive
comments about how, in their day, having the last box of something used
because it was needed with no backup was considered a disaster waiting to
happen, then shrugged and did as they were asked.

I left before I saw how it played out, but I suspect there was some good
trimming of backlogs and of stored materials that really could afford to
wait, but far too many cases where JIT wasn't IT enough. It's like creating a
breeding ground for Black Swans.

The article made me think of that: high efficiency all the time means little
to no buffer for unusual needs.

~~~
matchagaucho
Shouldn't an _efficient_ JIT process define an ideal inventory quantity level
that triggers restocking?

"Qty=1" is definitely inefficient.

~~~
FakeComments
My local grocery store switched to JIT stocking of shelves: they’re now
perennially out of 1 in 10 basics, and I just shop online. Amazon Fresh can
routinely be out of things too, but at least they don’t waste my time doing
it.

The only value in a grocery store was that it was a warehouse of food stocks
that smoothed out the irregularities suppliers face. If I have to deal with
supplier problems because they JIT stock, I may as well just get deliveries
directly from suppliers.

You can’t “JIT” when your entire value is supply smoothing, because you’re
increasing the failure modes of your core business for tangential benefits.

People keep showing me math that says it works, and places keep trying it, but
I’ve yet to see it do anything but significantly damage their core business
for questionable benefit.

~~~
aikinai
It can be done well as seen with Japanese convenience stores. They keep no
stock but never seem to be out of anything.

I’m not an expert but I think it’s a combination of insanely good modeling for
each specific store and a super efficient and fast restocking pipeline.

~~~
pfranz
I can't seem to find a reference, but I remember hearing years ago about
someone trying to adopt the Japanese convenience-store stocking strategy.
Apparently, the small stores have good relationships with their peers and
exchange goods as needed; the distributed network compensates for the lack of
deep stock.

Similarly, when I worked for Pizza Hut years ago, about once a month we'd run
low on or out of something, phone up one or two nearby stores, and send a
driver out to make the swap.

------
lazyjones
Inefficiency isn't the same as adaptability. It has no intrinsic value.
Inefficient people aren't automatically great generalists, and inefficient
companies aren't better suited to adapt to new market trends.

This is a bad case of one-dimensional thinking.

All the anecdotes in this thread are about optimizing for a particular outcome
and failing to anticipate variations in it. Even the human body stores fat not
because it's an inefficient machine, but because it has adapted to (sometimes
rare) situations of low food availability.

So let's not celebrate slacking by claiming it has some intrinsic value.

~~~
marcus_holmes
OK, they're not the same things, but there is a trade-off there.

Large companies in stable markets reward efficiency. Staff KPIs are all about
reducing costs, because that's where the profit growth is (the market is
stable and so is revenue).

But efficiency is achieved by streamlining processes, and that streamlining
comes almost always at the expense of adaptability. The processes become
optimised but brittle and resistant to change. Which is obvious if you think
about it: the process is designed to do one thing incredibly well, and the
staff become adapted to that one process. Changing the process (or the staff)
is expensive by definition, because if a change made things cheaper it would
be an optimisation and would have happened already. "Cutting out slack" does
equate to making the process less adaptable.

Then something changes and it's hard, if not impossible, to get the process
to change and the people to think differently. Managers whose bonuses are
tied to their KPIs are reluctant to make necessary but costly changes to
their departments if those KPIs are all linked to efficiency. This is why
large organisations do their innovation thinking in smaller, separate
"skunkworks" or "labs" units.

It's also why large companies are incredibly efficient at producing profits
from a stable market, but get out-competed instantly by smaller, more
adaptable, less efficient startups.

~~~
lazyjones
> _It's also why large companies are incredibly efficient at producing
> profits from a stable market, but get out-competed instantly by smaller,
> more adaptable, less efficient startups._

Because if they're worth their money, they can afford to be. Large
corporations routinely buy small competitors with the profits made from their
streamlined operations.

~~~
marcus_holmes
true. And then immediately destroy them by trying to make them efficient ;)

------
jiveturkey
The example given in TFA is poor, as is (IMHO) the entire thesis. The actors
change their behavior on an evolutionary timescale, whereas the environment
sees periodic rapid events ("punctuated equilibrium") to which no individual,
specialized actor can react quickly enough.

Whereas we, as SWEs or ops or devops or business managers, can react to a
changing environment. When you are in a highly competitive environment, it
pays to be more efficient. Unlike the bird, whose beak (e.g.) may be
specialized to reach a specific berry on a specific bush and is immutable, we
can change our beak as needs demand.

The article isn't even very good for specialized industry, say making metal
coils. Your machinery has to be specialized for it. You don't have a machine
that can make both coils and tubes. Because you're able to make very, very
specialized coil winding machines, you corner the market on coils. Then the
coil market collapses (thanks for nothing, disruptors!). OK, your business
fails. SO WHAT! You move on to make specialized tube flatteners or some such.

A better thesis would have been along the lines of not being static.
Specialization is good.

------
andmarios
I would take a different approach to reach a similar conclusion.

Assign a task to a junior dev.

- Efficient: blindly copies a solution from Stack Overflow in 30 minutes.

- Inefficient: copies the same solution but tries to understand it before
committing it, taking 90 minutes.

The inefficient lad is on his way to becoming a senior engineer. :)

~~~
avinium
There's always a trade-off between exploration and exploitation: in any
optimization, we're incapable of enumerating every path to our intended goal
or its associated cost. We need some time to experiment, fail, and learn, and
ultimately discover a better pathway.
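
As a concrete illustration, here is a minimal epsilon-greedy bandit sketch
(the payoffs are hypothetical): spending a fraction of the time
"inefficiently" exploring beats pure exploitation on average.

```python
import random

def epsilon_greedy(true_payoffs, epsilon=0.1, steps=10_000):
    """Spend `epsilon` of the time exploring at random ("inefficiently"),
    the rest exploiting the best arm found so far."""
    n_arms = len(true_payoffs)
    counts = [0] * n_arms
    means = [0.0] * n_arms
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(n_arms)                    # explore
        else:
            arm = max(range(n_arms), key=lambda a: means[a])  # exploit
        reward = random.gauss(true_payoffs[arm], 1.0)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]     # running mean
        total += reward
    return total / steps

# With epsilon=0 the agent can lock onto a mediocre arm forever; a
# little "waste" finds the best arm and wins on average.
print(epsilon_greedy([1.0, 1.5, 2.0], epsilon=0.0))
print(epsilon_greedy([1.0, 1.5, 2.0], epsilon=0.1))
```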

I guess that's the motivation behind Google's 20% time (if that was/is ever a
thing).

I'd also be interested to know how Amazon approaches the problem. From the
outside, they look like a much more top-down organization than Google - keen
to hear if/how that gels with "encouraging experimentation and failure".

------
moh_maya
So back when I was much, much younger and brasher, my email signature was
"Laziness is an Optimization Protocol". I used that email for applying to my
Master's programs, including some of my country's most prestigious (and ergo
competitive) places.

And I think one reason I got into one of them is that my future advisor saw
this, chuckled, but then proceeded to have a conversation with me about the
importance of being "smart" about how we approach questions, and being
efficient about resource use (including time).

I was being cheeky, but he saw that I was kinda/sorta aware of a deeper idea,
and helped me develop and identify it. It has stayed with me since.

------
skybrian
There is a similar argument about how you don't want your systems running at
capacity because there is no headroom for an emergency.

~~~
lazyjones
That's a dogmatic approach. A practical one would be to estimate, from
experience, the frequency and duration of emergencies and the cost of being
out of order, and compare this with the cost of running below capacity all
the time.

~~~
skybrian
Sure, or you could run low-priority tasks that can be dropped when it becomes
necessary. This increases utilization without increasing risk.

An example is using water in "wasteful" ways during wet years in order to make
sure there is something easy to cut back on during a drought year.

Another example is using flood-prone land for recreation rather than building
housing there.
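
A minimal sketch of that first idea, assuming a simple priority queue in
which background work is shed under load (the class name and threshold are
made up):

```python
import heapq

class SheddingQueue:
    """Run background work only while there's headroom; shed
    non-critical items when load crosses a threshold."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.tasks = []  # min-heap keyed on priority (0 = critical)

    def submit(self, priority, name):
        heapq.heappush(self.tasks, (priority, name))

    def run(self, current_load):
        # Under pressure, drop everything that isn't critical.
        if current_load > 0.8 * self.capacity:
            self.tasks = [t for t in self.tasks if t[0] == 0]
            heapq.heapify(self.tasks)
        while self.tasks:
            _, name = heapq.heappop(self.tasks)
            print("running", name)

q = SheddingQueue(capacity=100)
q.submit(0, "serve user request")  # critical
q.submit(5, "rebuild cache")       # droppable filler
q.run(current_load=90)             # only the critical task survives
```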

------
distant_hat
In the machine learning world, this is the equivalent of overfitting to the
training dataset. You can have a model over-optimized for the data used in
training that craters in production, because the production data has drifted
over time or because the training set was not representative, for a range of
reasons. This is why early stopping is often a good idea when training
models, rather than squeezing out the model that is most optimal for the data
being trained on.
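
For illustration, a minimal patience-based early-stopping loop; it is
framework-agnostic, and the toy loss curve below is made up:

```python
def early_stopping_fit(train_one_epoch, val_loss, patience=5, max_epochs=200):
    """Stop when validation loss hasn't improved for `patience` epochs,
    instead of squeezing the training loss to its minimum (overfitting)."""
    best, best_epoch, stale = float("inf"), 0, 0
    for epoch in range(max_epochs):
        train_one_epoch()
        loss = val_loss()
        if loss < best:
            best, best_epoch, stale = loss, epoch, 0
            # (in a real setup you'd checkpoint the model weights here)
        else:
            stale += 1
            if stale >= patience:
                break
    return best_epoch, best

# Toy loss curve: improves, then degrades as the model overfits.
curve = iter([1.0, 0.6, 0.4, 0.35, 0.34, 0.36, 0.39, 0.45, 0.5, 0.6, 0.7])
print(early_stopping_fit(lambda: None, lambda: next(curve), patience=3))
# -> (4, 0.34): training halts well before the in-sample fit bottoms out
```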

------
ChucklesNorris
I believe I've got the gist of it: keep your options open, and don't put all
your eggs in one basket, because s*** happens.

------
andyonthewings
Does it make sense to apply the idea from the article to the native-vs-
webview app competition?

Writing an app natively (C/C++, or Java for Android, Swift for iOS) will give
you the best efficiency, but the app will not adapt to platform changes as
easily as one using a webview.

It is also interesting to think about whether VMs or transpiler techs (e.g.
WebAssembly, Haxe, GraalVM) will give us the best of both worlds.

------
twoquestions
This only works if you're strong enough to be able to endure the increased
costs. If you're in an environment where suboptimal performance means
elimination, deliberate inefficiency like this can break you.

Can't say I know what to do if you're in a situation like that though.

------
hnzix
_“If we all reacted the same way, we'd be predictable, and there's always
more than one way to view a situation... It's simple: overspecialize, and you
breed in weakness. It's slow death.”_

------
PavlikPaja
It's interesting, since one would assume that most people here would be at
least somewhat aware of information theory and the necessity of redundancy.

------
BIDMAL
Sure, guess trying to be the best programmer is bad, coz in case of a nuclear
war I won't be as good at digging as I could... [/s]

------
sidcool
This is a good article in the context of what I have been going through over
the past few months. The lure of perfection is too great.

------
lolcat5e
We could soon see the impact of a lot of efficient JIT supply chains being
disrupted if the UK leaves the EU without a deal.

~~~
YjSe2GMQ
Not sure why you're being downvoted, as this will be a huge, sad show when it
starts. The Economist's article on no-deal disruptions:

[https://www.economist.com/briefing/2018/11/24/what-to-
expect...](https://www.economist.com/briefing/2018/11/24/what-to-expect-from-
a-no-deal-brexit)

They even stockpiled paper themselves to survive the potential disruption and
keep on printing:

> _Disclosure: The Economist is stockpiling around 30 tonnes of the paper on
> which the covers of our British edition are printed, which comes from the
> Netherlands._

~~~
lolcat5e
It's impossible to make a factual statement on Brexit without drawing flak.
(I'm actually in favour of it, for long-term reasons.) But the car industry
and fresh-food supply chains will be affected in the short term; how can they
not be? It'll be a good test of the article's thesis.

------
m3kw9
Efficiency that can adapt to change would be even better

------
amai
Inefficiency saves us from over-optimization, and over-optimization is the
biggest problem of our capitalistic society. A lot of the time people think
only about optimizing and reducing cost, etc., but they fail to recognize
that one can over-optimize things as easily as under-optimize them.

So control your greed: don't over-optimize the money you can make from your
customers. Otherwise your marvelous growth will break down the moment the
climate changes even a little. Think about what happened to
[https://en.wikipedia.org/wiki/Kodak#Shift_to_digital](https://en.wikipedia.org/wiki/Kodak#Shift_to_digital).

------
crdrost
This is a very important article.

It's also absolutely vital to understand that efficiency is _not_ subject to
the greedy algorithm: if you try to make every single part of your
organization efficient, you will kill your organization.

It is easy to understand this if you are a software developer, because you
get to deal with servers. If you run a server at 100% load (for any of the
definitions of load: 100% CPU, 100% memory, 100% bandwidth utilization), what
happens? Very often something will eventually mechanically fail, but what
happens before that? You see a latency spike; 100% load is practically the
definition of how DDoS works. The efficient utilization of the one component
translates to a globally worse outcome, roughly because the component you're
optimizing for is not the "right" component.

Similar things often happen when a company decides to fire a worker and
rebalance their load across their peers: very often this looks attractive on
paper at first, since you save the cost of a salary, but then it starts to
hit the remaining deadlines pretty hard, a swift kick right in your revenue
stream. Which is not to say "never fire anyone", just that a lot of folks do
not evaluate this sort of effect on their cashflow position before they start
issuing layoffs.

I know of one company (but not directly; I was not a part of this, so take my
words with a grain of salt) which had an office in NYC, got into a bit of a
tight position, was acquired strategically by a company in Chicago, and then
got caught in this death spiral. After a year or two the entire NYC office
was closed, the new Chicago CTO had to drive a U-Haul from NYC to Chicago
with whatever supplies he could salvage, and the whole company was shrunk to
just two or three folks working out of the sister company's office in
Chicago. I don't know if they had to relocate or were new hires, but if they
relocated it was presumably on their own dime. All of this when the situation
seemed quite tractable and solvable before those first layoffs started. There
may have been problems I don't know about (the company might have been in
much worse shape than advertised, so again take my analysis with a grain of
salt), but my understanding is that the latency caused some already
a-little-dissatisfied customers to bail, which caused another round of
layoffs, which caused a bunch more latency, which caused many customers to
get really extremely pissed and bail, which caused the closing of the NYC
branch, so that only the least-dedicated customers, who were the least likely
to notice the changes, were left as a trickle of the former revenue.
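
The latency point has a textbook form: in an M/M/1 queue the mean time in the
system is 1/(mu - lambda), which blows up as utilization approaches 1. A
quick sketch (the service rate is hypothetical):

```python
# Mean time in an M/M/1 queue: W = 1 / (mu - lambda), with
# utilization rho = lambda / mu. Latency explodes near rho = 1.
mu = 100.0  # server handles 100 req/s (hypothetical)
for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
    lam = rho * mu
    w_ms = 1000.0 / (mu - lam)
    print(f"utilization {rho:.0%}: mean latency {w_ms:6.1f} ms")
```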

It works the other way, too. The above is about "inefficiency is excess
capacity is fat," and warns that fat is biologically necessary to cushion you
against biological variability. But very often you don't just "trim" the
excess capacity; you make more work so that you can utilize a resource to
100% capacity. So instead of just shrinking your server, we're talking about
the equivalent of pre-calculating as many web pages as possible so that you
can serve them from a cache. This can be a great idea, in moderation; it is
an awful idea if you drive the server or cache to 100% load this way, again
because of latency spikes on the inevitable unforeseen loads.

On the human level, this is not a layoff but rather yelling at folks who are
idly talking to each other, "you lazy folks, wasting the company's time!" And
in addition to losing latency, because your one-off requests now have to wait
for a developer to task-switch, you lose oversight of your organization.
Everyone is "busy" with something, but mostly it's something unimportant, so
it is harder to see "here are the places where people are stuck on important
tasks" and intervene. Say your caching server is very well-behaved and yields
to any other requests you have, but it periodically drives the database load
to 100%: now whenever you look at your system as a whole, you are
desensitized to anything else that drives the database load to 100%.
"Probably just the caching server caching that one expensive page" -- but no,
it's not; something is seriously wrong and someone is suffering quietly
because of it.

------
shoo
I think there is generally a trade-off between optimising for efficiency at
the cost of robustness, or optimising for robustness at the cost of
efficiency. E.g. one way to get more robustness in IT systems is to add
redundant infrastructure, and there's even more robustness if the redundancy
is decorrelated: e.g. decorrelated in space (multi-region) or decorrelated in
terms of vulnerability to other risks
([https://rachelbythebay.com/w/2011/10/27/monoculture/](https://rachelbythebay.com/w/2011/10/27/monoculture/)).
It's clearly cheaper in the short run to build this stuff if you don't invest
in any diversity: have one of a thing, or a monoculture of 12 things.
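
A back-of-envelope sketch of why decorrelation matters (the failure
probabilities are hypothetical): a shared-cause failure mode dominates no
matter how many replicas you add.

```python
# Three replicas, each failing 1% of the time. If failures are
# independent, all three almost never fail together; if they share a
# common cause (same region, same monoculture bug), redundancy buys
# far less.
p = 0.01          # per-replica failure probability (hypothetical)
p_common = 0.005  # chance of a shared-cause event killing all three

independent = p ** 3
correlated = p_common + (1 - p_common) * p ** 3

print(f"independent replicas: P(total outage) = {independent:.2e}")  # 1.00e-06
print(f"with common cause:    P(total outage) = {correlated:.2e}")   # ~5.00e-03
```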

There's also another aspect: often becoming more efficient at something comes
at the cost of invested capital and increased ongoing maintenance for the new
systems the efficiency requires. Maybe achieving the first efficiency takes
low investment for a relatively large reward (a "low-hanging fruit"), but the
second or third gives diminishing returns, i.e. more cost of capital or
ongoing maintenance drain versus the reward, though still enough reward to be
worth doing. Then, if the context changes so that the specialised task is no
longer worth doing, you're left with the upkeep and opportunity cost of all
this now-pointless specialised infrastructure.

Joseph Tainter argues something vaguely along these lines for civilisation
collapse, with the diminishing returns of increasing efficiency from
increasing social complexity:

> For example, as Roman agricultural output slowly declined and population
> increased, per-capita energy availability dropped. The Romans "solved" this
> problem by conquering their neighbours to appropriate their energy surpluses
> (in concrete forms, as metals, grain, slaves, etc.). However, as the Empire
> grew, the cost of maintaining communications, garrisons, civil government,
> etc. grew with it. Eventually, this cost grew so great that any new
> challenges such as invasions and crop failures could not be solved by the
> acquisition of more territory.

> In Tainter's view, while invasions, crop failures, disease or environmental
> degradation may be the apparent causes of societal collapse, the ultimate
> cause is an economic one, inherent in the structure of society rather than
> in external shocks which may batter them: diminishing returns on investments
> in social complexity.

[https://en.wikipedia.org/wiki/Joseph_Tainter#Diminishing_ret...](https://en.wikipedia.org/wiki/Joseph_Tainter#Diminishing_returns)

Nassim Taleb has written a bit about the robustness, fragility, and
"antifragility" of investments and occupations; if you can deal with the
writing style, it's worth reading a book or two. E.g. the concept of a
"barbell strategy" to diversify:
[https://www.nuggetsofthought.com/2018/04/02/nassim-taleb-
sen...](https://www.nuggetsofthought.com/2018/04/02/nassim-taleb-senecas-
barbell)

Taleb also spends some time writing about the difference between the realised
outcome and the distribution of possible outcomes: the focus should be on the
process (was the decision-making, and the understanding of the probabilities
involved, sound?), not on the outcome. Applying this perspective to the bird
and the mammal in the Farnam Street post: before the change in environment,
if we focus on realised outcomes, we might rank the bird population as more
successful than the mammal population (perhaps there is a larger population
of birds, or they get more leisure time, or whatever). But when we try to
assess the unrealised outcomes, we might conclude that the bird population is
in a far less robust position: its outcome under changes in environment or
context is much worse than the corresponding outcomes for the mammals. So in
some sense, not focusing on the current realised outcome, the mammals are
"doing better" even before the turquoise-berry-bush catastrophe.

~~~
orzig
The point about how quickly investments can turn into liabilities is really
insightful - thank you!

There's a social aspect to it as well. Within one person's head, it's the
sunk-cost fallacy. I'm not sure what the term is when the fear of change is
spread across multiple groups (coordination problems? I'm open to ideas). But
it's clearly a huge problem for complex systems that must manage change.

------
merlincorey
I haven't yet read the article, but the title immediately made me think of
Paul Graham's Doing Things that Don't Scale[0].

[0] [http://paulgraham.com/ds.html](http://paulgraham.com/ds.html)

~~~
merlincorey
After reading it: the article also has some "perfect is the enemy of good"
messaging.

It also argues that being too efficient (and therefore specialized) reduces
agility and can be dangerous by making it hard to pivot.

~~~
sitkack
Overoptimized systems often lose resilience and become fragile. Slop provides
a buffer for give and take. This is analogous to materials that have a wide
plastic region vs materials that have very high rigidity but a small plastic
region.

