This is a very important article.

It's also absolutely vital to understand that efficiency is not subject to the greedy algorithm: locally efficient parts do not add up to an efficient whole. If you try to make every single part of your organization efficient, you will kill your organization.

It is easy to understand this if you are a software developer, because you get to deal with servers. If you run a server at 100% load (for any definition of load: 100% CPU, 100% memory, 100% bandwidth) what happens? Very often something will eventually mechanically fail, but what happens before that? You see a latency spike -- 100% load is practically the definition of how a DDoS works. Efficient utilization of the one component translates to a globally worse outcome, roughly because the component you're optimizing for is not the "right" component. (A toy queue model below puts numbers on this.)

Similar things often happen when a company decides to fire a worker and rebalance their load across their peers: this looks attractive on paper at first, since you save the cost of a salary, but then it starts to hit the remaining deadlines pretty hard, a swift kick right in your revenue stream. Which is not to say "never fire anyone", just that a lot of folks do not evaluate this effect on their cashflow position before they start issuing layoffs.

I know of one company (not directly, I was not a part of this, so take my words with a grain of salt) which had an office in NYC, got into a bit of a tight position, was acquired strategically by a company in Chicago, and got caught in exactly this death spiral. After a year or two the entire NYC office was closed, the new Chicago CTO had to drive a U-Haul from NYC to Chicago with whatever supplies he could salvage, and the whole company shrank to two or three folks working out of the sister company's office in Chicago -- I don't know whether they relocated or were new hires, but if they relocated then it was presumably on their own dime. All of this when the situation had seemed quite tractable and solvable before those first layoffs started.

There may have been problems I don't know about -- the company might have been in much worse shape than advertised -- so again, please take my analysis with a grain of salt. But my understanding is that the latency caused some already-slightly-dissatisfied customers to bail, which caused another round of layoffs, which caused a bunch more latency, which caused many more customers to get genuinely angry and bail, which caused the closing of the NYC branch, until only the least-engaged customers, the ones least likely to notice the decline, were left as a trickle of the former revenue.
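To put rough numbers on the latency spike: in the textbook M/M/1 queue model (Poisson arrivals, exponential service times, one server -- real servers only approximate this, so treat it as a sketch), mean time in the system is 1/(mu - lambda), which diverges as utilization approaches 100%:

    # M/M/1 queue: mean time in system W = 1 / (mu - lambda),
    # where mu is the service rate and lambda the arrival rate.
    def mean_latency(service_rate, utilization):
        arrival_rate = utilization * service_rate
        return 1.0 / (service_rate - arrival_rate)

    # service_rate = 100 req/s, so the no-queue latency would be 10 ms
    for rho in (0.50, 0.80, 0.90, 0.95, 0.99):
        print(f"utilization {rho:.0%}: {mean_latency(100, rho) * 1000:6.1f} ms")

This prints 20 ms at 50% utilization and 1000 ms at 99%: running the box "inefficiently" at half load is a 50x latency improvement, and that spare capacity is exactly the fat the article is defending.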

It works the other way, too. The above is about "inefficiency is excess capacity is fat," and warns that fat is biologically necessary to cushion you against variability. But very often you don't just trim the excess capacity; you manufacture more work so that you can utilize a resource to 100% capacity. So instead of shrinking your server, we're talking about the equivalent of pre-calculating as many web pages as possible so that you can serve them from a cache. This can be a great idea, in moderation -- it is an awful idea if you drive the server or cache to 100% load this way, again because of latency spikes on the inevitable unforeseen loads.

On the human level, this is not a layoff but rather yelling at folks who are idly talking to each other, "you lazy folks, wasting the company's time!" And in addition to the added latency -- your one-off requests now have to wait for a developer to task-switch -- you lose oversight of your organization. Everyone is "busy" with something, but mostly with something unimportant, so it is harder to see "here are the places where people are stuck on important tasks" and intervene.

Say your caching server is very well-behaved and yields to any other requests you have, but it periodically drives the database load to 100%: now whenever you look at your system as a whole, you are desensitized to anything else that drives the database load to 100%. "Probably just the caching server caching that one expensive page" -- but no, it's not, something is seriously wrong and someone is suffering quietly because of it.
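A minimal sketch of the "in moderation" version of that pre-warmer (the 70% threshold and the metric/render stubs are illustrative, not from any particular system): it only spends capacity the real traffic isn't using.

    import random
    import time

    HEADROOM = 0.70  # illustrative: never push the shared database past 70%

    def db_load():
        # stand-in for reading a real metric (CPU, active connections, ...)
        return random.random()

    def render_and_cache(page):
        # stand-in for actually rendering the page and storing it
        print(f"cached {page}")

    def prewarm(pages):
        for page in pages:
            # yield to real traffic: only work while headroom exists
            while db_load() > HEADROOM:
                time.sleep(0.01)
            render_and_cache(page)

    prewarm(["/home", "/pricing", "/about"])

The side benefit is the oversight point above, inverted: if the pre-warmer can never be the thing driving the database to 100%, then a database-at-100% alert always means something is actually wrong.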


