Metrics-driven organization (KPIs, or what the author calls the Spreadsheet Mentality) is what I do for a living: I've seen companies die by it, and companies succeed thanks to it. It's meant to do one thing: clarify everyone's role in a large, confusing, often political organization where success wouldn't otherwise be clear. If you face that kind of large, systemic issue, one that even the best would-be cult leader among your CxOs can't address alone, you need KPIs for things to be clear (on top of that would-be cult leader to sell the idea, the hard choices involved, and the vision underpinning it).
Imagine you work on the customer support team: you unblock the client, the client is happy, and you also fix the public doc to unblock anyone after you… That's all great. But at large organizations (namely: Google, Facebook, and LinkedIn) people ask: should we have a CS team at all? Isn't that objective unscalable? Isn't trying to satisfy people a feature we shouldn't have, because every solution should be a product, not a process?
Unless you are working at companies where that question is legitimate, you don't need KPIs formalized into a tree structure: you need objectives, clear goals, praise for unexpected great work, and support to make sure that people who are asked to do things actually can. The MECE tree is useful if you don't want to miss anything. For smaller structures, the gaps are a feature: you can't do it all.
If you are in operations, regardless of what kind exactly, a proper set of metrics is a basic necessity to succeed. Without data, and proper metrics built around it, you are running blind, and every single decision, no matter how well informed you think it is, is just based on gut feeling. Granted, gut feeling can be good (I dare to think mine isn't too bad), but there is no way to tell whether your gut is right without metrics and data to back it up.
Pretending otherwise is just lying to yourself. It is very easy, though, to convince yourself that you actually have good metrics and data. That could be called "Spreadsheet Mentality", even if I think actual spreadsheets are way better than highly sophisticated data analysis tools.
In addition to this, metrics do not need targets. Repeating for emphasis: metrics do not need targets. You absolutely need measurements and data, and to highlight trends… but target metrics are almost always bullshit.
It reminds me of a psychology study done on kids that I wish I had the source for. IIRC, when asked to perform a task for no reward (like: how many simple math problems can you solve in a minute?), many performed it well. Then they gave the good performers gold stars. Immediately, the kids not getting stars started performing worse than they had originally. Then they took away the gold stars. Now the top performers started doing worse as well.
People just want to be seen and heard. Measure the things that are important WITHOUT deciding in advance what counts as a "good job". This way you can do your 1:1s on the whole picture: "you're slow, but your quality is exceptional".
The only exception to this is if the work is simple enough that you can measure it directly against your $$$, and you have a target that ensures you stay in the green. Like maybe a strictly data-entry position with a very consistent source of data. In that case, there aren't any 1:1s to be expected with your employees: they either hit the target or get fired.
I hypothesize, without much direct experience, that the executives of an organization should come to see excessively optimized metrics as a red flag. If what you are measuring has been optimized to the nth degree, that is sufficient proof that something you are not measuring is suffering. There is always something important you are not measuring. It may be something measurable you aren't measuring today, or it may be something unmeasurable, but there's something.
Creating a new metric and seeing it go up for a year or two is good. Creating a new metric and seeing it go up and up and up past that ought to be considered suspicious. Not automatically "wrong", but something to dig into, rather than celebrate on cognitive autopilot.
There are also diminishing marginal returns to optimizing for a certain metric. The first big steps are relatively easy and produce a good return on the investment, but as you get deeper into the optimization, the cost of getting that next little bit goes up. And people are tempted to cheat and take shortcuts to meet the goal.
For example, getting from one nine of uptime to two nines isn't all that hard or expensive (a month of downtime per year to half a week). Getting from three nines to four nines is a lot harder, and a lot more expensive, and if you tie success at the organization to that improvement, people will fudge to reach it.
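To make the nines arithmetic concrete, here's a quick back-of-the-envelope sketch (plain Python, my own numbers, nothing from TFA):

```python
# Allowed downtime per year for N nines of availability.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(nines: int) -> float:
    """Downtime budget at an availability of 1 - 10**-nines."""
    return MINUTES_PER_YEAR * 10 ** -nines

for n in range(1, 5):
    days = downtime_minutes_per_year(n) / (24 * 60)
    print(f"{n} nine(s): {days:7.2f} days of downtime per year")
```

Each extra nine cuts the downtime budget tenfold (one nine is ~36.5 days, two nines ~3.65 days, i.e. the month vs. half-a-week above), while the engineering cost of earning it keeps climbing: exactly the diminishing-returns curve being described.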
Nearly every metric is like that: sales volume, tail latency, recruiting efficacy.
True, if a metric has a target the metric is the target. If there ever was an incentive to game a KPI, that is it.
SPC (statistical process control) is a tricky case in that regard, because it kind of makes the trend and long-term development of a metric the target.
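For anyone who hasn't met SPC: a classic Shewhart chart only flags a metric when a point leaves the mean ± 3σ control band, so what gets "targeted" is the stability of the process, not a number. A toy sketch with made-up deviation data:

```python
import statistics

def control_limits(samples):
    """Shewhart-style control band: mean +/- 3 standard deviations."""
    mean = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return mean - 3 * sigma, mean + 3 * sigma

# Toy metric history: daily forecast deviation, in percent (invented data).
history = [12, 9, 14, 11, 10, 13, 12, 8, 11, 10]
lo, hi = control_limits(history)

def out_of_control(value, lo=lo, hi=hi):
    return not (lo <= value <= hi)

print(f"control band: [{lo:.1f}, {hi:.1f}]")
print(out_of_control(12))   # inside the band: process is stable
print(out_of_control(25))   # outside the band: investigate, don't punish
```

The point is that a value inside the band is "normal variation" and left alone; only excursions trigger a look, which is quite different from handing someone a number to hit.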
Edit: You had me thinking. Setting a target, or rather target tolerances, for a metric can be necessary if that metric directly measures the output of one process that is used by another process. E.g. forecast accuracy: if downstream ops are expected to cope with, say, a deviation of 20 %, that tolerance kind of becomes the goal for the forecast accuracy. This is not some arbitrary goal, though.
For me, setting limits on certain metrics is something different from a target. It is a thin line between those.
> if downstream ops are expected to cope with, say, a deviation of 20 %, that tolerance kind of becomes the goal for the forecast accuracy.
I think a forecasting target of 20% shouldn't be necessary unless you want to be around 19% deviation. Which might be the case if trying to reach 0% has diminishing returns and isn't actually cost effective. In other words, targets make sense when you are literally targeting a sweet spot.
If 0% deviation is desired, then the directive is still the same: try to be as accurate as possible. If "as accurate as possible" consistently fails to stay within the 20% tolerance (which is where measuring and data matter), then you need to address that. Maybe forecasting is broken or infeasible.
On ops side, they might not be expected to handle deviations over 20%, but the directive is probably the same either way "do the best you can to manage the deviation".
>For me, setting limits on certain metrics is something different from a target. It is a thin line between those.
I agree with this in general. It can be tricky to communicate "here is a boundary that absolutely cannot be crossed, but keep in mind you should be nowhere near that boundary anyway" without it impacting people's personal optimization strategies. You really have to weigh the consequences of what actually happens when that limit is reached, and whether it's worth a possible depression of the team's performance.
>> I agree with this in general. It can be tricky to communicate "here is a boundary that absolutely cannot be crossed, but keep in mind you should be nowhere near that boundary anyway" without it impacting people's personal optimization strategies. You really have to weigh the consequences of what actually happens when that limit is reached, and whether it's worth a possible depression of the team's performance.
Oh, definitely… but a good leader could still fall into the trap of thinking they can offload some of their good leadership decisions onto targets. And then employees feel judged rather than seen.
> study done on kids that I wish I had the source for.
There are a lot of them with similar results, going back to 1960s.
Kohn's "Punished by Rewards" is a good general start for that stuff. I think there was a generalised study/review published in the late 90s. Probably Deci and Ryan (?).
I think what the author meant by "spreadsheet mentality" is not the belief that metrics may be necessary, but that they are (more or less?) sufficient.
That's the distinction I make between having a complex set of KPIs that covers the whole organization (which tends to crowd out any challenge, which is what the author laments) and simple team objectives (which are only relevant for the team, and intentionally miss a lot of the complexity in the rest of the organization).
If your metrics system fails to connect goals and performance measurement at the team level with the bigger goals, I'd say the metrics system is badly designed and implemented.
Connecting things it's possible to usefully measure with the bigger goals is an insurmountable problem in almost every business, because some things are much easier to measure than others.
For example, it's very easy to measure if the call centre answered a highly efficient N support calls per worker-hour.
And it's very difficult to measure if people who deal with the call centre end up badmouthing your company to their friends, because your useless call centre always just tells them to reboot and never does anything to resolve the underlying issue.
Great example of a bad metric. Even if the dev worked for sales, he would at most drive sales indirectly. And if something can only be influenced indirectly, it is a bad metric use case.
I would argue that software maintenance would also benefit from metrics. Heck, even development would. Unfortunately, maintenance is hardly ever done at all and sensible metrics for development are really hard to come up with.
+1. Metrics for systems have very clear benefits. They let you know what's wrong, and because systems are technical things, they (generally, unless you run up against a workplace-politics boundary) have clear solution paths. People are nothing like technical systems.
> Imagine you work on the customer support team: you unblock the client, the client is happy, and you also fix the public doc to unblock anyone after you… That's all great. But at large organizations (namely: Google, Facebook, and LinkedIn) people ask: should we have a CS team at all? Isn't that objective unscalable? Isn't trying to satisfy people a feature we shouldn't have, because every solution should be a product, not a process?
First, on "scalable": CS is most definitely scalable; the question is how much. Without any tooling whatsoever, I dare say it scales in O(N*(log N)^2). I can understand the will to optimize it, and with good tooling I have no doubt it can become sub-linear, which is more than enough. Anything sub-linear makes money with growth, so there's no need to "make it a product" (which I read as: put an O(1)-sized team in place to replace those people, but maybe I'm misunderstanding).
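To illustrate why sub-linear is "more than enough" (purely illustrative numbers; the sub-linear curve is my assumption, not a real CS cost model):

```python
import math

def cost_no_tooling(n: float) -> float:
    """Support cost scaling as O(N * (log N)^2), per the estimate above."""
    return n * math.log(n) ** 2

def cost_with_tooling(n: float) -> float:
    """A hypothetical sub-linear cost, e.g. O(sqrt(N))."""
    return math.sqrt(n)

# Per-user cost: grows without tooling, shrinks with it.
for n in (10**3, 10**6, 10**9):
    print(f"N={n:>10}: no tooling {cost_no_tooling(n) / n:7.1f}, "
          f"with tooling {cost_with_tooling(n) / n:.6f}")
```

If revenue is roughly linear in N, a sub-linear cost means the margin per customer improves as you grow, whereas the N*(log N)^2 curve means each new customer costs slightly more to support than the last.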
Anyway, on to my point: your Google example proves TFA's point. Google's broken CS is one of the major pain points most people here have with Google. Another Google pain point can probably be explained by misuse of KPIs: Google products that keep getting killed and replaced, again and again.
Even the best people are subject to "the spreadsheet mentality". Even when they are aware of it! My employer's founder bought back the company to stop focusing on the company's share price. Musk would like to take Tesla private. France renationalized EDF, which was already >80% owned by the state. Etc.
I suspect The Spreadsheet Mentality (when it comes to people management) is reinforced by the radical blank-slate/egalitarian civil rights law and enforcement culture relating to employment in the US. There are many legal landmines around Officially Noticing and Considering various differences between individuals, so thinking of people as interchangeable cogs is the most legally defensible approach.
You’re using super obfuscated language so it’s hard to follow what you’re actually saying. It sounds like you’re saying because you can’t evaluate people with lazy mental shortcuts like race or religion, which would be illegal, you have to try and determine their actual merit, which is harder, and that is the source of corporate spreadsheet hell. Consequently, businesses would spend less time in spreadsheet hell if they could just go back to discriminating against classes of people that are now protected by law. Is that really what you mean?
It's possible to read the post you're responding to a lot more charitably than that, and I think doing so would be more in line with what HN aspires to be.
I understood his point to be that due to the heightened scrutiny given to all questions of discrimination nowadays, any sort of "holistic" evaluation was abandoned in favor of supposedly objective "Spreadsheet Mentality" evaluations.
This certainly has advantages, not only against conscious discrimination, but also unconscious biases (the infamous "team fit" ending up in homogeneous team composition). But it has large disadvantages as well, in that all employee contributions which do NOT easily fit into the few "objective" metrics are disregarded, which can lead to employees' value to an organization being substantially distorted. I find the rating scales I've seen a highly imperfect fit to the entire contribution an employee can bring.
And the underlying assumption that such rating scales are objective and not subject to manipulation or biases is highly questionable. Ultimately, it all comes down to a manager's judgement, and a bias does not disappear if you dress it up with a numerical weight.
> I understood his point to be that due to the heightened scrutiny given to all questions of discrimination nowadays, any sort of "holistic" evaluation was abandoned in favor of supposedly objective "Spreadsheet Mentality" evaluations.
This is exactly how I read it. Effectively, if you fire someone because you don't think they're doing a good job, you might be accused of racism/sexism/ageism/etc; even if your analysis is completely correct (and, presumably, there was no -ism involved). If you have a spreadsheet (effectively, a paper trail) of their work accomplishments, you're much safer.
Precisely, which ends up (in a large, legally threatened corporation) causing hiring and HR to become a racist quota system in a spreadsheet mentality, with all kinds of unspoken and unrecorded workarounds in reality.
I understood both readings of my post and left it vague.
For instance: consider the plight of an all-white all-Muslim company who would face a possible lawsuit for not hiring a black guy who says he loves to eat well barbequed Southern-style pork in the office for lunch daily. Is hiring discrimination in this case racial, religious, dietary or “team fit”?
To me, any answer other than “it is none of your damn business why we hire anyone” is against the First Amendment’s assembly clause.
Oh, so you're intentionally mixing up traits that are intrinsic with actions or beliefs that people choose. It looks like you're trying to construct a 'gotcha!' and I dislike it.
I think he's saying the opposite - you have to work backwards to ensure that your results don't end up such that you can be accused of having done so, regardless of how you actually came to the conclusion.
Yes, that is what I mean. I also genuinely believe that if we just go back to the First Amendment default of complete freedom of contract and freedom of association (no legally protected categories) that the market will sort things out just fine. Racist companies will lose to companies that hire competent people of all races.
The remedy of the existing civil rights legal regime was much too strong and too far-reaching for the disease of Southern racism, which has largely been eliminated anyway.
> But at large organizations (namely: Google, Facebook, and LinkedIn) people ask: should we have a CS team at all?
I'd risk stating a thesis that in metrics-driven organisations these kinds of questions are actually not asked.
Such organisations get very busy defining tree-based KPIs (OKRs or whatever you want to call them) and even busier tracking them.
A lot of effort goes into maintaining the KPI driven culture and in the end there's no one left to ask the question like "should we have a CS team at all?".
Bad metrics can wreck corporate cultures. Let’s attack the other straw man: metric-free orgs.
Symptoms:
- Resources (and credit and promotions) are given to the most articulate and outgoing rather than the most effective. (Marketing budget goes to the extrovert rather than the manager producing the best ROI)
- When dates slip, they slip a lot. The night before a release it’s announced that there will be a 3 month delay. (Bad for anyone having to explain “If it’s 3 months, why couldn’t you tell me that sooner?” to a customer)
- Unanticipated earnings and revenue misses. (Same issue as above, but bad to anyone talking to investors)
- Poor prioritization. Effort is spent on squeaky wheels versus high value problems. (Engineers spend time helping “that nice person in accounting” versus solving a customer problem that brings in hard money)
With all the things an exec needs to worry about, they can’t effectively operate without good data to drive decisions. Data needs to have human interpretation as well. Of course bad data can be worse than no data, but that’s another story.
> Resources (and credit and promotions) are given to the most articulate and outgoing rather than the most effective
Somewhat of a random anecdote, but the first time I realized this was when I was in HS. I participated in what you may call a "hackathon". It was really just a competition open to HS kids that involved building an app to a spec.
I was lucky to have a friend with absolutely amazing artistic abilities, who did a lot of work on our UI and logo, and I can confidently say we had the best work out of everyone we competed with, both in terms of appearance and functionality.
But when it came time to present our work, I noticed something: while I never doubted our technical competence, basically everyone before us had public speaking ability on the level of a politician. Nearly everyone else slaughtered us in terms of being able to present their work to the judges.
At the end of the day, our technical and artistic abilities paid off well enough, placing us just barely high enough to move on to the next level. But it was clear that even though we were technically solid, we had approached the competition all wrong. We probably could have placed higher with a worse application but a better presentation.
This is true of most hackathons in my experience, just to a lesser degree, since each team is generally free to make whatever solution they see fit. It's all in the pitch: you can pitch an entirely theoretical app you made a couple of UI mocks in Figma for, or you can develop the app end-to-end. I've seen both methods be successful.
I personally prefer the route of making the app end-to-end: you get a big "wow" factor with judges when you let them drive the laptop and take it for a spin, rather than just walking them through the happy path yourself. But I also make sure to set aside time to practice pitching, since a bad pitch will effectively ruin our chances.
This is a great anecdote. You have got to be able to sell. If you can't (or can't hire someone who can), you're dead in the water. It doesn't really matter how good your product is.
The problem with selling is you have to know who you are selling to.
I once developed a tool that saved literal days of time every month by automating some very rote workflows. It was appreciated by everyone who had to do the work by hand.
When I tried to sell it to my boss's boss, I realized he literally didn't know a lick about our workflows, where the inefficiencies were, or how they could be remedied.
We could've been making toothbrushes for all he cared.
After a while, I pulled out the super fake looking 'bad bar goes down, good bar goes up' slide and his eyes lit up.
>After a while, I pulled out the super fake looking 'bad bar goes down, good bar goes up' slide and his eyes lit up.
This, right here, might be the most important lesson. Figure out the 1 or 2 things your boss's boss cares about. Show a single slide where the bad things go down and the good things go up. Have a couple of technical nerds that are trusted and agree with you. Bam. 10 minute promotion right there.
Oh. I'll add something. I'm the boss right now. I spend a lot of time trying to make it clear to my team what I care about so they can give me that 1 slide presentation.
I also spend plenty of time learning from my team why they care about stuff that I don't because they often care about important stuff (and I should, too)
If you've ever had to sit through a presentation by an Oracle or SAP sales team, you know presentation matters more than quality :)
I actually envy their ability to sell, even when I know their product is crap and I resent being made to use it by executives who bought a slick sales pitch.
I'm not completely convinced. I'm sure it always helps, and I'm sure in many cases it's a necessity in order to sell the product.
I'd like to believe that there are some products that are good enough and solve pressing enough issues so as to sell themselves. It's probably a naive hope, but I like to cling to it.
I wish there were enough honesty in sales as to not leave such a sour taste.
In days of yore, I worked for a company where the product basically did "sell itself".
Over 90% of people who did a demo/trial run bought the product.
Sadly, technology progressed, and the entire market went away. The same "Magic Quadrant"/Market space that Gartner made explicitly because of that product was retired some time back due to not being relevant.
I get the rationale, but it's disproportionately easier to oversell a product than to live up to the sales hype. I'm assuming a scenario where the sales hype is on the verge of being detached from reality. It's an abuse of both customer trust and developer trust.
There's probably some flip side of the coin where sales is being honest about what they were _told_ was delivered, but the product doesn't in actuality live up to what was committed to. In which case the development team should either up their QA standard or adjust what they've committed to developing.
I'm sure there's some possible healthy relationship where there's some sales hype that's used to motivate product improvement, but I haven't personally experienced it.
The Oracle model that everyone copied was “Oversell and eventually you can hire the engineers to make your product actually better than the competition”
It worked so well (where are Sybase and the other RDBMS makers?) that everyone else copied the playbook.
For intangible products with high switching costs it’s just more profitable for vendors to lie.
This low integrity process has taken me a long time to come to grips with, and makes me especially prudent as a buyer.
I certainly understand why you think it's naive -- I even think it's naive, and it's my hope. I don't understand how it's dangerous though, would you mind expanding on that?
> But never ever expect solutions to be self-evident.
I certainly don't _expect_ anything. I just hope that there are products out there that don't need sleazy sales tactics, and can let the product show its worth.
Hope != expectation. I expect to live a life with no shortage of disappointment ;)
Dangerous may be an overstatement, but maybe not: I think what I was really trying to say was something along the lines of:
In my opinion, right now in IT, "laypeople" don't have a good understanding of how bad things are in many respects, and how much better they could be in others. Moreover, there are a lot of effective "salespeople" in this space, and all this adds up to what I believe is a huge gap between perception and reality here. So the last thing I want is more belief that great IT just comes along and saves us.
Eh, I think this can be true, but there are also projects that just build great open-source tools with minimal selling, slowly win over engineers, and then maybe upsell them a bit later.
These kinds of projects usually withstand the test of time where flashier projects often burn out because they aren’t good in practice.
Good engineers are always wary of flashy things and keep an eye on substance
Not really though. It's way more important to know what skills you lack. If you can't sell, make sure someone is with you for the ride who can. Sometimes these problems can easily be solved.
The famous "reality distortion field"! Jobs deployed his charisma so effectively that the Cupertino folks should count themselves lucky he didn't recruit for a semi-religious sect instead.
Bill Clinton was one of the most charismatic people ever in US politics. I am happy he never turned to the dark side - he could have caused an awful lot of damage. I was never afraid of what he'd do while he was in office, though I often disagreed with him. He never seemed to want power for power's sake, unlike too many other politicians. As far as I can tell, he just wanted people to like him.
If you've ever merely been in a room with him, regardless of the size of the crowd, this is hard to overstate. I like to think of myself as a rational, scientific kind of person, but I swear, he does something to huge rooms of people that feels like magic. I 100% felt the entire room get happier before I even knew that the reason was Bill making his way through.
Yup, that's what I've heard about him. It's even more apparent when he's contrasted with his wife. He's tried to get her to take likability training, and I know she's taken body language training (you can see it by comparing video of her over the years), but it hasn't really gelled for her.
Bill would never, ever have made the "deplorables" gaffe.
Good analogy and there are a lot of great points mentioned in sibling threads. There are so (so) many "quality" software projects out there which aren't tirelessly promoted, though (e.g. npm dependencies). There is potentially a (strong) bias towards "extroversion" in the "Western world|US" but I somewhat think it's just "our perception" that this is optimized for in organizations/individuals. This is fuzzy and there are a lot of counter arguments. When it comes to "marketing" and "building a product" it seems, ideally, they pair with synergy.
I agree with you completely except on one part. Metrics are not the only solution to this. The other option, more difficult but more effective if you can pull it off, is for individual contributors to not make these mistakes in the first place.
Having executives fix the bad decisions by individual contributors is sort of treating the symptoms rather than the underlying cause. It adds extra work because now everyone has to produce numbers about their work in addition to work -- and some of the numbers might not be better than tea leaves, statistically speaking.
So how do you get ICs to make the correct decision? Training, ownership, free flow of information about strategy and market, involving everyone in setting the direction, etc. The opposite of the instinct of an executive going by the numbers.
----
To stave off an eventual misunderstanding: I'm not saying you shouldn't measure what you do, operationalise your definitions, define in advance when you reject your hypotheses etc. I'm just saying that producing these numbers for yourself and your peers is far more effective than doing it for someone removed from the day-to-day business because they don't have the same context and nuance.
> is for individual contributors to not make these mistakes in the first place.
Exactly. Is the individual contributor working on something different than what the metrics would describe because they are dumb and irrational? Or because their experience, domain knowledge, and good judgement give them an insight that the KPI wouldn't? People certainly can be dumb and irrational, but we should at least be open to the possibility that people who are up to their eyeballs in a problem domain know something about what "better" and "worse" look like that a number-cruncher doesn't.
I agree with everything you say. And bad numbers are awful. At some point of organizational size, executives can’t stay on top of everything and make good decisions without quantitative metrics.
That doesn’t mean making everyone full time metric gatherers. It doesn’t mean looking at the data uncritically. It doesn’t mean ignoring what you can’t measure. It does mean smartly measuring what you can.
The alternative solution is that executives don't make decisions, only coordinate between the people on the ground who have the necessary context to make decisions.
That works for 80% of the problems, and something that should be done. Some problems require optimizing for the whole rather than individual teams or departments. How do you allocate capital between business units if you aren’t measuring ROI? How do you decide what practices work best if you’re not measuring how people work and what they’re doing?
Not all functions are best evaluated through ROI, but if you're an executive whose hammer is ROI you'll start to look at everything as a liability. Free markets are generally better at allocating capital than central control. It's also more flexible in adapting to different measures of success.
When standardising work, it is especially important that it's done by the participants in said work, and not dictated from above. That can be done democratically.
Agree. Very few execs are good enough to base capital allocation decisions on gut feel alone though. “It feels like we are short on internal audit” versus “a firm our size should be doing these 10 audits per year. An audit takes $y and if we don’t do it, the consequences are measurable with Y….”
You keep saying you agree with me that executives should not make this type of decision and then you somehow circle back to how we can enable executives to make that type of decision. That's the opposite of what I'm saying! I'm not sure I understand what's going on there.
> The Spreadsheet Mentality comes from the age-old executive wisdom of “Only what’s measured is what matters.” As a result of that wisdom, the underlying assumption becomes: “If it can be tracked, it’s important. If it can’t be tracked, it’s less important.”
A better formulation, that more accurately describes what happens, is: if you can't track what's important, you think that what you can track is important. This leads to total catastrophes (such as, for instance, the beginning of the Vietnam war where the number of enemy deaths became the measure of success).
"What can be tracked" doesn't matter; what matters is what matters. If that can't be tracked, well then we need to find a way to track it. It can be hard, but it's almost always possible.
I agree with you. Don't confuse the map with the territory.
There needs to be someone who can combine and explain the reasoning behind all the numbers. After all, they should serve a strategy.
For example, you could judge an iPhone by its specs and/or by using it. Spec wars led Intel to boast GHz numbers, while Apple focused on customer experience, where a fast CPU is only part of the equation.
There were certain areas that seemed unquantifiable to me before reading this, and it has some good suggestions for how to tackle things like measuring risk and other intangibles.
I think it's very important to remember that McNamara was one of the most brilliant minds of his generation. The "smartest guy in the room". In any room.
But the mind is just a tool. If you don't know where you're going, all it's going to do is get you lost faster.
I am a very organized individual – as a freelancer I have to be. But I have yet to meet any organization where I don't feel they are an unorganized mess. Naturally communications get more complicated when there are more people, sure. But I also worked on film sets where teams of people who got to know each other a week ago work with military discipline towards a common goal, with a clear team structure and efficient communications. So: individuals can be well organized. Small teams can be well organized. But what about bigger structures?
I think in bigger organizations, organization increasingly becomes an issue of aligning the incentives of all the people involved. If you are the spreadsheet person in such a corporation and your higher-ups have incentivized you to cover your ass and follow a fixed, unchanging bureaucratic process, you will not want to adjust to changing requirements. If the makeup artist on a film set treated what should be a short brush-up between two takes the same way, the whole production would grind to a halt, because everybody waits for the makeup artist. The issue with the spreadsheet person is that their higher-ups typically also care more about the numbers than about the actual thing/project being done.
If getting the numbers, reports, sheets, whatever becomes more important than doing the actual thing well, your organization is in trouble. That does not mean those numbers, reports, and sheets are not important. But they serve a function for the project; they are not the goal of the project. If they are the goal of the project, you are doing it wrong.
>But I also worked on film sets where teams of people who got to know each other a week ago work with military discipline towards a common goal, with a clear team structure and efficient communications.
Of course, there's no shortage of horror stories about film shoots.
But, in general, you've got well defined responsibilities. There's been a huge amount of planning before people get together to shoot. It's very time bounded. It is very military in a lot of ways and quite different from how a company operates in general.
This sounds very similar to the thing where people start caring more about the organisation than the goals of the organisation, and inevitably end up in power without caring one whit about those goals.
The clue is right there - they don't care about the org, they care about their own personal power and status within the org. Not least because that often translates directly to earnings.
But also not exclusively. Because for some people the point of work isn't to do a good job, it's to be superior to others.
I'm increasingly convinced bureaucracy is a power flex. Making it difficult/impossible for some people to do their jobs is a way of keeping them in their place. They're not a threat while they're tied up in frustration.
But it's also a way of confirming the status of those who removed their personal agency.
It's similar to cost structures. It would be insane to consider the engine room of a military vessel as a cost centre, because it literally makes it possible for the vessel to do its job.
But ops are considered a cost centre because "ops doesn't generate a profit."
Of course it generates profit. In fact it generates most of the profit. Remove ops and most of the profit disappears.
But it also usually has low political status. Getting labelled a "cost centre" is a way to force the rest of the power structure to think of it like that.
> I'm increasingly convinced bureaucracy is a power flex. Making it difficult/impossible for some people to do their jobs is a way of keeping them in their place. They're not a threat while they're tied up in frustration.
There's a few sociopaths who create some fraction of bureaucracy with that purpose, but it doesn't sufficiently explain the sheer volumes of it.
The more plausible explanation, to me, is that people create bureaucracy for two reasons:
1. To protect the firm from bad outcomes. Someone, at some point, screwed up, and burnt the firm, or allowed an external third party to burn the firm, and bureaucracy developed as institutional scar tissue to protect against this happening again.
2. To protect their jobs. If they aren't involved in something important, there's no reason for the firm to employ them. If they can wedge themselves into a business-critical workflow, they can increase their importance to the firm, without actually having to do anything useful.
Does that thing have a name? I am speaking to a group of executives at a local institution about this specifically at their organization next week, and can't come up with a snappy name for that.
>> You can probably figure this out on your own, but I’ll walk you through it a little bit. In simplest terms, The Spreadsheet Mentality comes from the age-old executive wisdom of “Only what’s measured is what matters.” As a result of that wisdom, the underlying assumption becomes: “If it can be tracked, it’s important. If it can’t be tracked, it’s less important.”
That part of the article is just wrong. It is not wisdom that is described here, but a mentality. And yes, that mentality does exist, and that is the "Spreadsheet Mentality". There is a German saying, "wer viel mißt, mißt Mist" (rough translation: he who measures a lot measures crap). Same mentality, but it only means that:
- you don't know how data can be used
- you don't know how metrics work
- you don't know what to measure
I believe this is why the author referred to it as "executive wisdom". As in, executives often believe, or were taught, this mentality as wisdom. The way it's used, it reads as a bit of a euphemism.
"Only what’s measured is what matters" is much more sweeping than what I've usually heard, which is, "If you can't measure it, you can't manage it."
I understand that managers may conflate the two.
What really sticks in my craw about this mindset is that it's lowest-common denominator management. The spoken implication of metrics-driven management is to keep the organization focused on KPIs, but the unspoken implication is that employees can't be trusted. That is probably true for many employees, but not all. This lack of trust between management and employees is a big red flag for me, bigger than the hassle of micromanagement.
> the unspoken implication is that employees can't be trusted. That is probably true for many employees, but not all
There's an interesting reality here that managers hire and keep people in a state where they can't trust their direct reports. The manager is just as untrustworthy.
Peter Drucker was originally attributed with "what gets measured, gets managed". It does NOT imply that everything that can be measured matters, or that if you cannot measure it you cannot manage it. Really important distinctions.
"If you can't measure it, you can't manage it" is a quote by W. E. Deming. But Deming was fully aware that there are things you can't measure and must manage anyway. He also emphasized that data can be used to mislead and that you must... well, use your brain. Deming thought of data as merely something to be collected in order to decide on an action, but before you can decide, you predict. I think this "predictive" potential is a good mental model: while predicting the future is generally foolish, you can still reliably predict that the Earth will go around the Sun tomorrow.
Deming was (among many things) a professor of statistics, and knew that statistics can lie. Much of his work emphasizes the need for analytical skills and critical thinking. An example is PDSA (not PDCA), which requires study and evaluation of an action and theory in order to learn from it. https://deming.org/explore/pdsa/
The author seems to have come across the paraphrased version of the quote and I'm explaining what it's missing. I don't care if Deming or someone else said it first or if Deming was referencing it.
Came here to reference Deming myself. One thing I've noticed in Deming's work (especially 14 points, sicknesses) is that by contextualizing and properly understanding the use of statistics he humanizes people in an organization. We must look at metrics/statistics correctly, in a way that humanizes and enriches people, not in a way that turns them into numbers in a spreadsheet.
Numbers are comforting because they are "objective".
Take the law; if you drive on a nice day on an empty new highway doing 5-10 mph or Km/h over the limit and there's a speed trap, you get a pic of your car with a nice timestamp and speed and you get a ticket. You drive on a rainy day with traffic and make a couple tight passes, it would be up to a cop's judgement to ticket you for reckless driving, and it's subject to interpretation, even if the first case is "safe" and the second way less. Lots of examples with other figures that are encoded in law.
For people working in teams (thinking software teams for ex), I've been asked in interviews several times "how do you measure the team's success" or variants ("productivity" etc) and my canonical answer now is: a) metrics and numbers are OK to look at but can be deceiving, can be gamed and still need interpretation. Ultimately I look at b) are the objectives being fulfilled? (are we doing the work?) – which it's also kind of subjective because we can establish ambitious goals or sandbag – and c) are people happy?, basically asking them.
> Numbers are comforting because they are "objective".
Yes, your commentary basically nails it IMHO.
There's a lot of worship for "results driven" people and approaches, and in some places and times that's needed. The problems come, like you said, when the metrics get punked by folks with selfish motivations.
There's another facet to the problem and that's when there's a lack of information and an environment of chaos and change. What one thinks are "the facts" might not be the full story and acting blindly on that basis without knowledge of other unknown but more relevant facts will be MORE harmful than operating on instinct alone.
This burns operations folks all the time. They often straddle the line between "results-driven" or "process-driven" approaches. But there's another way that can be more helpful when things are more dynamic and complicated: what I call a "purpose-driven" approach.
All the meetings I have been in for the last 15-20 years start with "This is a high-level view", which means "I am too lazy to analyze details". Spreadsheets and presentations feed into this.
When I start showing details, people "turn off"; no wonder many projects fail. When I started out in IT, almost all meetings were for examining details, and people would work to tweak the solution to make it better.
Spreadsheets like Lotus 1-2-3 started becoming mainstream in the late 80s, and then the slow slide to "high level" started. The trend greatly accelerated when presentation software became mainstream. Before that, it was easier to talk details than to create a slide deck for an overhead projector.
> Spreadsheets like Lotus 1-2-3 started becoming mainstream in the late 80s, and then the slow slide to "high level" started. The trend greatly accelerated when presentation software became mainstream. Before that, it was easier to talk details than to create a slide deck for an overhead projector.
I think that in some cases the absence of limits or costs can make things worse. Suddenly it gets way cheaper to collect a bunch of numbers and analyze them? Then we're no longer careful and thoughtful about what we subject to that kind of analysis, because it's so cheap.
My go-to example of this is kanban boards and similar tools. They clearly lose something if you translate them from a physical board with physical limitations to a computerized "board" with practically no limitations. The computerized version is better from some points of view (who likes limitations?) but also so different that it's arguably worse in some ways, depending on what you want the board to accomplish.
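The physical board's constraint can be re-added to a digital one on purpose. A minimal sketch, assuming a hypothetical column class (not any real tool's API), of a kanban column that refuses new cards past its WIP limit:

```python
# A column that enforces a work-in-progress (WIP) limit, the way a
# physical board does by simply running out of space. Hypothetical
# example class, not any real kanban tool's API.
class KanbanColumn:
    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.cards = []

    def add(self, card):
        # Digitally, the limit has to be enforced deliberately.
        if len(self.cards) >= self.wip_limit:
            raise ValueError(f"WIP limit ({self.wip_limit}) reached in {self.name!r}")
        self.cards.append(card)

doing = KanbanColumn("Doing", wip_limit=3)
for task in ["A", "B", "C"]:
    doing.add(task)
# doing.add("D") would now raise: the column is full, so something must
# be finished before more work starts -- which is the point of the limit.
```

The design choice is exactly the trade-off the comment describes: the computerized board is "better" because nothing stops you, and worse for the same reason.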
Or more charitably: the artificial and unnecessary deadline that was placed on me by the same people who obsessively track efficiency and numbers didn't allow for time to analyze details.
> HR cannot be the only shepherds of “the people track,” or else it will just get ignored.
Herein lies the fundamental crux. The CISO cannot be the only shepherd of security, or else security will just get ignored. QA, Product, UX, Finance, Legal, cannot be the only shepherds of their domains, or else they will just get ignored.
The fact is that you will not find perfect candidates, where "perfect" means "well-versed in every corporate sub-field." That's not reasonable. What you can do is hire people with the humility to a) understand what they are not good at, b) actively invite the participation of those who are good at it, c) value, utilize, and celebrate their contributions. That's not an HR skill, that's a soft skill.
I think there is a distinction between "what I can't measure" and "what I am too lazy to measure." If something has a meaningful effect, that effect, and thus the underlying factor, must be measurable. Yeah, there isn't some simple equation to turn "Bob tells lots of funny jokes" into a "team morale boost quotient", but it is still observable that on days when Bob's in the office everyone is more productive and there are fewer issues associated with low morale. But rather than starting with the things that are important - sales, productivity, complaints, etc. - and determining what actually affects them, bad managers instead start with the measurements, see what effects they have on things that are important, and assume that's all there is.
I feel like this article, and the discussion here, is conflating two different ideas.
- If I measure what I do, that helps me know I'm actually accomplishing what I hope to accomplish.
- If someone higher up in the organisation reads the measurements I have produced for myself, they get confused at best. In practice, the effect is often that they are actively misled by the numbers.
The difference is local context. It's easy to improve work by measurement if you're close to the work and understand all the nuances of it.
Constructing measurements that can convey all the necessary context easily takes more time than just doing the job, sometimes multiples of it. (As can be seen by the proportionally growing body of administrators in any reasonably large corporation.)
This goes back to Deming, of course. The more fundamental difference is the presence of theory. Theory is what you get when you combine measurements with an appropriate mental model of the situation -- something front-line workers can do, but executives do not have the necessary experience for.
Numbers in the absence of theory (what executives have) are worse than useless: they're like driving on the highway by looking at the centre line in the rear view mirror. Numbers in the presence of theory (what front-line workers have) is the backbone of any good operation.
Something not directly mentioned in the article is how every layoff will create a huge social network of people who can refer you to their new employer.
Just about every person who got laid off will find another job of some kind, and if they liked working with you, they'll help get you in and give you a leg up compared to other candidates.
Perhaps the criticism I'd give to the article is that many of the things that "can't be put on a spreadsheet" actually can be put on a spreadsheet. A lot of employee feelings can be gathered from surveys and put into spreadsheets or metrics.
I don't really think that this "spreadsheet mentality" is necessarily doing what the author is accusing it of doing.
For a lot of businesses, the reality may be that having the most innovative, most happy workforce doesn't do enough to affect the one spreadsheet that matters, which is the balance sheet.
Let's say I laid off 100 people out of my 1000 person workforce. The median salary is $100,000. So, I just saved $10 million in payroll. Maybe those 100 people attract another 100 people to defect to other companies, which might be a good thing, since those are going to be my most disgruntled 100 employees no longer poisoning the well.
So now I've got to go out and hire some new people. These new people won't have any negative history or built-up animosity with my company.
This process might have some turmoil, but I've still saved $10 million in payroll, so my costs I've incurred to do some re-hiring and to account for my potentially slowed down pace of innovation have to exceed $10 million for this to be a bad deal.
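The break-even reasoning above can be sketched in a few lines. All figures are the made-up numbers from this comment, not real data:

```python
# Toy break-even sketch for the hypothetical layoff scenario above.
def layoff_break_even(headcount_cut, median_salary, extra_costs):
    """Net savings: payroll saved minus everything the cut triggered
    (rehiring, lost revenue, slowed innovation, lost customers, ...)."""
    payroll_saved = headcount_cut * median_salary
    return payroll_saved - extra_costs

# 100 people at a $100k median salary -> $10M in payroll saved.
savings = layoff_break_even(100, 100_000, extra_costs=0)
print(savings)  # 10000000

# The deal only turns bad once the hidden costs exceed that line --
# and those costs rarely show up on the spreadsheet.
print(layoff_break_even(100, 100_000, extra_costs=12_000_000))  # -2000000
```

Which is the commenter's point: the $10M is trivially visible on the balance sheet, while `extra_costs` is a grab bag of things nobody tracks.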
I can even lose some customer revenue and come out ahead. Have you ever dropped a vendor solely because they had a layoff?
If my little made-up story is anywhere close to the truth, it probably shows you why we have the current work culture where employees basically need to switch jobs within 5 years to stay happy and well-compensated.
A lot of people are talking about the problem being that the people measure the wrong things, which I think is fair, but from my perspective sort of misses the point?
I would recommend anyone who hasn't to give a read through "Seeing Like a State" [0].
You have to be incredibly careful when measuring things, it's tremendously easy for your measure to end up being a target in a way that gives you bad outcomes.
Doubly so if you are measuring something as part of a large organisation which may take your measurements as input to some process without any awareness of caveats they may have, or assumptions they were made under. Congratulations, you may now have a feedback loop.
Triply so if someone stands to benefit from your measurement, who will then have an incentive to distort the measure.
This isn't to say measurements aren't tremendously valuable or useful, they're a tool like any other.
I'm going to be even more cynical than you: being charming and easy to work with _to your superiors_ while being a prickly "expert" asshole to everybody "beneath you" seems to be the best way to move up into middle management.
That helps you convince your boss to advocate for you, but if he's ever asked to present hard evidence that justifies your value, he's going to need KPIs and metrics and all that other stuff.
... Also, being charming is entirely orthogonal to that.
The sweet inferno of following a process that takes you from one wiki page to the next, then back again, with no clear checklist of completeness and no way for you to validate; rampant acronyms with no explanation that assume global understanding of all the particularities of obscure systems, as if you had built them, and assume knowledge of poorly defined organizations and processes with unclear responsibilities. Any way to get help is hidden as deep as possible behind tickets that will never get processed and distribution lists that redirect you to channels where people just send you back to the unfathomably bad doc you were following, or redirect you to ticketing systems again.
Onboarding to an organization takes you through four different systems you need to onboard to, without explaining the order, and requires prior entry creation in another system that isn't documented or explained until, eventually, you fail and have to start the process again. It requires you to provide IDs you can't figure out where to pull from. Onboarding to another system requires translations in a JSON file, but then asks you to manually re-enter the same data in another web app by pulling it out of the JSON file by hand. 4 entries × 37 different languages, with no way to automate. Onboarding to system A takes 8 weeks. Onboarding to process B takes 8 weeks from PO approval. You need an IO number before you can start the process, then you need to figure out how to author in the format they expect (they tell you the format but not how to author it). Then they'll ask questions that you, very honestly, need to be a lawyer to answer. You'll ask the lawyer, who's out of office and whose autoresponder tells you to chat with A FUCKING BOT whose job is to redirect you to incomplete documentation, which won't be able to answer your specific question anyhow. Might as well redirect you to a university's home page so you can start your own law degree at this stage. If you miss a checkbox, you start anew and it's 8 weeks. Then you finally get the green light and want to toggle the feature flag on. It must be done progressively, in increasing percentages, environment by environment, with approval at each step from some random team you don't need. Your feature is separate and will target only 3 of your customers? Doesn't matter, you have to follow the same process.
People who are supposed to help you are either too busy dealing with their own administrative nightmares, too incompetent or lazy to do so, or already burnt out and will let you make the effort. They'll be there to tell you you fucked up, don't worry.
You keep getting automated alerts from 5 different systems (I'm not even exaggerating), often for systems that don't belong to you. It takes an average of 1 to 2 hours to understand these tickets, and then you enter the decision tree of 1) finding the rightful owner, making them accept it's theirs, and once in a while even managing to get the thing reassigned, if such a process was ever considered in the first place; 2) realizing it is correct, and trying to figure out what needs to be done through documentation that is as bad as can be, prioritizing it (kidding, it's always top priority and needed to be done yesterday), and figuring out how to mark the thing as complete to get closure; or 3) finding out that the alert is wrong or unjustified and starting to fight the administration again to provide evidence and justification, gather the right approvals on form 22A from your skip-skip-skip-skip-skip-skip level manager, get it certified by the office of cries and complaints, which finally opens the door of the office of furniture and casual offerings on floor -12b, where you'll need to sacrifice a goat under moonlight while standing on your left arm. I spent 3 weeks telling a security "expert" that the alert asking me to add flags to gcc was a false positive, considering that our project is in .NET.
Information comes through 4 different media (mail, chat, some random announcement website, and unrecorded meetings), and it is assumed you saw, know, and will abide by the new policy as soon as a generic communication has been dropped into the sea of information.
Meanwhile you're still expected to do your job, except that your perception of said job is very different from what the organization expects from you. I just got escalated to my partner because I did not fill in the proper tracking fields in a ticket. Except I did. But I dared answer that person directly in the comment she left me within the ticket, while she was expecting me to come to an "office hours" meeting to tell her about it (it's more convenient to have all the people you expect stuff from in a room at once rather than having to deal with individuals, right?). So she didn't see it, and assumed I didn't do it. Disregard that half of these fields are irrelevant to our cases because we don't work on the codebase they're supposed to be tracking. People whose job description it is to help you still will not, because they found a loophole in an organization that doesn't make any sense, and no amount of effort I spend trying to fix that will ever land.
Then there's this random dying system that everyone hates maintaining because it's so old it keeps triggering deprecation alerts, security alerts, and so on. Every change is a refactoring and migration project. It's yours now, we decided. The code has been written by 16 different people who moved on to greener fields, nothing makes sense, 80% of the code is deprecated, but you don't dare delete it because coverage is so low, and the build somehow fails when you remove it. It would only take a month or two to go through the code base and make it sane, but you get denied that "budget" because it doesn't match any of the "OKRs", which solely focus on acquiring more and more paying users. The thing is used by a handful of people, and when ultimately, after months of moaning, you manage to get approval to deprecate the whole thing, zombies rise from their graves and start trying to tank the project on the grounds that they wrote a line of code in it in 1997, and they'll see to it that this shining marvel of engineering outlives them.
Death by a thousand administrative cuts.
What makes me dream? Unity. Complete docs that tell you in one go what you need to do. A single alerting system. Empathy, people who actually listen to you and try to help. Processes that allow the individual to decide whether that applies to them, accountability if you fuck up.
It actually is, because for each step of this process you need to provide your tracking ID and get it double-signed by the IT process improvement manager.
> I spent 3 weeks telling a security "expert" that the alert asking me to add flags to gcc was a false positive, considering that our project is in .net.
Ah, I know this one --- you sacrificed the wrong breed of goat. Try importing an Arapawa next time.
The "spreadsheet mentality" is ignorant of the reality and values of intangibles.
The prime example is bringing the global team together once per year for team meetings and partying together. Such an initiative is priceless, as everyone knows who has ever witnessed one such event. People still talk about such events a decade later, but the accountants see them as waste and cut them.
In fact, giving accountants and lawyers too much power in an organization compared to engineers is one of the major mistakes that leads to the "spreadsheet mentality": as they are non-technical, and many general managers might be, too, organizations tend to not trust their own judgment, so they start bringing in external consultants. Guess what, cutting spending that leads to intangible benefits is the first thing they do, calling it "low hanging fruit" of the "transformation".
Before clicking on the link, I thought it'd be about the "put everything into an Excel spreadsheet" mentality, no matter how insane, redundant, and inefficient it is…
Ugh, how I hate receiving screenshots in Excel/Word/what have you… and don't even get me started on stale information in multi-megabyte Excel files on some corporate network share…
Everything can be measured, you just need to try harder to measure it. And make more spreadsheets. Spreadsheets are awesome. The problem is never "too many spreadsheets" the problem is always "not enough spreadsheets" and "spreadsheets not good enough".
In theory, yes, everything can be measured but start measuring it and others will also start gaming it. Measurements are always just an approximation, even the good ones measuring one aspect of what you want but losing lots of context along the way.
No one has figured out how to really effectively measure developer productivity for example and the biggest companies in the world like Microsoft and Amazon don’t even try. I shudder to think that the answer is just “more spreadsheets”. If you can solve it go ahead, you’ll get rich.
I just realized that if you have js disabled Medium does the worst thing - it loads about a tweet worth of the article and all the side advertising but gives no indication that there is more article.
Text takes nearly no bandwidth - why would you ever do this.
Goodhart's law is an adage often stated as, "When a measure becomes a target, it ceases to be a good measure".
Like other posters have said, metrics are useful and convenient, but they are NOT the focus of the business. The business defines goals, defines measures to help support the goals, and re-evaluates as needed.
This reminds me of the ongoing push to eliminate testing in school, based on the belief that one can master a subject while being unable to answer questions about it.
I think that's a strawman; AIUI the argument is rather that being able to answer questions about a subject appears to be a poor way to ensure that one has mastered it, and that excessively focusing on tests appears to undermine actual learning. (And now I must point out that I disagree with both claims and personally think it's more like "bad tests are bad, but that doesn't make all tests bad".)
The thing to understand about corporate capitalism is that executives don't work for companies; companies work for executives. The bosses want brag points so they can justify promotions, and that's all they care about. Subjective achievements don't matter, from this perspective. The health of the company certainly doesn't matter. Being able to say they grew a team from 40 to 150 people... and then being able to say they "saved $10 million per year" when they cut all those people... those "achievements" burnish their resumes and, therefore, matter.
There's also an element of toxic masculinity in play here. No, I am not saying that all masculinity is toxic (it's not, not at all) and I am certainly not saying men or maleness are toxic (I'm a man, and most people would read me as masculine). But you cannot understand corporate America if you don't understand the toxic masculinity on which it is based. Here it pertains to the division of labor between "male work" and "female work". The nice-to-haves and the subjective improvements are female work; the ugly work (usually, ugly because it hurts other people, as if "making tough decisions" were as onerous as being on the receiving end thereof) that actually drives a P&L is male work. (Does this distinction of labor as male or female map in any intrinsic way to the biological sexes, as opposed to the social construct of gender? I highly doubt it.) What we see in companies is that the work regarded as "male" grabs all the glory and accrues power; the work that can be construed of as the objective domination of others for the company's direct benefit is what gets rewarded and will, in time, be the only work that matters. The nice-to-haves don't; their contribution to the health of the company (or to the health of society, about which executives care even less) is too diffuse to be considered real.
For me the classic example was when all the doctors in the UK stopped letting people make appointments on any future date because they were being judged on how long patients have to wait for an appointment. So instead people had to repeatedly phone up first thing every morning. The war criminal Blair's government was very good at that sort of crap.
Well, your comment would be so much better without the last sentence calling Blair a war criminal.
More to the point: had there been a second metric measuring how many appointments were booked in advance, this way of gaming the metric would have easily been caught.
Individual decision makers (i.e. doctors) have way more information than the metric makers, so they can run rings around them.
If the spreadsheet is going to be ungameable, it basically needs to be an AI that can do the job of the employees smarter than the employee. Or the employees need to be so unmotivated (or have motives so badly aligned) that a dumb drone telling them what to do will be an improvement.
No AI needed, though. If, e.g., the NHS wants to make sure wait times and time-to-appointment improve, you can monitor, e.g., wait times in practices (assuming you check in patients when they arrive) and monitor when appointments are booked versus when they happen... Then, if times are considered too long, you do a proper root cause analysis, check your processes, and adjust expectations.
Just because people are usually unable to do all that, doesn't mean the spreadsheet is at fault or you need AI.
The simplest way to judge whether appointment lead times are good: compare the patients' preferred date against the one they got and the one they actually used. There are so many options to do it right, assuming someone wants to measure things like this.
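That comparison fits in a few lines. A minimal sketch, with invented field names and example data, of the preferred-vs-actual measurement the comment proposes:

```python
# Compare each patient's preferred appointment date with the date they
# actually got. Field names and data are invented for illustration.
from datetime import date
from statistics import median

appointments = [
    {"preferred": date(2024, 3, 1), "actual": date(2024, 3, 1)},
    {"preferred": date(2024, 3, 1), "actual": date(2024, 3, 8)},
    {"preferred": date(2024, 3, 2), "actual": date(2024, 3, 16)},
]

# Slippage in days between what the patient asked for and what they got.
slippage = [(a["actual"] - a["preferred"]).days for a in appointments]

print(median(slippage))  # 7 -- typical delay in days
print(max(slippage))     # 14 -- worst case
print(sum(d == 0 for d in slippage) / len(slippage))  # share served on their preferred day
```

Note that, unlike "days until next free slot", this measure is hard to game by refusing advance bookings: forcing everyone to phone up each morning would show up directly as slippage.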
Andy Grove talks about this in his excellent book “High Output Management”…
Having a KPI show that you doubled your manufacturing throughput is only one side of the story. You may have also doubled your product failure rate, but aren't measuring the latter.
I am not aware of him, or George W. for that matter, being tried in The Hague, though. Not defending what they did by any stretch, but facts matter. Should "we" measure ourselves by the same standards we apply to, say, Milošević? Sure we should. We do not, though; we even happily deal with the Saudis to get gas and oil.
If you actually look at the UNs definition of war crimes, he ABSOLUTELY didn't commit anything that could be called a war crime. So no, I don't agree with your assertion.