What McKinsey got wrong about developer productivity (leaddev.com)
81 points by RHSeeger on Oct 24, 2023 | 116 comments


>the McKinsey framework only measures effort or output, not outcomes and impact, which misses half of the software developer lifecycle.

>“The McKinsey framework will most likely do far more harm than good to organizations – and to the engineering culture at companies. Such damage could take years to undo,” Orosz and Beck argue.

I would go further and say it will likely damage user experience too. Cranking out features to game productivity scores and ignoring outcomes is a good way to get bloat, bugs, cluttered and overly complex UI, and an overall degraded user experience.


I'm very much a hacker myself, but isn't this a little unfair? There is an entire column in the framework described as "Outcome focus" and the items seem to make sense on a first glance.

I'm not saying that the framework is good, but this specific criticism of it seems wrong.


The incentives for software development are somewhat broken.

In the "old" days, there was an infinite amount of basic software to write (operating systems, file editors, compilers, etc.) and very few developers, working on extremely expensive equipment. They had to write software that would just work, features were added based on need, and the idea of a continuous, high-effort software development activity for a piece of software wasn't the rule. Once the job was done, the developers simply got paid and moved on. Updating, patching, etc. was hard to do.

So the incentive was to deliver high-quality working software that did what it needed to do for decades without there being a continuous development effort to keep it "updated" because you only got paid by moving on to new jobs.

Today, the theme is CI/CD, which translates into "figure out how to keep an overstaffed engineering department occupied long after they delivered the working system". This translates into shipping broken, buggy, incomplete, software with the intention to keep the department occupied over the long-term with bug fixes, patches, feature updates and releases, and probably the occasional all-up rewrite. The industry says that the ideal form of this is to not only push releases every so often but to do it continuously in a never ending unbroken stream of work. But what happens when the software really is "done"?

The answer is that we see it all the time. Often in various forms of enshittification (i.e. rent-seeking). Feature reductions, refactoring parts of a perfectly working system stack, changing languages.

The incentive is to vertically integrate rent-seeking, companies are incentivized to seek new monetization sources (I want to say as a way to grow their business, but often it's to keep paying a crushing labor pool), down to developers who aren't incentivized anymore to ship-and-move-on. We also see other bizarre symptoms like taxi companies writing scalable message passing services and geospatial systems, or social networking companies building state of the art AI systems.

So here's an idea, what if the way it should work is thus?

- Developers are paid by the job; they pre-negotiate the price and the payment cadence (monthly, per milestone, etc.)

- Groups of developers could form an Agency if they want to centralize the negotiating and business bits.

- Companies hire developers or Agencies for the job. A certain level of bug fixes are included in the contract. (x-number of months, so many bugs, certain number of hours, etc.)

- Feature updates are new jobs.

- Terms to include: "so long as we continue to use the software on our systems, the developers will get paid residuals." Modeled on how other Agency-based industries like acting or music continue to pay out for use of the work.

- Good developers can end up on a big passive income stream, and could either continue working in the industry, or perhaps start spending part of or all of their time working on open-source projects that could provide reuse across projects within their industry.

- The best developers could go into a company, negotiate a nice contract, and the job could mostly be to yank the code out of a repo, compile, configure, and run -- then collect passive income for the next few years as the company gets value out of that simple exercise.

It's not the effort of the developer that brings value to the company, it's the value of the solution, so why doesn't the industry incentivize that?


> Today, the theme is CI/CD, which translates into "figure out how to keep an overstaffed engineering department occupied long after they delivered the working system". This translates into shipping broken, buggy, incomplete, software with the intention to keep the department occupied over the long-term with bug fixes, patches, feature updates and releases, and probably the occasional all-up rewrite

You are looking at it from the wrong end. This model is preferred and delivers better results because nobody was able to say what the "complete" software should look like. So it's better to deliver an MVP and ask the users what they want to change, rather than develop something for years only to discover that it's a very bad fit.


> nobody was able to say what the "complete" software should look like

We had "complete" software for decades. You can use an iterative development model until all of the major goals are achieved then ship a final product. After that it's just maintenance.

Iterative development has consumed the ship and maintenance parts of product delivery to the point that the industry no longer even can recognize or remember that such a thing can exist.

Enshittification is the result because if you keep going in and messing with a shipped product, the temptation to try to squeeze it for revenue when its market maxes out becomes irresistible. Modern development practices like CI/CD were invented to enable this behavior and really only build value during the earlier core development stages. After that it becomes an extractive process.

It's kind of like how when colonial powers were building railroads in India, or Korea, or wherever. The intention was to use the rail lines to extract resources from the colonized territory, but the side effect is that the local people in the country also suddenly had improved transportation infrastructure and oh yeah, jobs. For a while the people who work on or near the rail line "prosper" but nobody remembers that the goal was to suck all of the lifeblood out of the countryside and move it to the ports. Once the line is complete the model flips to extraction and the development activity ceases, and everybody who had previously prospered because they could use the rail line to get jobs, education and health care wonders why life sucks now.


> We had "complete" software for decades. You can use an iterative development model until all of the major goals are achieved then ship a final product. After that it's just maintenance.

Well, or it's not; it depends on the industry, but most likely there will be new features etc. You need to be able to ship security patches and bug fixes continuously - i.e. without manual user action, because most users simply don't have the mental capacity to upgrade software. Hence continuous delivery.


We had "complete" software in the same sense that we had "complete" technology when we discovered fire and caves. There is always something that can be improved or optimized further, although it may require rethinking the problem.


"then" or "than"? Both sentences plausible depending on perspective.


than sorry


> Today, the theme is CI/CD, which translates into "figure out how to keep an overstaffed engineering department occupied long after they delivered the working system". [...] with the intention to keep the department occupied

I disagree on this causal part: That malaise does not exist because Machiavellian engineers created a job-security program stealthy enough to persistently escape action from bean-counting executives.

Instead, it's caused because other business stakeholders are paying a premium in order to avoid deeper commitment of some sort.

A few examples:

* They want to avoid the risk of a big innovative product that may be a similarly-big flop.

* They can't bear to drop any customers by streamlining the product, always adding bells and whistles to appeal to the potential client du jour.

* They are unable/unwilling to spend/risk political capital in substantially altering how their own corporate structure operates.

* The company's leadership isn't capable of making different business-units work in harmony so the software becomes a slow battleground/mediator of their competing interests.


I actually don't disagree with you on any point. But I do think the relationship can become an unhealthy codependency that causes organizations and developers to lack the discipline that would be more ideal.


You are definitely reinventing something that already exists: Consulting. I worked as a consultant on projects in a model similar to the one you describe in the early 2000's.

This does work. It just requires a sales staff. And usually a 'bench' (the ability to pay developers in-between jobs).

And to be clear: it's only one way of approaching the current set of problems caused by well-intentioned management consultants.

FYI: The cult-classic movie "Office Space" actually dealt with what the fallout from management consultants looks like (specifically with software development).


Do you continue to get residuals long after you've left a job?

In most agency-based industries, the workers don't get paid between gigs, only during. Good workers hunt around for good agents, and they can make massive amounts of money per job with the agents getting a handsome cut for their job working as sales staff.


I think the second that you need to do maintenance on the system, you've nullified the "as long as we continue to use the system" term. There is a strong argument to be made that it's a fundamentally different system after you make changes.


There already are contractors, who at least in my company are used to build new systems and bring in new ideas. So they and the interns do all the groundbreaking new work, while us regulars end up maintaining and extending legacy systems.


that sounds horrible, maintenance is the worst part of software dev


It's not the most exciting thing out there, but the hours are good and the pay covers the bills. In these times it's also a rather safe place to be.

It could be more interesting, but then again, it's just work.


Why do you say that?


I think it depends a lot on the developer. But it _also_ depends on the company. "The work that isn't appreciated" is commonly the worst type of work; and maintenance work falls into the bucket in a lot of companies.


people are frequently quite bad at writing code to be read, so actually understanding what's going on can be a mess.


> CI/CD, which translates into "figure out how to keep an overstaffed engineering department occupied long after they delivered the working system".

CI is only about what happens before software is delivered.

And I don't see what CD has to do with staff size.

I think you're pointing out valid problems and misattributing their causes.


Making an example up that I don't know to be true, but please just go with it:

Google likely has a dedicated team who do nothing but tweak the front page of www.google.com

I bet there's a Kanban-like board somewhere within that team with tickets like "change the button for image search to new branding icon" and "move the "About" button to the left-most upper corner from the right-most upper corner." "Make the corners on the search box more round" or whatever.

Looking at the page now, somebody definitely had to generate work, and some poor front-end developer definitely had to perform the work to put up the "Our third decade of climate action: join us" banner.

Every couple of years the team will take on some kind of task like moving the front-page to a new internal framework, or connecting it to some new search/advertising API. Remember when google would show real-time search results as you typed?

"Let's put a Pacman game as the logo!"

Here's Google's front page from 10 years ago https://web.archive.org/web/20130101000532/http://www.google...

Somebody is making those changes, they exist. The decision to CD that page keeps them around.

However, there is effectively almost no perceptible difference in the program performance, features, capabilities, or really even the aesthetics of that page in that time period to the user.

It should be in maintenance mode, with checking in on it being a part-time job for a single person. I bet there's more than a dozen people whose sole job it is to keep that page "fresh".

That's how CD results in inflated team sizes -- it's the "C". CI/CD never makes it over the wall into CM (continuous maintenance) and as a result becomes it.


Well, looking into the bookish definition of CI/CD will not help here.

In the real world, CI/CD pipelines regularly show delivery/product managers how fast features are churned out by the team to be presented to the business. A 10K-ft view of CI/CD is often used by group leads and executives to show what's being achieved, even at the company level.


CI/CD is making sure your code is always in a production-ready state, validating it with tests and other tools automatically.


Okay. So?

JIRA is supposed to track work that is to be done. Now JIRA is used to generate work where none exists, because if there are a hundred JIRA tickets resolved this week, there must have been a lot of work done.


You make valid observations but you conflate issues. Continuous development is not a problem per se; overdevelopment is. Companies need to be able to say when something is done, put on maintenance mode, and move on to something else. One problem that you did not mention is that business incentives can run counter to product demands; ticking more checkboxes just sells better with certain kinds of buyers. Many companies have not sown the seeds for new products to shift resources to; one trick ponies are the order of the day. So that would mean shedding staff, which looks bad for the company, and is bad for the workers' jobs :)


I don't really disagree with any of your points either.

> one trick ponies are the order of the day. So that would mean shedding staff, which looks bad

What I think I'm arguing is that most companies really don't need dedicated full-time staff in that way. Agency models work well in many industries that are built around one-trick ponies, which I'm arguing is what most development really is.

In those industries there really isn't a negative connotation to ending a project, shipping it, saying goodbye and thanks for the hard work to the team who did it. If they did a good job and the activity was successful, many of them will likely be hired back for the next job anyways.


Great points here.

With the exception of perhaps the top end of successful commercial products, or some usable patents during the development of a technology, most developers do not get any passive income no matter how much good work is delivered over decades. Everyone is just about as good as their next JIRA task completion. This all just means half-assed, broken, bloated feature deliveries which are to be fixed in the next scrum or whatever.


McKinsey's whole business model seems to be hiring smart, inexperienced people to go consult with experienced people who know a lot more than they do, so I'm not surprised that they yet again got something wrong.


Their whole business model is taking smart, malignant, experienced people and convincing senior leadership to outsource responsibility for something crucial but obscure (from the leadership's perspective) to them, then installing a lot of smart, well-spoken but inexperienced people into the organization in a giant bait-and-switch, with the authority to drive strategy but the inability to craft a useful strategy. Over time they work to pit leaders against each other to drive a wedge they can fill with more people. After enough of this they can create a permanent dependency on McKinsey consultants, until the board gets sick of the ineffectual leadership that's been divided and conquered by McKinsey and hires McKinsey to devise a turnaround, recommend new leadership, etc., as the once-proud enterprise slowly devolves into a McKinsey zombie driven by inane process garbage rather than a vision. Somewhere along the way a private equity firm gets brought in by McKinsey and the transformation is complete.



You'll enjoy this new Last Week Tonight episode on McKinsey: https://www.youtube.com/watch?v=AiOUojVd6xQ


I think Good Work may have profiled the space better https://www.youtube.com/watch?v=vZE0j_WCRvI


And it is worth pointing out there is _some_ value to that. It gives a chance for implicit assumptions about how things work to be exposed, and allows a conscious choice about whether to keep them.

Now whether McKinsey is actually delivering on that idea, or just exposing the implicit assumptions of "don't be an asshole and don't do things that are harmful to everybody else" and encouraging dropping them, is a problem.



Reputation laundering


I used to do this kind of work, though not with McKinsey. When you do it a lot you realize how hard it is and why. Whatever you want to measure, someone will claim it doesn't capture this and that, or will definitely be gamed and destroy the company. A lot like this article and discussion, really. Many times picking metrics is the easiest part compared to getting the data, calculating the time series, setting targets, and managing under- and out-performance. It's all kinda inevitable... so my advice is always to not focus too much on the metrics and to focus on managing instead.

- Measure everything your systems allow you to; maybe tweak some workflows, but don't add complexity or cost just for metrics.

- Separate metrics into stuff that needs to stay the same (mostly operational efficiency metrics) and stuff that needs to change (strategic metrics) and spend more time on the latter.

- Don't compensate or reward people based on them, just look at them a lot and if you see something that looks weird then talk to your managers about why, and if they can't answer or won't take action then that's all you need to know to reward them.

- If you want to use metrics to create competition just publish a leaderboard and let peer pressure do the rest. People will work hard to be at the top of a public list even if there is no explicit reward for it.

Just be a manager, manage the people, and use metrics that help you manage the people. You won't sell many consulting engagements that way tho.
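The "publish a leaderboard" suggestion above is simple enough to sketch in a few lines. This is a toy illustration only; the metric names and numbers are invented, not taken from any real system:

```python
# Toy sketch of "publish a leaderboard and let peer pressure do the rest".
# Metric names and values below are made up for illustration.

def leaderboard(metrics: dict[str, dict[str, float]], key: str) -> list[tuple[str, float]]:
    """Rank people by a single metric, highest first."""
    ranked = [(person, vals.get(key, 0)) for person, vals in metrics.items()]
    ranked.sort(key=lambda pair: pair[1], reverse=True)
    return ranked

team = {
    "alice": {"reviews_done": 14, "incidents_resolved": 3},
    "bob":   {"reviews_done": 9,  "incidents_resolved": 7},
    "cara":  {"reviews_done": 11, "incidents_resolved": 5},
}

for rank, (person, score) in enumerate(leaderboard(team, "reviews_done"), start=1):
    print(f"{rank}. {person}: {score}")
```

Note that which metric you rank by is itself a managerial choice: the "reviews_done" board and the "incidents_resolved" board put different people on top, which is exactly the gaming risk the parent comments discuss.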


Another article missing the more fundamental point:

You can't measure any productivity that has any cognitive component.

Devs think it's ok when sales productivity is measured like this, and then they don't understand that this is the reason sales are selling features that don't exist. We need to all stand against this madness. This goes for teachers, sales people, programmers, doctors, everything.


> You can't measure any productivity that has any cognitive component.

So you're saying that it's impossible to measure developer productivity at all, since it's cognitive work? Strong disagree.

All of us here that work with other developers know that some of their peers get more done than others. Or conversely, that there's a few in the team who don't seem to get much done. The minute you make even that simple assessment, you've measured productivity.

One of the jobs of eng managers is to then make this measurement as accurately, realistically, and fairly as possible. This guides bonuses and raises, which should reflect people's contributions.

I'm not defending McKinsey's proposal here, but I agree with the general notion that it's possible to measure productivity and this is something management should always be doing.


But not very precisely. I know who is a lot better than others and who are a lot worse, but the rest are all in the pretty large group that is somewhere in the middle.

Especially because there are also skill differences and the differences in job title and salary that come with that. So the level of difficulty of the work people get to do is also different, and so on.


Correct. It's brutally hard to measure with precision, in part because the work we do is so multi-dimensional. But it's not too difficult to divide people into at least three groups, of low, high, everyone-else.


Yet I've seen plenty of companies that get that completely wrong. They will fire the most productive member of the team without realising what they have done.

The problem is people realise they can gain as much from playing political games as they can from improving productivity. Once that starts everything goes to shit.


Peer review isn't measuring in this context. And often the value is hard to understand. I thought one guy on my team was clearly weaker than the rest... until I realized what he did that no one else did, which, it turns out, freed up the rest of the team to do the stuff we were great at.

"Better" isn't linear. It can multiply, and it can be highly dependent on the rest of the team.


> until I realized what he did that no one else did, which turns out freed up the rest of the team

Can you elaborate?


Even if you couldn't measure productivity exactly, it would be astonishing if you couldn't at least slap some kind of statistical distribution shape on it based off qualitative aspects of the work. Mathematicians have done it before.
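One way to read this suggestion (and the earlier "low, high, everyone-else" split) is: score the work qualitatively, then put a rough distribution over the scores and flag only the outliers. A toy sketch, with entirely invented ratings and only the standard library:

```python
# Toy sketch: turn qualitative review scores into a rough distribution,
# then bucket people into low / middle / high bands. All numbers invented.
import statistics

scores = [3.1, 4.0, 3.6, 2.2, 3.8, 3.5, 5.9, 3.3, 3.7, 1.4]  # per-person ratings

mu = statistics.mean(scores)      # sample mean of the ratings
sigma = statistics.stdev(scores)  # sample standard deviation

def band(score: float) -> str:
    """Crude three-way split: more than one stdev from the mean is an outlier."""
    if score < mu - sigma:
        return "low"
    if score > mu + sigma:
        return "high"
    return "middle"

for s in scores:
    print(s, band(s))
```

This deliberately avoids precision: most people land in "middle", which matches the observation upthread that only the extremes are easy to identify.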


It doesn't really work though. It's like measuring soccer based on goals or passes or some crazy shit. That's not how this works at all.


You’re saying there’s no conceivable way to discern which soccer players are better than others??

I don’t understand your position. Is it that all developers are equally productive, or that they do vary in productivity but it is immeasurable, an æther, with literally zero signals one could possibly mine to see how valuable a developer is to the team?


Team value is about bringing unique skills that can be a force multiplier. Measuring individual performance is antithetical to this very idea.

Soccer is a good metaphor because it's EASY to measure/see the value a good player who does not score a lot of goals provides. In programming this is not the case.

It's certainly the case that managers who want hard numbers and don't know how to code won't be able to understand the situation at all. This is why this McKinsey nonsense is so dangerous.


Or like modeling basketball games as random walks?

https://arxiv.org/abs/1109.2825
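The linked paper models scoring as a random walk. Purely as an illustration of the idea (this is not the paper's actual model), a game's score differential can be simulated as a biased walk over scoring events:

```python
# Toy random-walk model of a game's score differential, loosely in the
# spirit of the linked paper (an illustration, not the paper's model).
import random

def simulate_game(events: int = 100, p_home: float = 0.5, seed: int = 0) -> int:
    """Each scoring event goes to the home team with probability p_home.
    Returns the final home-minus-away differential (2 points per event)."""
    rng = random.Random(seed)
    diff = 0
    for _ in range(events):
        diff += 2 if rng.random() < p_home else -2
    return diff

# With p_home = 0.5 the differential is an unbiased walk, so the average
# margin over many simulated games should hover near zero.
margins = [simulate_game(seed=s) for s in range(1000)]
print(sum(margins) / len(margins))
```

Even this crude model shows why single-game margins are noisy estimates of team quality, which is roughly the analogy being drawn to measuring individual developers.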


> You can't measure any productivity that has any cognitive component

Understanding and measuring productivity can be valuable to a business. Sure, they can't measure productivity for some roles as precisely, simply, and accurately as, say, counting the number of jelly beans in a jar.

But that doesn't mean productivity can't be measured in a way that is useful to the business. There's just tradeoffs involved.


Anecdata, but: in my almost ten years of software engineering, I've never once seen an organization deploy developer-productivity metrics and do anything useful with them, except bludgeon developers that management already doesn't like with the number that came out of The Machine ©


Obligatory:

"Y.T.’s mom pulls up the new memo, checks the time, and starts reading it. The estimated reading time is 15.62 minutes. Later, when Marietta does her end-of-day statistical roundup, sitting in her private office at 9:00pm, she will see the name of each employee and next to it, the amount of time spent reading this memo, and her reaction, based on the time spent, will go something like this: Less than 10 min.: Time for an employee conference and possible attitude counseling.

10-14 min.: Keep an eye on this employee; may be developing slipshod attitude.

14-15.61 min.: Employee is an efficient worker, may sometimes miss important details.

Exactly 15.62 min.: Smartass. Needs attitude counseling.

15.63-16 min.: Asswipe. Not to be trusted.

16-18 min.: Employee is a methodical worker, may sometimes get hung up on minor details.

More than 18 min.: Check the security videotape, see just what this employee was up to (e.g., possible unauthorized restroom break).

Y.T.’s mom decides to spend between fourteen and fifteen minutes reading the memo. It’s better for younger workers to spend too long, to show that they’re careful, not cocky. It’s better for older workers to go a little fast, to show good management potential. She’s pushing forty. She scans through the memo, hitting the Page Down button at reasonably regular intervals, occasionally paging back up to pretend to reread some earlier section. The computer is going to notice all this. It approves of rereading. It’s a small thing, but over a decade or so this stuff really shows up on your work-habits summary."

--Neal Stephenson, Snow Crash


Thank you for introducing me to this!


In my 15 years of experience in software engineering, I've never found it either. I have, however, wasted 1000s of hours playing with poker cards and Fibonacci numbers instead of actually working. But I get paid for this, so, I guess: yay, end-stage corpo capitalism!


I think this is a common misinterpretation that many people fall prey to. “It’s wasting my time so it must be bad.”

Your employers aren’t optimizing for your personal productivity and happiness. They don’t mind if you waste 1000s of hours with poker cards and Fibonacci numbers if their business goals are met. This can mean making one person upset and less productive if it makes progress towards a bigger goal.


How is blindly emulating some practices that you just read about in a Scrum guide gonna contribute to the business goals of the company? The perversion of the industry is that sleazy manager/business/coaching/scrum types weasel their way into software engineering and somehow want respect, attention, and people following what they prescribe as "THE WAY".

Imagine if the tables were turned. Developers present in all sales meetings and asking each individual sales person how much they think the call to this client is going to take in terms of story points. And how about we also go to HR and start interrogating them how many story points it takes to give somebody their office key? Busywork might be fun, but companies can do with much less, and achieve the same, or more than they do now.

After 15 years in the industry, having been an IC, a manager, and currently a VP of Engineering, I can confidently state that the best strategy one can take to achieve their business goals is to leave the software engineering to the software engineers and step the heck out of their way.

Wasting the time of people actually doing the work by arbitrary rituals you read about in a blog somewhere is not a good strategy for success. If we must go bother someone and treat them like children and force them to play cards, let's do it with another department for a change.


> Imagine if the tables were turned. Developers present in all sales meetings and asking each individual sales person how much they think the call to this client is going to take in terms of story points. And how about we also go to HR and start interrogating them how many story points it takes to give somebody their office key? Busywork might be fun, but companies can do with much less, and achieve the same, or more than they do now.

I think we should do this! People should be able to explain their processes and where their effort goes with a decent amount of accuracy.

> Busywork might be fun, but companies can do with much less, and achieve the same, or more than they do now.

A dangerous assumption to apply to all companies. Also the trick is deciding what "much less" looks like. Better hope you pick the right things.

I'm not saying it can't be done. It's just hard. And in some cases, you'd rather just be 10% less productive (but still productive) by following a process you hate.


I like you.


But it doesn’t make progress towards the bigger goal. That’s their whole point.


All those companies failed to achieve their goals and went out of business?

Just because you don’t see the progress and don’t like the process doesn’t mean progress isn’t being made.


In business it's not about achieving your goals fastest, it's about not being the slowest. Just like running from a lion. This is why these businesses survive.

This is a common misunderstanding about market economies vs communism, btw. In market economies there's a floor to the incompetence; in communism there is none. The ceiling isn't really different, but since there's a floor in one, you will naturally get a higher average in market economies. Pro-market people get this wrong all the time.


It's not that "devs think it's okay" - it's that we accept that sales thinks it's okay and we don't have the experience or judgement to say otherwise unless we've been in a sales org.

The fundamental point I think everyone misses is that as organizations get larger, the network cost of running is higher and nobody has solved that problem. Smaller problems are more quickly solvable and no amount of story points will change that.


> You can't measure any productivity that has any cognitive component.

Are you able to elaborate on why this is true please?


How many units of thinking did you perform today? How do you charge, or how are you paid, per thinking unit?


Obviously not.

The goal is to measure people's results. It might occasionally take a relatively-long time to find a seemingly-simple solution. But if an engineer /always/ takes a long time to find every solution, and upon inspection the problems were not actually difficult, then you most likely have a low-productivity engineer on your hands.


That sounds really subjective


Somewhat, but not entirely so. Managers can and should read git logs and bug tickets and design docs, and understand the architecture enough to have a reasonable sense of what the work entails. And managers should be engineers themselves and know that sometimes simple-looking work is actually very tricky… but at the same time it’s very unlikely that /every/ task looks like this.


That's not a good answer. We're not talking about thinking, we're talking about things with a cognitive component. Writing software has such a component and it's possible to tell if any software has been written (it's even sometimes possible to tell if it works or not).


> You can't measure any productivity that has any cognitive component.


I'm not sure what you're saying...


The irony of your joke took me a minute. Bravo.


How much profit did that thinking create at the end of the year (or whatever the horizon is)?


I never understood how I'm supposed to know that.

The company as a whole makes a profit, how do you know what it would have been if feature X wasn't implemented? And ten people had a role in deciding to implement it, four in implementing it.

How can I possibly put a number on my contribution?


Yup, it's a more or less worthless conversation, but one that many knowledge workers end up in.

I remember once being in a similar conversation regarding a training program I was working on. The customer was trying to assess the value of the training not by the improved performance of the employees, but by some measurable "knowledge unit" that had been transferred to the student (regardless of their ability to retain it).

It was beyond frustrating.


If you think your contributions to the economics of a business are impossible to understand, then you are in a dangerous position (your admittedly bad experience notwithstanding).


What I'm saying is that measuring output for knowledge workers in terms of some kind of unit other than business impact is not a great idea.


You can at least start from the top: the overall thing you are working on, how much did that make? Are features you worked on a contributing reason for a sale, and how much extra did they bring? Are you directly linked to some sales because you provided something specific? If you are doing internal software, it could be cost reductions associated with that, or also additional business.

Your number won't be exact, but it doesn't have to be, and double counting among the other people isn't always bad.


I make frontends for our products (that we sell to governments), but they're not what drives sales. Customers buy our product for the features of the backend; they assume there is some frontend. If I do good work users are happy, but they don't make the buying decisions either.

That I work on the front ends was only partly my choice, we all move around depending on where we need people at that time, and other people decide what needs to be implemented.

If we were to fire all developers, we would probably lose hardly any sales the first year. But over time, licenses would go down more and more. So part of this year's profit is due to the work of people who left us years ago.

I don't see any point in claiming some specific part of our profit. People would laugh at me, probably.


You don't need to claim a number, but you can at least associate with it. Even tracking happy users is a measure of sorts. It isn't an exact science and it doesn't need to be. And, yes, that isn't always fair to those gone from the company.


What if, as is usually the case, it created nothing until a whole team built the rest of the product that fits with my piece? What is the value provided by the front-end of a web app separately from the back-end?


You can double count to some extent, and it doesn't have to be in exact proportion, but you worked on something that in the end had an economic impact.


The "solution" to a problem can be to totally redefine it. Another guy might redefine it EVEN MORE. So you get three people:

1. Solved the original problem: took 2 days and thousands of lines of code

2. Changed the problem: took 20 minutes and 50 lines of code

3. Changed the problem in another way: took 2 days and thousands of lines, but this solution solved TEN problems

Now, which one is "better"? It makes no sense.


In all your examples the solution came in 1-2 days. How about this common example? One guy merges one small change every week, of low complexity/difficulty by any measure. His teammate fixes several bugs every week, adds a feature or two, and is on Slack non-stop helping others out?

This is daily life in every tech company. It’s true that in some cases it’s hard to accurately rank between two people (because the work is multidimensional) but in many many cases the discrepancies in productivity are obvious.


No, case 2 took 20 minutes.


I mean you can.

We take exams at school for basically the same reason.

You have to set up an institution or structured lessons to get the higher levels of cognitive measurement.

A company can use an educated guess for cognitive ability. Usually made up from completed tasks.

Selling features that don't exist is a side-effect of the company culture.

It's easy to observe that business culture in the West is often well divorced from academic truth-seeking, as a matter of fact and of good business.


Tests in school don't require creative problem solving. We can know this to be true, because if it did then checking the results would require it as well, which would mean we'd need 10x the number of teachers.


> We can know this to be true, because if it did then checking the results would require it as well

This argument is blatantly false. (I mean the argument not the claim it's arguing for)

As just one counterexample, writing a mathematical proof often requires a lot of creative problem solving (it's not done a lot in school for this very reason). However, checking that a proof is correct is a rote task, even automatable (see proof assistants).
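As a minimal illustration of that asymmetry: finding the right argument can take thought, but once a proof term is written down, the Lean 4 kernel verifies it mechanically (this sketch uses the standard library lemma `Nat.add_comm`).

```lean
-- Discovering which lemma applies is the creative part;
-- the kernel's check of the finished proof is purely mechanical.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```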


> Tests in school don't require creative problem solving

I’m not sure you can make such a broad statement.


Lots of delusion and gaslighting in this area.


> the McKinsey framework only measures effort or output, not outcomes and impact

Measuring outcome and impact gets messy when the developers aren’t the ones deciding what to work on.

I had a VP of Engineering who was all about measuring impact and ignoring everything else. He was also adamant that product people decided what we work on, while engineers decided how it was implemented.

This created an environment where your fate was determined almost entirely by what product team you got assigned to. If your product managers wanted you to implement a feature nobody wanted, it didn’t matter how well you did. You weren’t going to be able to show positive impact if nobody used it.

Impact is only a good measure when you let people make their own decisions.


Ugh. A quintessential engineering mind is always curious, critiquing and questioning everything, and will always want to know the why of the what and ask why not an alternate what. Such a mind is motivated by a higher purpose – finding meaning in the larger impact of their work. Only such minds can be creative and persistent enough to solve the hardest problems and create outstanding product experiences. All it takes to destroy such a mindset is to tell them to shut up, not to ask so many questions, and to do as they are told.


I'm still mystified why companies hire a firm like McKinsey - they have no actual experience, but are telling people with experience what they should do.


For example, they advise executives to do what they already want to do (layoffs and increased executive compensation) and thus justify that decision (blame outsourcing).

A more generous take: when a business or project is stagnating due to bad management and communication, almost any advice that leads to change improves the situation. Especially when the situation is due to some "principal actor" type issues, introducing more actors with different incentives can rebalance things in the short term.


The most common one is that leadership wants to make a change but some people won't like it so it's easier to have a consulting firm work out the details, make the recommendation and then leave. Even when the outcome is predetermined, the details still matter, and McKinsey adds credibility.

Also, you can obtain more information about what your competitors are doing in that area, not like illegal levels of information, just, more than you have without them.

Make no mistake, these are all valuable services that large companies will pay a lot of money for.


The obvious point of comparison is security consultants. A security consultant knows a lot less about your product than your engineers do, but that's precisely why they're helpful - they bring an outside, generalist perspective that isn't bogged down by all the arguments you've accreted over the years about why X component has to be structured in Y way and there's no need for a security boundary around Z.


Probably because management can CYA by claiming to have done due diligence to their higher ups. "Well, we consulted McKinsey and took their advice, so if they got it wrong, then the industry is getting it wrong."

(These are the same guys that tell countries with successful agricultural sectors to embrace corporate farming. Cui bono?)


I had a manager once who reminded me that “no one ever got fired for following McKinsey’s advice”


For certain problems, you're paying for people who have looked at the same problem many times and speak to industry leaders frequently; the junior people drawing slides etc. are not why you'd hire them.


Don't forget "intel on if/how your competitors have solved them"


McKinsey's product is McKinsey services.

Imagine all the marketing, networking, and greasing of the wheels of business that goes into selling this, and you'll more readily appreciate why McKinsey keeps appearing in spots they have no business being involved in. Their product isn't a good or sound outcome; it's paying the fee to be able to append "McKinsey said..." onto a decision.


Same reason kings hired bishops.


Metrics are useful until you use them. If people know the metric, they will game it. It's better to track metrics to see symptoms of what is going on and then "walk the floor" to figure out why.



We are talking about the people who:

- were an instrumental part in creating Big Tobacco's strategy of "distract and create doubt, then delay, delay delay"

- applied the same playbook with Big Oil

- helped create the opioid epidemic

- give advice to multiple oppressive regimes (e.g Saudi/Russia)

Seems legit, let me take my engineering advice from these guys, they must know what they are talking about...


> And, in a world where agile methods have long taught us that you can’t improve what you can’t measure.

Edwards Deming, who in terms of process optimization is (in my carefully considered opinion) head and shoulders above everyone I've ever encountered pushing related ideas in software development, challenges the idea of "you can't manage what you can't measure". His belief was that it's far more important to understand the system in which a worker operates, after which you can absolutely make improvements affecting global output without a local measure.


Can you give a concrete example or elaborate a bit more?


Here’s the link the article is referencing, if you want to read the source https://www.mckinsey.com/industries/technology-media-and-tel...


Nobody should be surprised that management consultancies measure and manage the wrong things. That's the story of the last half a century. I've managed to avoid them for most of my career but if you ever want an inefficient, dehumanising work environment, go anywhere where the CTO listens to McKinsey and co.


> McKinsey argues that the objective of productivity measures is to incentivize developers to code more. This diminishes the idea that developers are creative workers.

As for the last sentence, they buried the lede.

And let's not forget, the incentive for McKinsey is to develop a product they can sell. How good that product is and its "unintended consequences" are a distant second. McKinsey needs something billable, something upper management and leadership will buy into. Rarely is that the soft metrics. Many of us know what a lack of appreciation for the soft metrics feels like as an employee. (Hint: it sucks.)


I put together some notes on applications for Redis 13 years ago which are still surprisingly relevant today: https://static.simonwillison.net/static/2010/redis-tutorial/

I think that speaks to how extremely well designed Redis was from the very start - it's been remarkably stable over time, and those original ideas are still very much applicable 13 years later.


Oops, this was meant to be a comment on another thread.


John Oliver published an episode about them recently. https://www.youtube.com/watch?v=AiOUojVd6xQ


is there more from McKinsey than this post[0] ?

(I did read a recent 40-page glossy PDF on effects of AI on job sectors. I thought it was slightly over-confident but done well enough)

[0] https://www.mckinsey.com/industries/technology-media-and-tel...


Don’t measure productivity. Measure ROI (Return on Investment). You are running a business after all. Cashflow is king!

Any productivity measure that isn’t derived from ROI is 100% guaranteed organic grass fed BS.

Increasing “productivity” can actually reduce global ROI.

I highly recommend looking into the Theory of Constraints.
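A toy sketch of the Theory of Constraints point above (the stage names and rates are hypothetical): overall throughput is set by the bottleneck, so making a non-bottleneck stage "more productive" changes nothing globally, while a modest gain at the bottleneck lifts the whole system.

```python
# Pipeline throughput is limited by its slowest stage (the bottleneck).

def throughput(stage_rates):
    """Items/day the whole pipeline can ship: the slowest stage wins."""
    return min(stage_rates)

# Hypothetical delivery pipeline, in items per day; review is the bottleneck.
rates = {"design": 10, "build": 8, "review": 3, "deploy": 12}

baseline = throughput(rates.values())  # limited by review: 3/day

# Double the build team's "productivity" (a local, non-bottleneck gain)...
rates["build"] = 16
after_local_gain = throughput(rates.values())  # still 3/day

# ...versus a modest improvement at the bottleneck itself.
rates["review"] = 5
after_bottleneck_gain = throughput(rates.values())  # now 5/day
```

The local gain at "build" looks great on a per-team productivity dashboard while contributing nothing to what the business ships.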


Having worked with McKinsey people, they will forever be Lyle Lanley for me... https://yewtu.be/watch?v=ZDOI0cq6GZM




