Meritocracy is actually a very damaging thing, the way we tend to implement it in tech. Without concrete and public metrics of "merit", it becomes a buzzword for reinforcing the biases (conscious or not) of the evaluators. Particularly in tech, those biases tend to favor white, upper-middle-class males.
This kind of bias is demonstrated over and over in studies, even (and in some cases, especially) among people who are highly educated and even forewarned about the study objectives.
> Without concrete and public metrics of "merit", it becomes a buzzword for reinforcing the biases (conscious or not) of the evaluators.
I wish we could have discussions like this in the wider community without people going knee-jerk against the idea of it, itself.
I'd be willing to accept that a lot of companies here are nepotistic. I'd even be willing to accept that they cloak their nepotism in the rhetoric of meritocracy. But I have to draw the line at people opposing the idea itself. I have a hard time understanding how anyone could even hold that position. Don't you want the best people, at least in principle?
If people were more nuanced in these things we could hold discussions like "yes, this is a great ideal, but it gets corrupted. The problem is the corruption, not the ideal"
Your comment reminds me of a story I heard on NPR a while back, about an effort to reclaim the word "jihad". Basically, jihad is a concept of struggling or striving for something worthwhile but most Americans only hear the word coming from the mouths of horrible people.
I think the word meritocracy is in a similar situation. It's an interesting and useful concept, but the word tends to get thrown around by people you probably don't want to get associated with or confused with, so if you want to be heard and understood maybe try a different word.
> Don't you want the best people, at least in principle?
Maybe not. In many situations you want the best team, and the best team is not necessarily the team that has the most top flight individual contributors.
The best teams I've been on seem stronger than the sum of their individual members, and I've definitely been on teams I rate less highly that had some very strong individual contributors.
The best person for the job is the person who will make the team perform at its best. Which in IT would probably mean a technically savvy, creative person with good social skills and some domain knowledge. Those would be the merits upon which to build our meritocracy.
So you can be a productive coder or a good presenter or whatever but by themselves, these are incomplete metrics. If you happen to also be an a*hole, you are probably an overall liability.
An even more useless metric, though, is your specific flavour of sexuality or your skin colour. None of these count as qualifications in any sense, and if management is measuring these things I'd be wary of their sense of judgement.
Nowhere do I say that I don't care at all about the quality of my staff. In fact I'm often told my hiring process is pretty rigorous.
What I am claiming is that work is generally done by teams and optimising for high performing and highly capable teams is not the same as optimising for high performing and highly capable individuals (my understanding of what most people mean by 'meritocracy').
It's not merely word games. I've observed poorly performing teams built entirely of highly capable people. That kind of dynamic can hobble a company.
Don't just look to companies for examples -- look to sports.
There are professional sports franchises who go out and just throw money at "the best" players in their leagues. And the track record of doing that is pretty mixed; it turns out that just hiring a bunch of top individuals easily loses to putting together a group of players who are each objectively "worse" but whose play as a team is superior.
That's pretty rare! Almost all great teams have great players - and the teams that don't, usually have chronically underrated players, e.g. the Pistons with Ben Wallace - one of the greatest defenders ever.
HOWEVER, I will say that, rather than a great team, strategic / tactical innovation can cover for flaws. The Sydney Swans pioneered "flooding" and made a grand final with a sub-standard team. Next season though, the league caught up and the Swans did poorly. It wasn't the team or the players that got there, rather it was a tactical innovation, and that is usually short lived.
In similar ways, a coding change - a new library, microservices, etc. - can be a short-term gain. Ultimately, though, when everyone starts using those tactics, what you want is the best people, full stop.
> That's pretty rare! Almost all great teams have great players
You're disagreeing with something I never said.
Imagine you're a baseball GM. You decide to build a winning roster by taking an unlimited amount of money, and then identifying the statistically best left fielder, the statistically best center fielder, the statistically best right fielder, and so on through all the field positions. You also identify the five statistically best starting pitchers, etc., and sign all of them.
There are franchises which try this "just sign a bunch of superstars, they have to win because they're so good" approach, and the track record of that approach is very, very mixed. But "just sign a bunch of superstars" is basically how tech companies claim they try to hire.
I can't create a comprehensive definition for it, but I can identify many components that are objective and purely technical in nature:
* Able to clearly communicate technical concepts. Evidenced by displaying, in writing, logical ordering of thought; separation of complex pieces into smaller, less complicated, and clearly delineated pieces; and effective and accurate command of technical vocabulary.
* Able to code. Evidenced by watching them code.
* Familiarity with the data structures and algorithmic approaches native to the problem domain. Evidenced by discussion around that domain, perhaps a pseudo-code exercise with a relevant problem paired with discussion of design tradeoffs of different approaches.
* Understanding of the cross-cutting concerns related to maintainable software: testing, documentation, modularity, etc. Evidenced by Socratic discussion of said topics. "Given this problem common to sustainable software development, what would you do/have you done?"
I still believe in the value of meritocracy. Actually pursuing meritocracy solves a lot of the inclusivity problems we think we have. The problem is that people are inherently biased, and unless we are purposeful in accounting for these biases it is easy to weave them into any system you design, no matter what its name or stated goals.
Doesn't change the value of an actual meritocracy. Just highlights one of the challenges of being human.
Your definition of objectivity is not actually the definition, though it is one way to be relatively confident that you are being objective, so I won't argue the semantics too much.
All of these can be objectively measured to a degree if you actually care to take the time:
* Logical ordering of thought: identify and diagram the main ideas in the text. Identify transitions in the text. Identify explicitly named connections between pieces. Multiple people can do this and expect to have a high degree of similarity in their results.
* Separation of components: similarly identify and diagram the components they list by name, the relationships they identify by name, and the responsibilities they identify by name.
* Technical vocabulary: list all of the technical terms. Compare their usage against a dictionary.
* Ability to code: run their code. Does it complete and produce the expected output? This is absolutely objective. You can add further constraints and retain absolute objectivity: does it complete within a certain time, stay within a certain memory budget, stay within a certain cyclomatic complexity threshold, have a certain percentage of test coverage, etc.
* Familiarity with data structures and algorithms common to the problem domain: list the major constraints of the problem domain, list the data structures according to feature which addresses the constraints, similarly list algorithms. Compare to the candidate's answers. How many of the major concerns did they address? How many of the applicable data structures/algorithms did they know? Did they volunteer anything new and were they able to explain how it addressed the problem constraints?
* Understanding of the cross-cutting concerns. This could almost be a checklist. I would make it a little more involved. As I mentioned, Q&A: see what solutions they present. But to have a quantifiable metric we can identify the major components and the major concerns each of those addresses, see how many the candidate reached, and give bonus points for valid concerns they addressed that we didn't.
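Of the measurements above, "run their code" is the easiest to automate. As a minimal sketch (the sample task, thresholds, and function names here are made up for illustration, not a real interview harness), the same fixed test cases and time budget are applied to every candidate, which is what makes the score objective:

```python
import time

def evaluate_submission(func, cases, time_budget_s=1.0):
    """Score a candidate's function against fixed test cases.

    Returns (passed, total, within_budget). Every candidate gets the
    same inputs and the same budget, so the numbers are comparable.
    """
    passed = 0
    start = time.perf_counter()
    for args, expected in cases:
        try:
            if func(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crash simply counts as a failed case
    elapsed = time.perf_counter() - start
    return passed, len(cases), elapsed <= time_budget_s

# Hypothetical candidate answer to a sample task ("sum the even numbers"):
def candidate(xs):
    return sum(x for x in xs if x % 2 == 0)

cases = [(([1, 2, 3, 4],), 6), (([],), 0), (([5, 7],), 0)]
print(evaluate_submission(candidate, cases))  # (3, 3, True)
```

The further constraints mentioned above (memory budget, cyclomatic complexity, test coverage) could be bolted on the same way: fixed thresholds, applied identically to everyone.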
I'm sure if I spent more time I could expand both of these lists.
I will concede that this is still subjective in many ways, especially in the interviewer's choices of what is "correct" (what are the problem constraints, etc.) and what parts of the answers are important.
In that regard I will concede to you that there is an ultimately subjective nature to most of this, because deciding what is valuable has an element of subjectivity, but that is going to be true of pretty much any pursuit outside of pure mathematics (and I'm not convinced we have entirely objective values there either). However, once we have decided what we value it's possible to eliminate a lot of the subjectivity from measuring it. In most interview processes it's not a lack of ability to be objective, it's a lack of concern about being objective.
And actually, I'm not too bothered by that. A healthy meritocracy does not require absolute objectivity. What it requires is an explicit statement of what the values are and a transparent means of evaluating people against those values, and not according to any other values. The values can be subjective. The evaluation can be subjective. As long as the values are known and the evaluation process is transparent, it can function as intended. Even better, by clearly communicating the values of the system you send a strong signal to others so they can determine if your organization is something they want to be a part of.
Objectivity is a good tool to help maintain that transparency. But I'm not worried so much about the subjectivity of it as I am hidden values and opaque evaluations tied to things that should be irrelevant according to the stated values.
Defining "best" need not imply an objective definition in the same way that describing the "best" database architecture for a given set of requirements isn't entirely objective: "our programmers like to work with SQL more than MongoDB" is a subjective but sufficient argument to tip the scales.
Defining the "best people" is _obviously_ subjective. _People_ are subjective. There isn't just one "best"-- there is a set of "bests" that you can strive for. Just like the above example, it depends on your requirements, your priorities, etc.-- but most importantly, it doesn't need to be objective to work well, which brings us full circle to:
> "Best people" can mean the best team.
If you prioritize teamwork among individual contributors, this is what best people would imply.
The awesome part about a capitalist system is that companies have the freedom to experiment with these configurations of how they define "best". GitHub may define it differently from you, but that doesn't make their definition less valid.
Meritocracy is an idea, not a specification-- there is no one true meritocracy implementation. The discussion needs to start from there.
> Meritocracy is an idea, not a specification-- there is no one true meritocracy implementation. The discussion needs to start from there.
I'm not convinced it does. If you want to say meritocracy merely says that we should try to hire the best people all things considered, then no one would disagree. The disagreement is precisely about which things it's appropriate to consider.
Typically meritocratic systems in practice make the assumption that it is possible to determine merit outside the context of a specific team. I think this assumption is highly suspect. Merit is not a fixed characteristic of the individual but rather an emergent property of them in their context and in relationship with those around them.
Not "an", as in a singular measure, no. But what about several? Is there a single metric for "healthy"? Someone can be OK in almost all ways but have a broken leg. Are they "healthy" by a single metric? What about diabetes that is managed? Can you think of any field in life where there is a singular metric for performance? If not, why does the non-existence of a singular metric in tech invalidate the idea?
And what about in reverse? What if, rather than finding the "best", we merely have metrics that weed out the worst? If I remove the bottom 15% effectively and replace them with average performers, then the net gain is massive, especially as each extra bug introduced is a huge time sink for any team, and poor developers are a major cause of that.
I dunno - there's a fair case to be made that Kante was the essential lynchpin which dragged the rest of them upwards. Especially since we can check this in the subsequent season when he joined a different team (who also won the league whilst Leicester languished mid-table.)
"Chelsea were so happy with N'Golo Kante that they sent Leicester flowers to say thank you for selling him to them."
Fun fact: The original intent of the word `meritocracy` was satirical, so its connotations were intended to be closer to those of the OP than those which current defenders of the term ascribe to it.
If the idea sounds great and is arguably great on all accounts, but in practice proves to not work, time and time again, perhaps the idea needs to be parked until the environment is fixed. Otherwise, arguing about it becomes a distraction while, in its corrupted form, the idea actively damages the things it should be improving.
So you just say "don't do <<idea>>" and, rather than expand and qualify the statement with a paragraph like the above, you just move on to the actual topic you want to focus on.
> So you just say "don't do <<idea>>" and, rather than expand and qualify the statement with a paragraph like the above, you just move on to the actual topic you want to focus on.
That's a terrible plan because it blanket dismisses a rational and widely accepted idea without explaining why or even forcing you to think about it.
How about you at least take the courtesy to explain why you're dismissing something that at face value provides a better solution than what you're suggesting. Even with its problems, you need to explain why your suggested solution is better than meritocracy.
I personally at least am yet to see a better alternative to meritocracy, despite its definite problems. In my opinion all proposed alternatives seem to introduce more unfairness and problems of their own.
Ok, so apparently my attempt to offer an alternative point of view for people who I believed did not grasp the original is getting me some downvotes. Let me just link to what she has said about meritocracy in the context of her Code of Conduct:
"Marginalized people also suffer some of the unintended consequences of dogmatic insistence on meritocratic principles of governance. Studies have shown that organizational cultures that value meritocracy often result in greater inequality. People with "merit" are often excused for their bad behavior in public spaces based on the value of their technical contributions. Meritocracy also naively assumes a level playing field, in which everyone has access to the same resources, free time, and common life experiences to draw upon. These factors and more make contributing to open source a daunting prospect for many people, especially women and other underrepresented people. (For more critical analysis of meritocracy, refer to this entry on the Geek Feminism wiki.)
An easy way to begin addressing this problem is to be overt in our openness, welcoming all people to contribute, and pledging in return to value them as human beings and to foster an atmosphere of kindness, cooperation, and understanding."
AFAIK the word "merit" doesn't appear at all in the actual Code of Conduct.
But what's her alternative proposal - that we say, accept pull requests from someone because of their race or sex without critiquing at all? Based on the way she responded to some stuff in this job... maybe that's actually what she wants, but it's not what I want and it sounds like a terrible idea in general.
Meritocracy is still the best we have. It may be flawed, yes, but there exists no superior alternative. It's likely possible to get away with a few minor tweaks - but there doesn't seem to be anyone looking into what exactly those could be, instead they shit on the concept without providing any viable alternative.
It seems like many of the things she and others mention are not necessarily bias in individuals in the workplace, but "resources, free time, life experiences" - which seem much easier to attack and if done fully I think could help make up for other biases too. I think the best bet honestly is stuff like Black Girls Code where they try to get people up to speed in order to compete successfully by merit.
Being a "meritocracy" doesn't mean that you have to reject pull requests until the author gets it perfect. For someone who's new, you can instead have someone with more experience with the project fix it up as an example; for the second one, give some advice but fix it up for the author if the author seems stuck; and for the next one, ...
+1. Parent comment's argument is not great because, among other things, the exact same argument could be made right back at them.
Amongst almost everybody I know, "meritocracy" still means its dictionary definition. If the definition is contested, I don't understand why other people's definitions of it take priority over the official one.
So fix the implementation. Rather than say "well there's not a lot of brown people contributing to this JS package so we need to get some more" why not fix the way the mentioned biases (which I agree 100% exist) are affecting how merit is decided?
The idea that the best way to fix biases in favor of wealthy white men is to add new biases against wealthy white men is crazy.
What if the biases are deep-rooted and subtle and difficult to address without destroying productivity with endless ceremony? What if the biases exist beyond that single organization's control, but still affect them? If the waves are pushing you in one direction, pointing your bow slightly in the opposite direction might really be the right thing to do.
> The idea that the best way...
Few think it's the best way. The problem is, eliminating systemic bias will take a long time. During that time, new victims of that bias will continue to be created. If the introduction of a contrary bias helps more people than it harms, is that really so crazy?
* "I implemented feature X, which increased CTR by Y% thus increasing revenue by Z"
* "New compression scheme reduces bandwidth usage by this much, allowing team B to implement their new feature without worrying about bandwidth usage too much"
* "Team C, who uses our library, needed urgent help investigating a performance issue. I dove in and found that the interface we were providing them didn't allow the most efficient usage; designed, tested and deployed an alternative, which resulted in team C being satisfied with performance"
I could keep going with these examples (I'm paraphrasing these from some actual work my colleagues did). My point is, it's pretty easy to measure merit in earned dollars, shipped features, fixed bugs, saved engineering hours, and resource usage. Those metrics are concrete and public.
A bug might be as simple as a single-character fix in a printed string, or as complex as "performance isn't as good as we expected, so profile and rewrite parts of the entire application to get acceptable performance". Both count as a single unit in your "bugs fixed" metric.
Or do we have meetings to play poker and assign points for bugs?
Unless you're fixing tens of thousands of bugs I don't think you're going to have a good sample size to judge the output of 2 people based on just how many bugs they've closed.
This can also be gamed, i.e. picking up easier bugs to appear more productive, or opening bugs for small issues you notice yourself and then fixing them, with the byproduct that real work never gets done.
Rewrites, infrastructure, code reviewers, mentoring. No earned dollars, no "features" shipped, good luck measuring "saved engineering hours".
There are no objective measures of productivity in the majority of cases for tech workers.
Infrastructure is a feature in and of itself. Besides, doing things like improving a build system to reduce build times, or streamlining code review workflow has clear measurable impact.
Every rewrite must have an observable measurable impact, otherwise it is simply not worth doing.
Your mentees' performance is an excellent proxy to measure your quality as a mentor.
Again, all of these can be assessed without much hand-waving.
Code reviews shouldn't even count towards your performance. It's just something that you have to do. (though arguably, if you have to do a lot of code reviews, then it's a clear signal that you're a valuable person on the team who knows a lot of detail about the system).
> There are no objective measures of productivity in the majority of cases for tech workers.
I think there clearly are, and I just listed some of them. Sometimes they're hard to boil down to a single number, but in most cases you can easily tell who's doing meaningful work.
I'm not sure that's "easy to measure" since how do you know how many hours have been saved without doing it the "slow" way first?
> earned dollars
Someone who isn't fixing a lot of bugs or implementing flashy new features but is providing good mentoring to the team, writing onboarding documentation, helping them understand the large-scale ramifications of their changes, etc., is essentially invisible to "metrics" yet providing a vital role.
> I'm not sure that's "easy to measure" since how do you know how many hours have been saved without doing it the "slow" way first?
Like I said, not always easy to boil down to a concrete number, but you can always find good proxies. Fixed a problem that caused service to trigger alarms in the middle of the night? Saved engineering hours. Wrote a library that several teams use? Saved engineering hours. Even your example, writing documentation, saves engineering hours.
> Someone who isn't fixing a lot of bugs [...]
How would you be able to do any of that if you're not doing meaningful work on the system?
I got a new computer. Where news.ycombinator.com hasn't been set to 127.0.0.1 in /etc/hosts yet. So I wanted to check what changed in the last year or so...
And your post reminded me why I stopped visiting here. Thanks.
Yet you took the time out of your day to even login and leave us with your wisdom. Thank you, from the bottom of my heart, for this invaluable contribution.
Perhaps - but the goal is to remove that kind of thing, which is in my opinion at least a noble goal and should be respected even if it's hard to implement properly. My point was not that it's perfect, but that successful meritocracy is something to strive for.
Given that the only alternatives that I know of are seniority (age of employment before contributions) or favoritism (find the right people to move you forward), what alternative is there that won't enforce a bias? At the end of the day, businesses are driven by people, and people are emotional.
If you're curious, consider doing research on the subject rather than asking people to re-litigate the whole thing from first principles every time the topic comes up. It gets exhausting because most people operate from a position of "a belief that there is no significant bias/significant effect from bias is the correct default assumption unless/until someone demonstrates otherwise through overwhelming evidence". And each individual person expects the whole thing to be proven for them from scratch each time.
Instead you could do some research on your own and find the information that's out there.
My intention with this question was to probe whether there was anyone with experience in this field who might have any milestone studies/papers on hand, or something that they can cite from memory. The reason for this is that when you venture into a new field of study it usually takes time sorting the wheat from the chaff. Now of course I can do the research on my own; it was simply a question I asked to save time.
To assume that I have some sort of hidden agenda behind this question is rather paranoid from my perspective (and came as a surprise), as you didn't know anything about my intentions.
The thing is, at this point saying that there's discrimination and systemic bias should be about as controversial as saying that the earth orbits the sun. It's not something that should be responded to with a demand for sources, and the fact that it always is, and always devolves into people trying to shift the argument to whether there's even a problem at all (regardless of any one individual's reason for starting such a conversation, that's where the conversation inevitably ends up), is just ludicrous.
People who are unaware of the existence of the problem can use a search engine and read up on it.
This is a very presumptive attitude to take which is not going to convince people on the fence on this topic who don't already agree with you.
I suggest that if you want to change minds and improve the status quo, you should engage these people who you find tiresome anyway. Not to persuade them, but the third-parties who will read the discussion and could be persuaded. Or, if that's too much work, simply don't engage, if only so you don't sabotage someone else's effort to persuade.
I can tell you right now, though, the people who will keep demanding sources and want to re-litigate even the existence of discrimination/bias, in every single thread which mentions the topic, will not be convinced by providing them walls of links and sources. They've already made up their minds, and the only thing they'd do in response is exactly what I said: nitpicks and non sequiturs and "well, I don't find that convincing..." and so what's the point? If someone is genuinely and truly unaware, they can use Google. If someone just wants to try to discredit a basic established fact about the world, it's not my job to coddle them or make them feel good about it or "engage" with them or make them feel that they were properly listened to and had their concerns addressed, any more than it would be my job to do that for someone who denies evolution.
Right, it's not your job to do anything. It is perfectly valid for you to feel frustrated in exactly that way.
All I am saying is that there are third parties, whom you are not interacting with, who could be persuadable, who will read that frustration and find it alienating rather than persuasive, and therefore you end up creating more people in the world who think there is no real problem.
It may be more constructive to simply disengage if you feel that exasperated by it. Both for persuading other people and for your own sanity. That's all I am saying.
Imagine that you live in a world where there is a large, extremely loud (larger and louder than the actual world) population of young-Earth creationists.
Now, imagine that every time you say something which even tangentially mentions evolution -- let alone something where the main topic is evolution -- some of those people immediately pop up with "got sources for that?" / "gonna need a source on that" / "citation needed for that claim" / etc.
And imagine that for a while you did go to the trouble of linking up primers on the topic, but every time you did that, they just responded with non sequiturs and attempts to nitpick little details of the primers and parlay that into discrediting the entire idea of evolution.
Now, imagine you've been living in that world, every day, for years. You might well finally decide "you know what, it's not my job to pause every single time I post a comment online and have to re-prove the theory of evolution to anyone and everyone who demands it; evolution is a basic fact we shouldn't have to debate at this point, and people who genuinely want an intro to it for some reason can find one on their own".
Now, imagine that if you do make that decision, you'll be branded an asshole for "complaining" instead of just posting a link. You'll be told that these folks are "just asking for sources". Or any of a large number of other explanations which don't jibe with what you see day in and day out, but if you try to explain that you'll be told you're projecting, or making it up, or arguing with a strawman, and this is a sign that you are not trustworthy (which in turn just reflects back on the theory of evolution -- after all, if this is the kind of person who stands up as its representative...).
Imagine all of that, but change the topic from the theory of evolution... to the topic of this thread. And imagine how tired everyone is of the "got a source for that?" brigade. Regardless of whether the person asking has the noblest purest intentions in the history of noble purity, we're talking about basic stuff about the society we live in and the industry we work in, and if someone is genuinely unaware of it and genuinely curious, they can use Google on their own.
It's true that Meritocracy alone overlooks the differences in opportunity people have starting out. Still, I think the criticism is somewhat misguided.
I like to think about Neurosurgery to illustrate: if you need a tumor removed from your brain, would you rather have the surgeon be a privileged, elite surgeon from Harvard, or some random dude from the streets?
We should be happy that we are able to produce elite Neurosurgeons, and strive to give more people the opportunity (including random dudes from the street). Attacking elite Neurosurgeons is completely counterproductive.
We live in a world where, as I know, nurses often perform doctors' duties and jobs while new doctors stumble around clueless.
Knowledge of adjacent fields and skills can accumulate in a person over time, allowing them to perform similar feats.
Like, a bright pupil might learn from a master even if it is not intended. It has happened since medieval times, in apprenticeships, and an attack on caste-thinking can happen today too. Yes, they did not jump through the money hoops that are supposed to keep them away, but some of them studied what happens around them. Those servant people... they might not be automatons.
If it had a drop-out quota similar to comp sci or math, I might actually consider your elitism a valid opinion.
Post all the meritocracy nonsense you need to post, but deep down you know: if you had the money and a mediocre kid, you would help him through and put additional money hoops up behind him for others to jump through.
The irony that all the others doing the same thing leave you lying on a metal table, being cut open by maybe not the best person for the job: it never reaches escape velocity.