The problem, as I discovered at promotion time, was that none of this was quantifiable. I couldn’t prove that anything I did had a positive impact on Google.
If they hornswoggled me into helping them, it’s evidence of their strong leadership qualities. I was just the mindless peon whose work was so irrelevant that it could be pre-empted at a moment’s notice.
Don't take this as an ad hominem attack, because it's not. You fell into the same trap some other people do at stack-ranked organizations.
These systems create evolutionary pressure, and the people who thrive are those who adapt to the ranking system as much as to the job itself. Right, wrong, or indifferent, the ranking system is part of the job, and in the same situation the people I see getting promoted would have dedicated significant time to proving that everything they did produced value, or they wouldn't have done it.
That's why (IMHO) it was so difficult to get anyone interested in changing our stack-ranked system to anything else: those who thrived under it were the ones in charge of deciding whether and how to change it, and they already knew how to succeed in the old one.
It's actually not a cynical way to view your job. If what you do doesn't make an impact for customers and/or other teams in the organization, was it really worth doing?
I've consistently seen that successful engineers focus on work that has an impact, and they are good at making it known. The first part is obviously good, but a lot of engineers disdain self-promotion. However, self-promotion is not a bad thing if you really are good at your job, because it opens up opportunities to have an even greater impact, which is good for you and for the company.
The problem is that, at review time at Google, you have to be able to "quantify" the impact. Many types of impact are quantifiable (e.g. "Scaled server requests from 100 queries per second to 1,300 QPS", "reduced code size by 30%", etc.).
It's much harder to measure, say, the impact of a refactor where you made the code easier to reason about and more maintainable, so that future work can be done on it more easily.
I witnessed the same thing at Google; I worked on a project that everyone joked only existed because the person who wrote it wanted promo, and the best way to get it was to design a very complex system, and convince others to adopt it. (He did get it, and promptly switched teams.)
Some things have been made better, though. I've heard that going from L4 → L5 now involves much more influence from your manager, since they would know and, without quantifying something like a refactor, can speak to the positive impact you had in a project.
I've also seen refactors that just made life difficult for everyone else with constant non-functional changes. In the end there is a lot of fashion in programming, and while some refactors are worthwhile, most are not, in my experience.
The refactor is supposed to provide payoff in the future, but what normally happens is fashion changes and someone new comes by and says "this code is shit" and starts the process over. The supposed benefits never accrue.
Measure commits/authors before and after a refactor.
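A rough sketch of what that measurement could look like, using standard `git log` commands. Everything here is illustrative: `REFACTOR_DATE` and the two fabricated commits are placeholders so the commands can run against a throwaway repo rather than a real project.

```shell
# Sketch only: count commits and distinct authors on either side of a
# refactor's landing date. The repo, dates, and authors below are made up.
repo=$(mktemp -d)
cd "$repo"
git init -q

# Fabricate one pre-refactor and one post-refactor commit
# (GIT_COMMITTER_DATE pins the committer date git log filters on).
GIT_COMMITTER_DATE="2023-01-01T12:00:00" git -c user.name=alice -c user.email=alice@example.com \
    commit -q --allow-empty -m "before refactor" --date="2023-01-01T12:00:00"
GIT_COMMITTER_DATE="2023-03-01T12:00:00" git -c user.name=bob -c user.email=bob@example.com \
    commit -q --allow-empty -m "after refactor" --date="2023-03-01T12:00:00"

REFACTOR_DATE="2023-02-01"   # placeholder: when the refactor landed
before_commits=$(git log --until="$REFACTOR_DATE" --oneline | wc -l)
after_commits=$(git log --since="$REFACTOR_DATE" --oneline | wc -l)
after_authors=$(git log --since="$REFACTOR_DATE" --format='%an' | sort -u | wc -l)
echo "commits before: $before_commits  after: $after_commits  distinct authors after: $after_authors"
```

On a real repo you would drop the fabricated commits and run the three `git log` lines in place; the replies below spell out why even honest numbers from this are easy to game and hard to interpret.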
Measuring number of commits? Create fewer, larger commits. Measuring commit size? Pull in more third-party libraries, even where it doesn't make sense. Author count? Add or cut documentation, and recruit or deter new devs, depending on what your goal is.
Not to mention the number of commits/authors before and after an arbitrary point in time might conflate a successful growing project with a project in a death spiral being passed around from group to group.
It's a good idea, but in practice simple metrics like this often (but not always) devolve into prime examples of Goodhart's law.
If you can’t find a measurable benefit to a refactoring (or anything else, really) then maybe it was not worth doing in the first place.
No programming project in existence has enough developers working on it that developer-productivity data derived from a single change would not be dismissed as statistically "underpowered" for the sake of proving anything.
A high impact infra change will often inconvenience dozens of people and distract from feature work... You know, the shit people actually care about... (this is analogous to how "Twitter, but written in Golang" appeals to approximately no one.)
And solve the halting problem while you're at it
Goodhart’s law is not applicable to scientific management, because there the metrics serve a different purpose.
If what you are doing isn't making an impact, why is it being assigned to you in the first place?
Or, to put that another way: a worker is hired to do things. Their comparative advantage is in doing; time they spend in any other role than doing is labor that would have been better allocated/delegated to someone else, to free them up for more doing. (It's very clear when you think of high-status "workers"—for example, surgeons. Time the surgeon spends inside the OR is worth tens of thousands of dollars; time they spend outside the OR isn't worth anything. If you can hire an administrative assistant for $30/hr to take admin work off the shoulders of the surgeon, to ensure that the surgeon spends even one more hour inside the OR per week, then the admin assistant has paid for themselves.)
A manager, meanwhile, is hired to prioritize, delegate, lubricate channels of communication, ensure their team of workers has the resources it needs to "do", and then defend the workers from anything that would take away from their "doing" time. These tasks are the manager's wheelhouse; it's where their own comparative advantage lies. Time a worker spends managing themselves is company money wasted, because a manager would have been able to get that work done far more effectively, for less effort and time input.
You don't want a doctor spending time reading charts to triage patients (i.e. assigning work to themselves.) The intake nurse does that. And you don't want the intake nurse spending time trying to diagnose someone after taking their symptoms. That's what the doctors are for. Each role has their comparative advantage. Let the roles bleed together, and overhead goes up while lives saved goes down.
Why do "old" organizations like hospitals understand comparative advantage better than FAANG? Why isn't it seen as a failure of management to prioritize effectively when everybody isn't always doing the most important thing they could be doing at that moment?
There are a different set of disadvantages to this system, but I just wanted to point out that in Google’s case, management doesn’t assign anything, so an engineer working on something unimpactful is not seen as a management failure but a choice made by that engineer. It’s fine to do and you can keep your job forever doing work you believe in even if it’s not overtly “impactful,” but if you want to get promoted, it’s up to you to align yourself with the higher-level priorities.
Edit: That’s not to say Google never wastes highly-paid employees’ time (like your admin example), but the core difference is Google employees aren’t just hired to “do.” They’re also expected to spend a lot of time thinking, prioritizing, and organizing themselves, so that’s not wasted time.
The real problem for organizations is choosing what to optimize for, and how short-sighted they can afford to be and still survive. Someone below mentioned that the military in a wartime environment is probably what people who subscribe to this kind of thinking would consider the ideal workplace. The thing is, if you look at those organizations in times of crisis, they may optimize for the quickest way to win the war, when the real goal should be to ensure a long-lasting peace.
Management DOES do a lot to prioritize/delegate/lubricate channels of communication, etc. That is their primary role.
But you have more engineers than you have managers. You have more people providing a more varied set of insights into what should be prioritized, what general work needs to be done, what opportunities exist, etc.
To a certain extent, you have to be willing to let some people sometimes be working a little sub-optimally to allow the autonomy that results in making some of the crazy cool products and services these companies create. That autonomy being available is one of the big things that drew me to Amazon, and it's something I believe I've taken full advantage of to the company's benefit.
Giving people the latitude to run projects and create PoCs before getting full buy in from management allows people to be creative. But yes, if you've spent a year working on a project, you should be expected to show the results of what you've done. Some stuff is more ambitious, sure - but even when the results aren't immediately obvious, you do need to be able to explain the potential and what data backs up that potential and the ability for the project to reach it.
From my experiences over the past five years, being able to find opportunities for improvement is hugely important to the culture and the promotion process. You need to be able to identify these opportunities, and that generally means some sort of metric is available to work from, and measure against as you try to improve it. (Of course metrics and statistics can be gamed, but I'd argue it's not good to go through life assuming everyone is a bad actor). It's a skill that sometimes needs to be developed, and it's something I work with people on to help them with the process, but I think it's a very good thing in general. I've seen a lot of positive come out of it.
My two cents, anyway.
This ideal state is what people tend to be vaguely gesturing toward when they say they want "meritocracy." Even this state isn't really meritocratic, but it certainly has a management structure that knows what it wants and is actively steering every action of their subordinates to get the highest-ROI goals accomplished sooner than later; and where there is enough demand for skilled work that skilled workers are reserved to exclusively do the work they do best.
† The assumption (that seems to cash out at least somewhat) being that when two armies of equal size clash, the winner will be the one with a better leader-of-leaders, the one who has instilled a better management philosophy into their officers, such that the army as a whole ends up being run well. If there was some way to make entire departments of a company "fight", the way that two army battalions can engage in mock battle to determine their overall relative competence, capitalism might actually be able to succeed in evading the Peter Principle. Anyone have a good idea of how to do that?
That's so good I want to steal it for a chapter in my book.
People aggressively go for promotion; as my team leader said, "but he hasn't done any real work for six months."
I saw someone going for the first management level spend a million pounds of shareholder money and 15 man-years redeveloping a system in Oracle, because having a project worth > 1 million with more than 10 staff was a tick box for promotion.
The resulting brinksmanship guaranteed his employees were ranked exactly the way he wanted. Ironically, he's now the head of a recruiting agency that purports to find top candidates via "big data" techniques, yet he did his very best to subvert the system. So what exactly does he think he's data-mining?
Assuming you're doing good work your boss should be your strongest advocate and trying to help get you recognized and promoted.
But if you're not inclined to self-promote, it definitely hurts, and lots of folks on HN say they suffer from some level of impostor syndrome. Someone who thinks "Should I really be here? Am I really good enough?" may also overly discount the value of their work and conclude there is nothing worthy of promotion, in both senses of the word.