At <FAANG mega-corp>, I used to work on an internal team which basically did nothing important, certainly nothing users ever think about.
The problem that we were trying to solve, well, we had already solved it about 3 years ago, yet the team was still expanding, and our product managers kept spitting out new (mostly useless) project ideas.
One of the main areas of focus was metrics. Leadership was obsessed with metrics; we measured everything. Not just plain empirical metrics (like # of clicks), but also very complex metrics, theoretical models we had data scientists work on, etc. At some point maybe 25% of the team worked on metrics-related stuff; it was crazy.
Why were they doing this? Was it only to keep us busy, so that the team can continue to grow? Or maybe they were desperately trying to find metrics by which they could prove the team was actually still doing meaningful work? I don't know.
At some point I left, and moved on to a much smaller team. This team was working on a brand new product, one that customers actually pay money for!
One of the first things I said to my new product manager was "so, what metrics do you care about? how do you measure success?".
He was taken aback by my question. He seemed genuinely perplexed. He paused for a few seconds, then said "metrics? What do you mean by 'metrics'? I look at revenue, when the line goes up I'm happy".
There is a happy middle ground somewhere between those two :)
That revenue metric is a trailing indicator. Metrics that give insight on whether users are having a good experience can lead revenue metrics significantly. Also this may be orthogonal to what you're talking about, but operational metrics are useful to understand costs (eg. if every new feature increases memory usage, eventually that will have a cost impact) and avoid volatility (eg. operational metrics often presage outages).
> Metrics that give insight on whether users are having a good experience can lead revenue metrics significantly.
Oh, yeah. If you are able to discover those metrics, there's a billion-dollar market opportunity in applying them. You will certainly be able to get a share of it.
We have some partial proxies that stop working almost as soon as you start making decisions based on them, and a huge pile of snake oil. But the metrics you are looking for do not actually exist.
Sure they do, they're just very product-specific and an unsexy slog for the reasons you're highlighting. It is definitely a valid optimization to not even try, and just stick to very granular trailing metrics. But that doesn't mean there is never any way to look at how people are using a product over time and draw valid conclusions that can be used as input into good decisions.
> way to look at how people are using a product over time and draw valid conclusions
Well, that exists. Metrics that summarize those conclusions are what don't exist.
Metrics are a different thing from information. You can discover whether people like your product; it just takes effort and isn't accessible to people without any knowledge of the subject.
Say the second metric shows that you have a meaningful subscription drop-off after 18 months. By comparing that metric with the signup metric, you might be able to successfully foresee a revenue drop-off months before it materializes in the revenue metric alone, buying valuable time to figure out what to do.
Tracking a revenue metric alone always runs the risk of surprising you in real time when it ticks down and you have no idea why.
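As a toy illustration of that lead time (all numbers hypothetical, not from any real product), a signup-cohort model with an 18-month drop-off shows how projected revenue can be computed from signup and churn metrics well before the revenue line itself moves:

```python
# Toy model: monthly signups plus an 18-month subscription drop-off.
# All numbers are made up; the point is that a revenue projection
# can be derived from signup metrics months ahead of the revenue line.

PRICE = 10           # monthly price per subscriber (hypothetical)
DROP_OFF_MONTH = 18  # subscribers churn after 18 months on average

def project_subscribers(signups_per_month, horizon):
    """Active subscribers each month: everyone who signed up within
    the last DROP_OFF_MONTH months and hasn't hit the drop-off yet."""
    active = []
    for month in range(horizon):
        start = max(0, month - DROP_OFF_MONTH + 1)
        active.append(sum(signups_per_month[start:month + 1]))
    return active

# Signups were healthy for a year, then collapsed at month 12.
signups = [100] * 12 + [20] * 12
subs = project_subscribers(signups, 24)
revenue = [s * PRICE for s in subs]

# Revenue keeps climbing through month 17 even though signups
# collapsed at month 12 -- the drop only shows up once the old
# cohorts start aging out.
print(revenue[11], revenue[17], revenue[23])  # -> 12000 13200 8400
```

The signup metric flags the problem at month 12; watching revenue alone, you'd see nothing wrong until month 18.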
Ah I see. Yep that other comment I made was uselessly hand-wavy.
The point I was trying to make, starting with my initial comment in the thread, is that there is a middle ground between "we're spending way too much time devising and implementing really fancy fine grained metrics" and "we only look at revenue".
My concrete example makes this point much better, and it seems like we're in agreement, so carry on :)
Heh, a more dismal view of why metrics that lead revenue are tracked with such fervor: it's so the internal stockholders can either purchase more, or file with the SEC to sell off within the required timeframe before it's too late.
I mean, internal metrics also let them change something. If it's able to predict earnings in a few months, surely changing whatever is driving those metrics has value.
Trading on inside information does move the stock's market price in the direction it should be moving, which makes the market more rational and efficient, so portfolio-theory buyers and sellers of the stock make better decisions.
One of my consulting customers came to their account team a couple of years ago and said "We want help setting up a cluster, and specifically we want metrics and logging sorted out" - the account team brought this to me, so I said "OK no problem, what are you hoping to monitor, and what decisions will you be making based on this data?"
Dear reader, you may be unsurprised to learn that they had no fucking clue what they were supposed to be monitoring, or what they would do with the data. I declined the project.
Isn’t this a case where the stock answer (RED / Golden 4 metrics) plus a link to the appropriate chapter of the SRE book is all you need?
If you wire up a simple Datadog / Honeycomb dashboard with those, plus log exploring, you have added lots of value that will definitely be used, for a team that clearly has no idea why they need o11y.
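For reference, the RED method boils down to three per-service numbers: request Rate, Error rate, and Duration. A minimal, dependency-free sketch of computing them from a raw request log (the records below are made up; in practice a tool like Datadog or Honeycomb derives these for you):

```python
# Sketch of the RED method (Rate, Errors, Duration) computed from a
# hypothetical request log: (timestamp_seconds, status_code, duration_ms).
requests = [
    (0.1, 200, 12), (0.4, 200, 15), (1.2, 500, 230),
    (1.9, 200, 11), (2.5, 200, 14), (2.8, 503, 480),
]

window = 3.0  # seconds of traffic covered by the log

rate = len(requests) / window                                  # requests/sec
errors = sum(1 for _, s, _ in requests if s >= 500) / len(requests)
durations = sorted(d for _, _, d in requests)
p50 = durations[len(durations) // 2]                           # crude median

print(f"rate={rate:.1f} req/s, error rate={errors:.0%}, p50={p50}ms")
```

Even this crude version answers the three questions a team with no o11y plan should start with: how much traffic, how much of it fails, and how slow is it.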
Metrics are great if they measure the correct thing and are part of a properly defined metrics system, not just a dashboard. E.g., in material flow, every metric should measure one process / sub-flow's performance and output, as this output is used downstream. Special emphasis on interfaces between teams and departments. Ideally, those low-level, high-detail metrics are consolidated into a few high-level ones. This makes it easier to measure overall performance and to diagnose root causes.
Most, well, almost all, metric systems I've encountered in my life so far have failed miserably at that.
"The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can't be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can't be measured easily really isn't important. This is blindness. The fourth step is to say that what can't be easily measured really doesn't exist. This is suicide."
https://en.wikipedia.org/wiki/McNamara_fallacy
I forget where I found the quote, but the older I get the more I apply it: "Don't expect people to change their minds, expect them to lose."
As a corollary, everyone knows they should change their minds based on new information, but shockingly few actually do or they only do it after it's too late. So if you find yourself working with/for someone who consistently and successfully changes their perspective based on new information, heavily weight keeping that person(s) around in your future plans.
One approach that I think is really valuable in the face of this is helping people to lose ASAP while there's still time to change.
I think that was one of the key insights that united the early Agile people, the ones who pioneered it before it turned into a certification/consulting scam. Releasing early and often to actual users enables a level of discipline and humility that's hard to achieve otherwise. I think this was taken further by the Lean Startup folks, where you were supposed to be explicit about your hypotheses and then construct tests to validate/invalidate them. E.g.: https://rulez.io/wp-content/uploads/2019/05/validation-board...
I'm sorry it never caught on widely, but it has stuck with me. On any project I'm on, I structure the coding work such that as early as possible we can see if we are having the impact we aim for. That inevitably sucks early on as we put barely-adequate things in front of people and frequently get negative responses. But it really pays off over time, as you get to kill bad ideas early and use the savings to explore real solutions.
Sadly, I don't think this approach scales, at least with current management culture. In the short term, managers and execs benefit a lot more from seeming right than from being wrong in ways that lead to them eventually being right.
Metrics are a must for understanding complex systems, because you can't just comprehend them directly. Like an economy, or a complex business.
Without understanding, you will be making decisions based on gut feeling, which is ok if you are Jobs, and not ok if you don’t have a magical vision for the product that will be a success.
I think of it as evolution. If you want to rely on luck - don’t track anything, let the natural selection work. If you want to control your destiny, think very hard about what you track and what you do when numbers change.
I saw both negligence of metrics and mindless obsession with useless ones, with the same result: frustration.
There is such a thing as measuring too much, but looking at just "revenue" and watching it go up, without understanding why and without looking at peripheral metrics, isn't true understanding and leads to suboptimal results. Also, a product manager isn't going to be as data-savvy as a data scientist.
Yes.
I do wonder how much of the metrics obsession of the ZIRP, FAANG-growth era was basically papering over the fact that the big problems were solved and they had massively overhired ...
I don't want to defend the full suite of metrics obsession, but there are substantial downsides to only looking at revenue. The most straightforward examples of this are in the data space - if you have some idea for a clever data skipping optimization that will make your users' workloads much more efficient, but your team's performance is evaluated on revenue, you end up with very strong incentives to not ship it. Most products are going to face some scenarios where long-term success conflicts with a monotonically increasing revenue graph.
For sure, but I think proper vision, leadership, project management, actually talking to users/customers, and human intuition can get you to a better long term strategy than a dashboard full of derived metrics, meta metrics and meta meta metrics.
I don't think any of the greats in the space (Jobs, Gates, etc) likely looked at any of these dashboards once.
There is an argument to be made that the busy work and good pay of megacorps is basically designed to keep talent in place that could otherwise run off and potentially make a competitor.
The more time I spend working at FAANG, the more I think there's no grand conspiracy to keep us from competing, it's just bad incentives all around.
Want to get promoted as an engineer? Work on something BIG (even if no one asked for it). Want to get promoted as a manager? Get more people on your team (even if you don't need them). Want to get promoted as VP? Better re-org everything so people know you exist (even if re-orgs happen every year). And this problem gets compounded by the fact that the people who are best at playing this game end up making decisions that impact everyone else.
I often wonder if something can be done about this, or is it like a natural law when it comes to big corporations.
It's what happens when business majors/MBAs take over, and I'm not just being clichéd. Such people prioritize the "business" and see the products as a means to a monetary end, whereas IMO it should be the reverse. The business is part distribution infrastructure, part funding mechanism for the products, nothing more.
I suspect part of the motivation for this is an environment that expects everyone to "work on something BIG" (as you say) all the time and a hyper focus on the short term.
Ambitious people who recognize the game will play it to advance their career regardless of how it might affect the long term prospect of the company or products.
Generally, a company culture that is more focused on delivering tangible value, and ability to recognize and fix behaviors that optimize for short term/career gaining behavior will succeed in keeping this in check. This gets much more difficult to do in larger companies and organizations though.
Maybe it's some kind of "scale disease": some natural law that makes companies less capable of innovation the bigger they get.
> He was taken aback by my question. He seemed genuinely perplexed. He paused for a few seconds, then said "metrics? What do you mean by 'metrics'? I look at revenue, when the line goes up I'm happy".
I thought it was... insightful.