The more metrics you track, the less you know (breakingpoint.substack.com)
139 points by kiyanwang on Dec 17, 2022 | 93 comments



At <FAANG mega-corp>, I used to work on an internal team which basically did nothing important, certainly nothing users ever think about.

The problem that we were trying to solve, well, we had already solved it about 3 years ago, yet the team was still expanding, and our product managers kept spitting out new (mostly useless) project ideas.

One of the main areas of focus was metrics. Leadership was obsessed with metrics, we measured everything. Not just plain empirical metrics (like # clicks), but also very complex metrics, theoretical models that we had data scientists work on, etc. At some point maybe 25% of the team worked on metrics related stuff, it was crazy.

Why were they doing this? Was it only to keep us busy, so that the team can continue to grow? Or maybe they were desperately trying to find metrics by which they could prove the team was actually still doing meaningful work? I don't know.

At some point I left, and moved on to a much smaller team. This team was working on a brand new product, one that customers actually pay money for!

One of the first things I said to my new product manager was "so, what metrics do you care about? how do you measure success?".

He was taken aback by my question. He seemed genuinely perplexed. He paused for a few seconds, then said "metrics? What do you mean by 'metrics'? I look at revenue, when the line goes up I'm happy".

I thought it was... insightful.


There is a happy middle ground somewhere between those two :)

That revenue metric is a trailing indicator. Metrics that give insight on whether users are having a good experience can lead revenue metrics significantly. Also this may be orthogonal to what you're talking about, but operational metrics are useful to understand costs (eg. if every new feature increases memory usage, eventually that will have a cost impact) and avoid volatility (eg. operational metrics often presage outages).


> Metrics that give insight on whether users are having a good experience can lead revenue metrics significantly.

Oh, yeah. If you are able to discover those metrics, there's a billion-sized market opportunity to apply them. You will certainly be able to get a share of those.

We have some partial proxies, that stop working almost as soon as you start to make decisions based on them, and a huge bunch of snake oil. But the metrics you are looking for do not actually exist.


Sure they do, they're just very product-specific and an unsexy slog for the reasons you're highlighting. It is definitely a valid optimization to not even try, and just stick to very granular trailing metrics. But that doesn't mean there is never any way to look at how people are using a product over time and draw valid conclusions that can be used as input into good decisions.


> way to look at how people are using a product over time and draw valid conclusions

Well, that exists. Metrics that summarize those conclusions are what don't exist.

Metrics are a different thing from information. You can discover if people like your product, it just takes effort and isn't accessible to people without any knowledge on the subject.


Let me give a concrete example. Say it's a subscription service. Say you track three metrics:

- Revenue
- Subscription renewal / cancellation by monthly cohort
- Signups

Say the second metric shows that you have a meaningful subscription drop-off after 18 months. By comparing that metric with the signup metric, you might be able to successfully foresee a revenue drop-off months before it materializes in the revenue metric alone, buying valuable time to figure out what to do.

Tracking a revenue metric alone always runs the risk of surprising you in real time when it ticks down and you have no idea why.
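Since that cohort example is concrete enough to compute, here's a minimal sketch of what it might look like; the signup counts, renewal curve, and flat price are all hypothetical, not from the thread:

    # Combine signups with a renewal curve by subscription age to project revenue
    # forward, so the 18-month drop-off shows up long before the revenue line dips.

    signups_by_month = [500, 520, 480, 610, 590, 630]   # oldest cohort first
    renewal_rate_by_age = {18: 0.55}                     # sharp drop at month 18
    default_renewal_rate = 0.97                          # assumed monthly retention otherwise
    price_per_month = 29.0

    def projected_revenue(months_ahead):
        """Project monthly subscription revenue `months_ahead` from now."""
        total_active = 0.0
        for cohort_index, cohort_size in enumerate(signups_by_month):
            age_now = len(signups_by_month) - 1 - cohort_index
            active = float(cohort_size)
            for age in range(1, age_now + months_ahead + 1):
                active *= renewal_rate_by_age.get(age, default_renewal_rate)
            total_active += active
        return total_active * price_per_month

    for m in (0, 6, 12, 18, 24):
        print(f"month +{m:>2}: projected revenue ~ ${projected_revenue(m):,.0f}")

The drop between the +12 and +18 projections is exactly the early warning the cohort metric buys you.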


Hum... That metric doesn't tell whether the users are having a good experience.

If that was your point, I misunderstood your comment. Yes, there are metrics that will help you predict problems. My comment wasn't about this.


Ah I see. Yep that other comment I made was uselessly hand-wavy.

The point I was trying to make, starting with my initial comment in the thread, is that there is a middle ground between "we're spending way too much time devising and implementing really fancy fine grained metrics" and "we only look at revenue".

My concrete example makes this point much better, and it seems like we're in agreement, so carry on :)


Heh, a more dismal view of why metrics that lead revenue are tracked with such fervor: it's so the internal stockholders can either purchase more, or file with the SEC to sell off in the required timeframe before it's too late.


I mean, internal metrics also let them change something. If it's able to predict earnings in a few months, surely changing whatever is driving those metrics has value.


Trading on inside information does move the stock price in the direction it should be moving and makes the market more rational and efficient, so portfolio-theory buyers and sellers of the stock make better decisions.


One of my consulting customers came to their account team a couple of years ago and said "We want help setting up a cluster, and specifically we want metrics and logging sorted out" - the account team brought this to me, so I said "OK no problem, what are you hoping to monitor, and what decisions will you be making based on this data?"

Dear reader, you may be unsurprised to learn that they had no fucking clue what they were supposed to be monitoring, or what they would do with the data. I declined the project.


Isn’t this a case where the stock answer (RED / Golden 4 metrics) plus a link to the appropriate chapter of the SRE book is all you need?

If you wire up a simple Datadog / Honeycomb dashboard with those, plus log exploring, you have added lots of value that will definitely be used, for a team that clearly has no idea why they need o11y.
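For what it's worth, a rough sketch of what "wiring up" those RED signals could look like with the Prometheus Python client; the metric names, labels, and wrapper function are hypothetical, and the same counters and histograms could just as well feed a Datadog or Honeycomb dashboard:

    import time
    from prometheus_client import Counter, Histogram, start_http_server

    # RED: Rate, Errors, Duration, per endpoint.
    REQUESTS = Counter("app_requests_total", "Total requests", ["endpoint"])
    ERRORS = Counter("app_request_errors_total", "Failed requests", ["endpoint"])
    LATENCY = Histogram("app_request_duration_seconds", "Request latency", ["endpoint"])

    def handle(endpoint, fn):
        """Wrap a request handler so rate, errors, and duration get recorded."""
        REQUESTS.labels(endpoint).inc()
        start = time.monotonic()
        try:
            return fn()
        except Exception:
            ERRORS.labels(endpoint).inc()
            raise
        finally:
            LATENCY.labels(endpoint).observe(time.monotonic() - start)

    if __name__ == "__main__":
        start_http_server(8000)   # exposes /metrics for scraping
        handle("checkout", lambda: time.sleep(0.05))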


OK now I'm cracking up, I didn't realize o11y was a thing.

I was joking with my GenZ teammate that we use s5s (servers) and macroservices. Can we make m6h (monolith) a thing too please?


Can we go back to autocorrect and just spell the words out?

At this point they seem like some sort of secret handshake of the Eleven Society of Extraordinary Consultants.


> Can we go back to autocorrect and just spell the words out?

Those are two different requests. Autocorrect may be a step up in legibility from word length indicators, but it's a big step down from typos.

I just swiped "accessibility" with a somewhat-below-par amount of care and attention, and the output was "aggressively".


Metrics are great if they measure the correct thing and are part of a properly defined metrics system, and not just a dashboard. E.g., in material flow, every metric should measure one process / sub-flow's performance and output, as that output is used downstream. Special emphasis on interfaces between teams and departments. Ideally, those low-level, high-detail metrics are consolidated into a few high-level ones. This makes it easier to measure overall performance and to diagnose root causes.

Most, well almost all, metric systems I have encountered in my life so far failed miserably at that.


I work on a FANG team. I feel like the stakeholders and upper leadership have fallen into McNamara's Fallacy, and I have no idea how to get them out.


"The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can't be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can't be measured easily really isn't important. This is blindness. The fourth step is to say that what can't be easily measured really doesn't exist. This is suicide." https://en.wikipedia.org/wiki/McNamara_fallacy


I forget where I found the quote, but the older I get the more I apply it: "Don't expect people to change their minds, expect them to lose."

As a corollary, everyone knows they should change their minds based on new information, but shockingly few actually do or they only do it after it's too late. So if you find yourself working with/for someone who consistently and successfully changes their perspective based on new information, heavily weight keeping that person(s) around in your future plans.


One approach that I think is really valuable in the face of this is helping people to lose ASAP while there's still time to change.

I think that was one of the key insights that united the early Agile people, the ones who pioneered it before it turned into a certification/consulting scam. Releasing early and often to actual users enables a level of discipline and humility that's hard to achieve otherwise. I think this was taken further by the Lean Startup folks, where you were supposed to be explicit about your hypotheses and then construct tests to validate/invalidate them. E.g.: https://rulez.io/wp-content/uploads/2019/05/validation-board...

I'm sorry it never caught on widely, but it has stuck with me. On any project I'm on, I structure the coding work such that as early as possible we can see if we are having the impact we aim for. That inevitably sucks early on as we put barely-adequate things in front of people and frequently get negative responses. But it really pays off over time, as you get to kill bad ideas early and use the savings to explore real solutions.

Sadly, I don't think this approach scales, at least with current management culture. In the short term, managers and execs benefit a lot more from seeming right than from being wrong in ways that lead to them eventually being right.


>> "Don't expect people to change their minds, expect them to lose."

Excellent insight

(sadly)


You don't get them out, you get out yourself.


To where? The contagion has spread pretty wide including very small companies.


I think McNamara's Fallacy is increasingly becoming standard management these days. It's a growing annoyance for me sadly.


Metrics are a must for understanding complex systems, like an economy or a complex business, as you can't just comprehend them directly.

Without understanding, you will be making decisions based on gut feeling, which is ok if you are Jobs, and not ok if you don’t have a magical vision for the product that will be a success.

I think of it as evolution. If you want to rely on luck - don’t track anything, let the natural selection work. If you want to control your destiny, think very hard about what you track and what you do when numbers change.

I've seen both neglect of metrics and mindless obsession with useless ones, with the same result: frustration.


Interesting take. Do you have a set of metrics which were particularly useful? What were they?


There are many, like GDP for countries. Temperature for your body. Etc.

They are not perfect, but without them it is virtually impossible to know what is going on.


Performance comes to mind. If you get reports that your app is “slow” it’s useful to know exactly what the culprit is.

If you set monitors on E2E perf metrics, you don’t even have to rely on your customers telling you either.


Hopefully you look back later in your career and cringe at your PM's response for not understanding or knowing the value of metrics.


There is such a thing as measuring too much, but looking at just "revenue" and watching it go up, without understanding why and without looking at peripheral metrics, isn't true understanding and leads to suboptimal results. Also, a product manager isn't going to be as data-savvy as a data scientist.


Yes. I do wonder how much of the metrics obsession of the ZIRP, FAANG-growth era was basically papering over the fact that the big problems were solved and they had massively overhired ...


I don't want to defend the full suite of metrics obsession, but there are substantial downsides to only looking at revenue. The most straightforward examples of this are in the data space - if you have some idea for a clever data skipping optimization that will make your users' workloads much more efficient, but your team's performance is evaluated on revenue, you end up with very strong incentives to not ship it. Most products are going to face some scenarios where long-term success conflicts with a monotonically increasing revenue graph.


For sure, but I think proper vision, leadership, project management, actually talking to users/customers, and human intuition can get you to a better long term strategy than a dashboard full of derived metrics, meta metrics and meta meta metrics.

I don't think any of the greats in the space (Jobs, Gates, etc.) ever looked at dashboards like these.


There is an argument to be made that the busy work and good pay of megacorps is basically designed to keep talent in place that could otherwise run off and potentially make a competitor.


The more time I spend working at FAANG, the more I think there's no grand conspiracy to keep us from competing, it's just bad incentives all around.

Want to get promoted as an engineer? Work on something BIG (even if no one asked for it). Want to get promoted as a manager? Get more people on your team (even if you don't need them). Want to get promoted as VP? Better re-org everything so people know you exist (even if re-orgs happen every year). And this problem gets compounded by the fact that the people who are best at playing this game end up making decisions that impact everyone else.

I often wonder if something can be done about this, or is it like a natural law when it comes to big corporations.

If only we had metrics to solve this!


It's what happens when business majors/MBAs take over, and I'm not being cliche. Such people prioritize the "business" and see the products as a means to a monetary end, whereas IMO it should be the reverse. The business is part distribution infrastructure part funding mechanism for the products, nothing more.


I suspect part of the motivation for this is an environment that expects everyone to "work on something BIG" (as you say) all the time and a hyper focus on the short term.

Ambitious people who recognize the game will play it to advance their career regardless of how it might affect the long term prospect of the company or products.

Generally, a company culture that is focused on delivering tangible value, with the ability to recognize and fix behaviors that optimize for short-term career gain, will succeed in keeping this in check. This gets much more difficult to do in larger companies and organizations though.

Maybe it's some kind of "scale disease"; some natural law that makes companies less capable of innovation the bigger they get.


Yes, it's the people not the metrics.

Problem is solved in the second sentence:

>It’s a mistake to ignore the people that make those businesses work,

True.

>but you need to understand the numbers to know how the business is doing.

False.

You need people who understand the business, even more so without exact numbers, who can hands-down outperform those who rely on metrics instead.

That's why they were called "businessmen".


There's a difference between evaluative metrics and diagnostic metrics. You have a handful of top-level metrics that tell you whether things are good or bad on a dashboard. When they change, you have to dig in to see why, so you need more metrics. You segment, you cohort, etc.

Otherwise, you see GMV is down, and you don't and won't understand why.


I heard an analogy that goes like this: As a driver, all I want to see is current speed and RPM. But I like knowing that my car tracks a hundred other things, too, because when it fails, those other metrics will come handy. Or alert me before it fails. I heard it in the context of people analytics[0] but it seems to apply everywhere.

[0] https://youtu.be/WE2ePETzzYQ?t=1434


Your car is also using metrics to keep you safe, maybe even alive at many moments. Active stability control and ABS are massively data-driven and essential parts of modern driving safety and are always on.

I think there’s also an issue of layers of decision making and the requisite metrics at that layer. Abstraction is useful in business just as it is in operating a car.


> evaluative metrics and diagnostic metrics

I am stealing this; I called them "Opportunities & Firefighting" and "Health / KPI Metrics" in a recent post.


There is another quote:

Don't be data-driven, be data-informed


One thing I see far too rarely in choosing metrics is ensuring that every metric has one or more countervailing metrics to balance the choices made. In other words, if you are optimizing for a metric, you also need to measure what is being sacrificed for it. The classic trio of cheap/fast/good is a great example of this, as are most engineering-style tradeoffs, like capacity vs range.

You will often see managers create metrics for the things they want to improve, without realizing that making a metric without measuring the countervailing metric is creating an incentive to throw the countervailing metric in the shitter. A good example of this is call centers which measure the time of call resolution without measuring customer satisfaction with the call. This is how you end up with a low quality, high volume call center. You can do the opposite by measuring only satisfaction and not speed. Now, maybe what your business needs is high volume, low quality, but that should be a choice you make. The problem with not measuring the countervailing metric is that nobody (officially) cares about how bad that metric gets. So, if you have a tradeoff situation, you automatically get an extreme even though that may not be the most profitable place on the spectrum. Perhaps your call center would be better for the company if it was volume 9 quality 2 instead of volume 10 quality 1, but if you only optimize for volume you can't make that choice.

This even applies to metrics you really want to go up, things like sales and profit. A manufacturing company that optimizes only for sales while ignoring manufacturing metrics will hire salespeople in preference to manufacturing workers until the inability to deliver product starts affecting sales. That might sound like a nice problem to have, but it can be a real problem, and perhaps sales 9 manufacturing 2 instead of sales 10 manufacturing 1 would result in better sales next year. It's better to have metrics which allow you that choice.
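A tiny sketch of the idea, assuming hypothetical call-center data: always report the metric and its countervailing metric together, so trading one off against the other is an explicit choice rather than an accident.

    from statistics import mean

    # Hypothetical per-call records: speed and its countervailing metric, satisfaction.
    calls = [
        {"handle_time_min": 4.2, "csat": 0.90},
        {"handle_time_min": 3.1, "csat": 0.60},
        {"handle_time_min": 9.8, "csat": 0.95},
    ]

    avg_handle_time = mean(c["handle_time_min"] for c in calls)
    avg_csat = mean(c["csat"] for c in calls)

    # Report the pair side by side; optimizing one while ignoring the other is the
    # "volume 10, quality 1" failure mode described above.
    print(f"avg handle time: {avg_handle_time:.1f} min | avg CSAT: {avg_csat:.2f}")
    if avg_handle_time < 5.0 and avg_csat < 0.8:
        print("warning: speed target is being met by sacrificing satisfaction")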


That's basically recognizing that Goodhart's Law exists and fighting it.

Unfortunately managers rarely have incentives to do it. Why invest in being more objective if it lowers the perceived value of your successes? Your bullshitting peer that isn't doing this will get promoted instead of you.

It can also be hard to even understand what you are sacrificing and why. I recently had an example where a product manager was trying to improve conversion while another team was tapping into cheap low-quality traffic. The PM always looked at global conversion and kept wondering why it was dropping despite his best efforts.


Absolutely true. Managers absolutely don't want countervailing metrics.

I noticed a huge problem at a FAANG in a 40 person team and proposed the design of a single metric that incorporated all costs including countervailing metrics.

There were no takers for the project. The senior manager leading 4 teams of 40 engineers told me privately:

Your proposal is relatively cheap to implement, so I can't ask for more HC for it, and it doesn't help me with empire building.

Your metrics will make the team achievements look smaller and weaker.

Your metrics will anger many of the teams, because the team switches the metric it targets each half. In your example, targeting customer throughput one half and customer satisfaction the next, thus running a merry-go-round. The team has become comfortable switching metrics each half, and it is simple work.

If your metric is set as a target, the engineering problem becomes a lot harder and ICs will be upset.

So, my proposal was canned.


That was nice of him to share that with you.


Well, he didn't call it empire building himself :D That's my word. And I have been working with him for several years already, so discussions are frank!

TBF, it can be a justifiable stance from an /r/antiwork POV. Why work terrible hours to make someone else rich!


To re-paraphrase a common saying: Half the metrics I track are useless, the trouble is, I don't know which half.

Obviously for core business metrics there's a direct cost associated with measuring, validating, and maintaining access, but even for more technical metrics that's true.

The issue is, without a comprehensive metric collection and analysis system tracking multiple metrics, troubleshooting reduces to a red light / green light level, which is not super useful. Is your site "down" because your uplink has gone offline, because your servers have, or because your metric collection system is throwing a false positive?

Same can be said for business metrics - is revenue down from customer acquisition, deal size, or orders per customer? If you don't track more metrics than the minimum, it's hard to tell.


>Half the metrics I track are useless, the trouble is, I don't know which half.

You can measure how correlated metrics are to key metrics. You can ask questions like "Does the rate that my app crashes affect the amount of money people spend when using the app?"
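As a sketch of that check (all column names and numbers hypothetical), you can correlate every tracked metric against a key metric and use the result to decide which ones earn their keep:

    import pandas as pd

    # One row per day/week: candidate metrics alongside the key metric.
    df = pd.DataFrame({
        "crash_rate":        [0.01, 0.03, 0.02, 0.08, 0.05],
        "page_load_seconds": [1.2, 1.4, 1.1, 2.9, 2.2],
        "emails_sent":       [10000, 12000, 9000, 11000, 10500],
        "revenue_per_user":  [4.1, 3.6, 4.0, 2.2, 2.9],
    })

    # Correlation of each tracked metric with the key one; metrics that never move
    # with (or against) it are candidates for dropping.
    print(df.corr()["revenue_per_user"].drop("revenue_per_user").sort_values())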


The author's framework ("you just start small and only add one at a time as needed") and the given example illustrate exactly the process that leads to too many metrics. Pick a few metrics, add another, add another, add another. If you have 5 teams working on a product you quickly end up in the 50 metrics scenario.

Too many metrics can be a problem but it's not the real problem. The real problem is choosing metrics without any regard for the decisions they're supposed to inform.

When you understand that the purpose of all measurement in business is to reduce uncertainty for a decision to be made, everything comes into focus, and you'll have a natural constraint on your scope of measurement.


That would be true if you didn't assign a cost to each metric that you add. It was when the cost of tracking metrics plummeted that this started to become a problem, as there was no external friction in collecting more. If you assign a cost, and respect that cost, you shouldn't continuously add more.


I think the framing should be around what makes sense on a dashboard, not how many metrics total that the entire company (or whatever cohort) wants to collect.

A dashboard should not distract from what you're trying to do. It should show only the information you really need [1] so that you can focus on actually driving.

I worked at one company where we had so many signals on ours that it looked like the dash of a 747 [2] to me. Pilots go through a lot of training and tests to show that they understand those and how to respond to different scenarios! Software startups are a bit more seat-of-pants than that.

We can have dashboards for different purposes and roles, for different activities we want to drive. But the dashboards should focus on use. And if there are metrics that don't fit on any of the dashboards, they're not about driving anything - maybe consider scrapping them.

[1] https://www.pinterest.com/pin/58054282668632186/

[2] https://www.reddit.com/r/cockpits/comments/ayr4z7/boeing_747...


> Most companies have far too many metrics. Companies might have a dozen dashboards, each with 4-5 metrics, leading to 50-100 metrics being tracked at any given time.

I think it’s important to divide “internal” / operational metrics, which a team monitors but doesn’t expect anyone else to care about (say, error rate of the DB or some internal ops task latency) vs. “external” metrics which are rolling into OKRs / KPIs or otherwise being reported out as health metrics of the team.

I struggle to believe that most companies with 50-100 metrics are actually using most of them as external metrics.


YMMV but this was based on dozens of conversations with companies where they spent hours each week reviewing those dozens of dashboards. I was specifically talking about the metrics used by any given team, as you are right that different teams might use different metrics.

"The key is that any given person shouldn’t be using any more metrics than absolutely necessary to do their job well."


I think this article is just wrong, or at best describes the author’s experience in a dysfunctional org.

There are a lot of pathological metrics patterns. I used to work at a FAANG/MANGA that prided itself on being metrics driven; the challenge was that most people suck at picking metrics. The most common anti-pattern is choosing metrics based on what is easy to measure.

The most valuable metrics I have encountered are metrics that directly measure your team's success in its stated mission. The problem is a lot of teams have bad missions. I tried to tell every team that I worked with that their mission should read like a problem statement, not like a technology statement. For instance, instead of saying "we are the team that owns the foo service", the team needs to think about the business problem that inspired the foo service, and make solving that business problem their mission statement.

Once you have clarity on the business problem that your team exists to solve, then you can start thinking about metrics that measure how well you are solving that problem. These are the most valuable kinds of metrics.

Now, the thesis of the article was that teams had too many metrics, and that this was bad. Once a team has clarity of mission, they have to implement technology to solve the business problem that is their mission. After you have clear metrics that tell you how well you are accomplishing your mission, you then need metrics to tell you how well your technology is functioning. Do not mix the two kinds of metrics. It is a happy coincidence if the technology functioning well corresponds directly to how well you are accomplishing your business mission. More likely, the business metrics and the technology metrics need to be kept separate. You need metrics around the technology so that you can predict whether you are nearing a problem, detect whether you have a technology problem, and identify the nature of the technology problem that you have. Good metrics around technology will allow you to do all these things. You should not add metrics arbitrarily, but you should analyze your technology for where it is likely to break and prioritize adding metrics in that fashion.

My last point is about team dysfunctions. Other anecdotes in this comment thread have described organizations that lacked clarity of mission, or that had completely solved their mission yet did not pivot to a new one. In those cases, you end up with a lot of make-work, and that make-work may consist in part of implementing new metrics. This is just plain org dysfunction and not really a metrics problem.


YMMV but this is based on my experience building some of the largest analytics platforms like Flurry and Outlier.ai which were used by hundreds of thousands of companies. The only dysfunctional company I worked at was Verizon and they... don't really use metrics.


There was a paper from Hitachi (I think) decades ago.

Their thesis was that one should devise a metric to solve a problem, and discard it once the problem was solved. Then on to the next problem.


What prevents the problem from coming back if you stop watching for that?


Don't overthink it.

This advice covers 80-99% of actual bugs. Real bugs don't reappear once fixed -- if they do, it means they were never fixed. Metrics designed for a bug that was fixed may be relevant but specifically ARE NOT TARGETED at any new bugs.

Though all of this really highlights the core problem with metrics and dashboards and such in the corporate world: if the problem were a problem and not just a convenient political puppet, we'd have solved it by now instead of talking about what thinking about solving it looks like.


I think applying actual engineering wisdom to software in this case is a mistake. Nobody writes an automated test for a bug and then deletes it once the bug is fixed. You wrote the test in the first place so you get alerted when there is a regression before it gets into production. Regressions happen all the time.


In isolation, bugs don't suddenly reappear, no. So if you only regression test at the unit level, it's very unlikely the test will ever fire red again. What you should do is write use-case regression tests. This will protect you from major regressions, such as "user registration is taking several minutes", because this is a major bug that can occur for a million new reasons. Even if last time the reason was isolated to "database query x was running very slow", your test should track the former, not the latter. See the sketch below.
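A hedged sketch of the distinction in pytest style (the `client` fixture, the endpoint, and the 2-second threshold are made up): the assertion is on the user-visible outcome, so it fires for any new cause of slow registration, not just last time's slow query.

    import time

    def test_registration_is_fast(client):
        """Use-case regression test: registration must stay fast end to end."""
        start = time.monotonic()
        response = client.post("/register", data={"email": "a@example.com", "password": "pw"})
        elapsed = time.monotonic() - start

        assert response.status_code == 200
        # Fails for *any* new reason registration gets slow, not only the old
        # "database query x was running very slow" cause.
        assert elapsed < 2.0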


I would like you to talk to my programming team about regressions.


I think the argument in this (narrow) discussion would be you haven't fixed (or identified / addressed) the problem.


If the problem is solved, it does not return due to improvements in process. "Watching for it" bifurcates between actively putting into your view (the problem at hand) or by putting a monitoring daemon in place to inform you when a threshold has been broached.


Metrics can be interpreted in so many ways and are not the solution to the problem itself. Metrics provide some data points, but data points by themselves do not solve any problem. That is why metrics are only part of the truth here. Consider everything else NOT taken into consideration by the metrics.


Put another way:

Metrics are simply knowledge. But knowledge isn't power.

Understanding is Power.

That is, consuming the metrics dots isn't enough. The difference comes from understanding the broader context, as well as the connections between the dots.

Believing in "Knowledge is power" is a trap.


My favorite saying I've seen recently: knowledge is knowing that Frankenstein was the doctor who created the monster. Understanding is knowing that Frankenstein was the true monster.


Very few folks consume metrics; most want insights and intelligence, but there is a lot more focus put on the data pipeline.

Edit: I don't mean to say the pipeline is not important, it is required, but the outcome is intelligence


Using metrics as knowledge instead of applying them for understanding is just another case of cargo-culting.


It's easy to lie with metrics, but it's MUCH easier to lie without them.


My last company lost its mind and started getting into meta-metrics like:

X% of systems have metrics. Y% of uncovered systems get added to metrics in Q2. Z% of metrics captured meet their SLO.

Meanwhile the SLOs themselves were pulled out of thin air because management only talked to upper management and not actual users.


This sounds like a Kafka novel. I can see why you left.


Promotion Driven Metrics


Treat metrics like automated tests - if they are not green you have action to take.

You can thus have thousands but hardly ever will you have thousands all red.

And when you create a new metric, TDD style, the company will not yet be able to make it green; it has to be designed / changed so that it can.


Metrics also share some problems with automated tests - they can be green for years, they don't test anything useful, and the real problems are elsewhere.


But it's not always clear what "green" is. For example, New Users going up can be a good thing, but New Users going up by a lot can represent a fraud attack. Metrics always require some interpretation to provide value, which is why we have dashboards instead of alerts.

There are some metrics that are binary good/bad and I agree that in those cases you should just have an alert.


... but still rely on metrics. Yes, metrics are an easy way to compare how things have changed, but the system as a whole is more complex than that. There are things that are binary (you get there, or you lose), and there are things that are multifaceted where it is not easy to put a single linear number on them. And Goodhart's Law (https://en.wikipedia.org/wiki/Goodhart%27s_law) should be taken into account, especially when you lower the number of metrics. They may be a guide, but be careful if you make them targets.


I think of metrics like I do accounting. Accounting is all about measuring parts of the business. You track every dollar spent, lost, owed, etc. Every economic transaction is measured. But nobody wants to just stare at a list of raw transactions. You have to summarize those figures: you group them into revenues and expenses, assets and liabilities, and you break it down by department, by function. You create a picture of what your business is doing. In the end, all some people really care about is profits, some people want a measure of solvency, others want to know how big your company is. Accountants have actually thought deeply about metrics and written papers about why they measure what they do, and I think something close to the golden rule is: does this information have the capacity to influence someone's decision?

Information is meant to help you make a decision: should we add more servers, should we end this service, can we sell this feature, are our customers satisfied? Every metric should help you answer a question. Ask yourself, what decisions are you trying to make and what information would help you make that decision?


It is very easy to see the benefits that metrics provide (typically you start at no metrics at all), but incredibly hard to measure the costs.

This bias is ubiquitous: metrics, unit testing, abstractions, startup funding, documentation, security, etc. All of those typically start at zero. All of those are perceived as "good", therefore more is always better. You rarely find people with the balls to say "we need less documentation" or "we invested too much effort in security". Kudos to author for recognizing the same issue with metrics.

And he's right: the solution is to recognize our bias and, for things where cost is hard to measure, just assign some cost to it. Though instead of the million-dollar value suggested by the author, you should just treat the cost as exponential.

Having 3 metrics is better than 0.

But having 50 metrics is way worse than 3.

In fact, having 50 metrics is likely way worse than 0.


Very relevant to me right now.

But doesn’t the “What About the Details?” section acknowledge that you will need many of the other metrics later anyway, for different people, projects and problems? Isn’t the message then “You should know which metrics are relevant to you and only look at them”?


Yeah, it's hard to generalize these kinds of things since companies can operate so differently. I've seen very large companies where everyone can use the same 5 metrics and others where each team needs their own set of 5. I think the key thing is that no team is using any more than absolutely necessary.


The focus on "data driven decision making" tends to get things stuck in a weird local maximum and result in https://en.wikipedia.org/wiki/Goodhart%27s_law .

"Not everything that counts can be counted, and not everything that can be counted, counts."


I agree with the sentiment but the example is not very compelling. Tracking averages is its own pit of problems.

To be able to understand your funnel in a marketplace you’ll need to at least track profile and product views & click-throughs in addition to the metrics mentioned. How can you possibly tell what’s wrong when all you have is an average GMV metric going down?


I agree, but I think there is a difference between metrics and having data to investigate. A metric is a value you are tracking over time, but when something changes you will likely need to do an investigation. That investigation might be the raw data, it might be some dynamic queries or it might be other metrics.

Tracking metrics because you think they might be useful in the future for diagnosing other problems doesn't make sense with modern systems as dynamic queries are so fast.


Priority metrics vs. primary and secondary metrics, if that's a framing that helps us?


Counterpoint: you don't have enough good metrics.


The beatings will continue until metrics improve


Ha, and I bet there is a metric for that too!


Indeed, knowing what metrics need to be monitored is vital.

But to do that, you need to understand the system producing the metrics.


Not interested, based purely on the title. The article may have insight, but these anti-intellectual, clickbait-esque headlines are too tired to heed.


You can never have enough metrics.



