
Book recommendation: Measure what matters - seshagiric
https://www.gatesnotes.com/Books/Measure-What-Matters?WT.mc_id=20180516172000_MeasureWhatMatters_BG-LI&WT.tsrc=BGLI&linkId=51788990
======
tacostakohashi
I still see this thing of expertise being overvalued in many organizations,
and it's completely toxic. I find it's more common at successful private
companies, but it can happen at public companies too.

Typically, you get people that joined 10+ years ago; they implemented certain
key systems / features when the company was small, nimble, and didn't have too
many actual customers to deal with. They probably worked long and hard to make
that happen, probably largely without interference. Then the company / product
is successful, they get promoted, have kids, and end up being a full-time
"expert" who writes emails and has opinions on present-day implementation
details and changes, but is nowhere to be found when it comes to actually
implementing anything or dealing with practical, day-to-day problems.

Meanwhile, the more junior, recent joiners who _are_ doing all those things
get no say in the direction of what they're working on, and eventually leave
to be replaced by the next batch of new joiners.

~~~
setr
I was that expert recently, but in a university lab, with undergrads as my
juniors... I spent 90% of my time trying to convince people NOT to rip out
everything and start from scratch: not to depend directly on database details,
not to shove everything into MongoDB and React, not to implement a new full
stack for every subsection of the project they were working on.

One guy was supposed to modify an Android app to refresh every 5s for a
simple, one-user management view; he started by setting up a Firebase
instance.

Another decided that an RDBMS was no good, and spent the entire semester
trying to convince me to switch to MongoDB, because "mysql can't handle the
number of requests we're making" (~50/day, maybe ~1k/day long into the
future).

A third guy decided the message queue was a bottleneck, and came to me with a
proposal to reimplement it; after about 15 minutes, I finally pulled out of
him that it would take an expected 6 weeks to implement, and EVERYONE would
have to stop working in the meantime. Simply looking at the logs, the message
queue was clearly not a bottleneck, and there was no reason a replacement
couldn't be worked on while the current one stayed in use...

My primary job was just keeping the project from burning down, let alone
improving it. And _every_ one of those students thought I was holding the
project back, as I desperately tried to maintain a working system.

That might have just been the inexperience of undergrads, but I have some
sympathy for your "experts". Everyone's just gobbling up the marketing,
confidently pushing ideas derived from inexperience, and no one has any
respect for history (at least, at my uni). I was also only one year removed
from the project, and it had grown from 2 developers to 20, so not the same
scale as you're suggesting.

~~~
senorsmile
It can't be overstated how important it is for more senior level members of a
team to be aware of this happening. Very often a junior has "great" ideas, but
they are based on very little factual data. They are not yet able to see the
whole picture, and it is the job of seniors/managers etc. on the team to guide
them.

------
wenc
Management techniques come and go, but at the core of management is an
understanding of _systems /process_ and of _human behavior_. A really good
book on management is Andy Grove's "High Output Management" [1], which to me
strikes a good balance. It's a fairly popular book in SV and probably a known
quantity to most HN readers. It's also notable in that it wasn't written by
some management guru or business prof who has never managed anyone (e.g.
Drucker), but is instead drawn from the experiences of the CEO of a
significant company (Intel). The blue-collar equivalent is Plain Talk by F.
Ken Iverson, former CEO of NuCor Steel. [2] (not the author of APL -- that's
Ken E. Iverson :)

[1] TLDR: [https://medium.com/@iantien/top-takeaways-from-andy-grove-s-high-output-management-2e0ecfb1ea63](https://medium.com/@iantien/top-takeaways-from-andy-grove-s-high-output-management-2e0ecfb1ea63)

[2]
[https://www.goodreads.com/book/show/1450374.Plain_Talk](https://www.goodreads.com/book/show/1450374.Plain_Talk)

~~~
rossdavidh
"What gets measured gets improved". Which really means, "what doesn't get
measured, will be traded off for what does".

~~~
bluGill
More importantly, make sure you are measuring what really matters and not a
proxy. I shut down our code coverage builds some years ago when I realized
management was using that as a measure - the potential negatives from
measuring it are far worse than any possible gain from an engineer improving
anything. The measure is still useful, but until I can be convinced it won't
be abused, I won't measure it.

~~~
ehsanu1
They were using it as a measure of what exactly? I'm having a hard time seeing
how it can be interpreted in a way that's a net negative.

~~~
Jach
What happens when a team's code coverage drops below the mandated minimum? How
do different teams' coverage numbers affect their value ranking against other
teams? What's going to stop teams from gaming the number with techniques like
[https://www.pavelslepenkov.info/?p=110](https://www.pavelslepenkov.info/?p=110)
?
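
As a hedged illustration of that last point (the function and test here are hypothetical, not from the linked article): line-coverage tools only record which statements executed, so a "test" with no assertions can drive a team's coverage number to 100% while verifying nothing at all.

```python
def shipping_cost(weight_kg: float) -> float:
    """Flat 5.0 up to 1 kg, then 2.0 per additional kg."""
    if weight_kg <= 1:
        return 5.0
    return 5.0 + 2.0 * (weight_kg - 1)

def test_shipping_cost_gamed():
    # Exercises both branches, so a coverage tool reports 100% line
    # coverage for shipping_cost -- yet nothing is asserted, so any
    # bug in the pricing logic would slip through unnoticed.
    shipping_cost(0.5)
    shipping_cost(3.0)

test_shipping_cost_gamed()
print("coverage gamed: test passed without checking anything")
```

Once the coverage number itself is the target, tests like this satisfy the metric while destroying the thing the metric was meant to proxy.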

Lots of net-negative consequences can occur when management decides to measure
things. Lots of net-positive too, otherwise they wouldn't ever do it, but
developer productivity proxies are notoriously hard. I'd ask any manager
trying to make one whether they've ever done or read about Deming's red bead
experiment
([http://maaw.info/DemingsRedbeads.htm](http://maaw.info/DemingsRedbeads.htm)).

~~~
Bjartr
That red bead experiment is an interesting illustration, thanks for sharing.

------
tomaha
The problem is to determine what matters. It sounds nice, but especially at
the bigger companies, which measure pretty much everything today, it's one of
the reasons there is no creativity. It optimizes for short-term goals (most of
the time focused on money/engagement) and discourages long-term goals or
riskier approaches. IMO that's why innovative companies turn into optimized
earning machines that produce little of use.

~~~
chiefalchemist
> "The problem is to determine what matters."

Spot on. Problem solving is relatively easy; the hard part is problem
identification. As a rule of thumb, The Five Whys is a wonderfully simple (but
effective) tool for problem identification.

[https://en.wikipedia.org/wiki/5_Whys](https://en.wikipedia.org/wiki/5_Whys)

------
edpichler
I am using OKRs with a small team, and the results have been very good.
Basically, it's a very clear communication tool. When used correctly, the team
simply knows the measurable things the company expects of them.

OKRs help with "Mastery", one of the three pillars of the book "Drive: The
Surprising Truth About What Motivates Us".

This book, "Measure What Matters", is my next read. Thank you HN.

~~~
sixhobbits
I recently finished Drive and loved it, and I'm also working with a small team
where we recently implemented OKRs. I'm now working through "Punished by
Rewards", and also really enjoyed Andy Grove's "High Output Management" (in
spite of the title) and the original "Flow" book, in spite of its esoteric
style.

Would love to hear what else you've read and enjoyed in this area.

~~~
edpichler
Thanks for your book recommendations. It seems we are facing similar
challenges. I will save them on my personal kanban to read later.

I recently finished Traction, which was very useful to me; I discovered I was
so ignorant that I was blind to the marketing side of my product.

Also, there is an interesting PDF you may find useful if you need to build on
other people's work:
[https://www.dropbox.com/s/79v5vmprkz7dpb9/Nobody%20gets%20cr...](https://www.dropbox.com/s/79v5vmprkz7dpb9/Nobody%20gets%20credit%20for%20what%20never%20happened.pdf?dl=0)

------
beambot
Too often, OKRs seem to be tied to quarterly or bi-annual peer reviews. I find
that cadence limiting -- especially in a fast-paced or small team. A
compelling alternative is SMART goals, which force you to be concrete about
the OKR goals:
[https://en.wikipedia.org/wiki/SMART_criteria](https://en.wikipedia.org/wiki/SMART_criteria)

SMART is an acronym: Specific, Measurable, Assignable, Realistic, and Time-
Bound. You can adjust the timing to be a bi-weekly sprint, monthly, etc.

YMMV.

~~~
btian
If you read Doerr's book, he specifically said OKRs should not be tied to
performance reviews, compensation, and promotion. Also, OKRs should be
aspirational.

~~~
gowld
Has tech company management read Doerr's book?

~~~
btian
Which company? I'm sure Larry Page has.

I was just pointing out that the complaint was addressed in the book. Not sure
why you're asking about management.

------
tanderson92
The story is tragically ironic given the Gates Foundation history with the so-
called "Small Schools Myth":

[https://marginalrevolution.com/marginalrevolution/2010/09/the-small-schools-myth.html](https://marginalrevolution.com/marginalrevolution/2010/09/the-small-schools-myth.html)

[http://www.nbcnews.com/id/38282806/ns/business-bloomberg_businessweek/](http://www.nbcnews.com/id/38282806/ns/business-bloomberg_businessweek/)

~~~
gowld
I wonder how much Gates was blinded by bias -- Gates himself attended a
small... expensive, private school.

~~~
t3h2mas
I'm listening to "Thinking, Fast and Slow" right now. In it, the author
discusses the Gates schools study, and others, as well as bias.

The author claims that small sample sets always produce the most extreme
results, and while statisticians know this, measurements continue to be made
using small data sets.

The author also claims that you could find data to support smaller schools
being worse than larger ones due to the same issue.
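
That small-sample effect is easy to demonstrate with a quick simulation (a hedged sketch with made-up numbers, not data from the study): give every student an identical score distribution, vary only school size, and the small schools end up at both extremes of the ranking.

```python
import random

random.seed(0)

def school_mean(n_students: int) -> float:
    # Every student draws from the same distribution (mean 500, sd 100),
    # so any difference between schools is pure sampling noise.
    return sum(random.gauss(500, 100) for _ in range(n_students)) / n_students

small = [school_mean(20) for _ in range(500)]   # 500 schools of 20 students
large = [school_mean(500) for _ in range(500)]  # 500 schools of 500 students

# The spread of average scores is far wider among the small schools, so
# they occupy both the "best" and "worst" slots of any ranking -- despite
# there being no real quality difference at all.
print("small-school spread:", round(max(small) - min(small)))
print("large-school spread:", round(max(large) - min(large)))
```

This is exactly why cherry-picked data could "show" small schools as either the best or the worst: their averages are just noisier.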

Great book. It leaves me questioning the 'why' behind everything I think I
know.

~~~
iooi
Careful with drawing too many conclusions from that book. Your takeaway is
pretty ironic, considering that was Kahneman's own problem: "I placed too much
faith in underpowered studies" [1]

[1]
[https://replicationindex.wordpress.com/2017/02/02/reconstruction-of-a-train-wreck-how-priming-research-went-of-the-rails/comment-page-1/#comment-1454](https://replicationindex.wordpress.com/2017/02/02/reconstruction-of-a-train-wreck-how-priming-research-went-of-the-rails/comment-page-1/#comment-1454)

~~~
tylerhou
Chapter 4 contained most of the errors on priming; the rest of the book is
still solid.

------
neves
I've worked in companies where everybody knew what mattered, but everybody
decided to measure what was easy to measure. So everybody optimized for
scoring points on what was easy to measure. A lot of the time it was
counterproductive.

------
vowelless
Bill Gates has talked about this general concept before. Here is one of his
Annual Letters, titled "Accelerating Impact through Measurement":
[https://www.gatesnotes.com/About-Bill-Gates/Why-I-Write-an-Annual-Letter](https://www.gatesnotes.com/About-Bill-Gates/Why-I-Write-an-Annual-Letter)

------
valeg
I also recommend "The Tyranny of Metrics" by Jerry Z. Muller [1] for balance.

[1] [https://www.goodreads.com/book/show/36644895-the-tyranny-of-metrics](https://www.goodreads.com/book/show/36644895-the-tyranny-of-metrics)

------
iamleppert
Intel has been in decline for years now. They have tried to launch new product
lines and have mostly failed at almost everything they have tried. They are
invisible in the mobile space, and soon even Apple won’t be using them in
their products. They failed completely in IoT (Edison), still don’t have a 1:1
competitor to Nvidia, and are going to miss out on the self-driving car
revolution. They don’t really have any kind of cloud or software business.
Basically they’ve been riding on x86, and that gravy train is showing its age.
They have recently been doing some cool things with drones, but I’m not sure
that’s going to last, and I fully expect it to fail eventually.

Operational-style management is important when you have a product that needs
to be tuned and improved incrementally. But that same style of management is
ill-suited and emotionally bankrupt for the creative types who are the source
and inspiration of the product vision in the first place, and it shows in
Intel’s floundering roadmap.

One-size-fits-all management and single-minded solutions do not work. A
process is a tool, not a replacement for empathy or thinking.

~~~
aventrix
With regard to the self-driving car revolution, last year Intel purchased
Mobileye. Just today it was announced that they closed a deal to include their
hardware/software in 8+ million cars. So don't count them out yet.

Not to discount any of your other points, which seem quite on point. Apple
surely has been itching to replace them with their A series chips.

------
rb808
Has anyone actually read this and liked it? It's a recommendation I'd normally
follow, but it sounds like just another MBA class.

~~~
cpeterso
I read the book and it is OK. It provides some history behind OKRs but I don't
feel it is a practical reference for using OKRs. There are better resources
online. I like this introduction to OKRs:

[https://medium.com/startup-tools/okrs-5afdc298bc28](https://medium.com/startup-tools/okrs-5afdc298bc28)

And Google's re:Work site as a practical reference:

[https://rework.withgoogle.com/guides/set-goals-with-okrs/steps/introduction/](https://rework.withgoogle.com/guides/set-goals-with-okrs/steps/introduction/)

------
arikr
To anyone who reads "Measure What Matters," I strongly recommend "How to
Measure Anything" as an additional book. I think it'll be really useful in
enabling people to measure things they may have previously assumed to be
immeasurable, which means those things can then be better optimized and
improved.

------
Double_Org
I worked at a consulting firm that sells this sort of philosophy. A big part
of my job was explaining basic concepts from statistics and measurement to MBA
types. A big problem with quantitative management approaches is that the
people who are in charge of implementing them have weak math/statistics
ability.

------
nappy-doo
I used OKRs at Google for years, and I love them. I have never really thought
to apply them to my personal life, but I remember people at Google having
personal OKRs they published. Having read this, and thought about it outside
the work environment, I think I might do the same in my life.

------
tootie
I heard the term OKR for the first time earlier this week. I'd always used the
term KPI (key performance indicator).

~~~
erikstarck
Similar but not the same. A KPI is backward-looking/lagging, while an OKR
should crystallize an ambition and look forward.

------
LaundroMat
Too large a focus on measuring can turn into only valuing what's measurable.
Beware the McNamara fallacy [0].

[0]
[https://en.wikipedia.org/wiki/McNamara_fallacy](https://en.wikipedia.org/wiki/McNamara_fallacy)

------
kuwze
Could someone please help me? I read an article a while ago about how someone
at the Gates Foundation was using bayesdb/bayeslite to evaluate the risks of
investments. I have tried every search I can think of on hn.algolia.com, but
it eludes me.

------
oh_sigh
Sequel: You can't miss what you can't measure.

------
justherefortart
I love Bill Gates, but I've never heard Microsoft's management described as
anything other than terrible.

Stack ranking, which they used for God only knows how long, is a joke. It
makes people focus more on the game and politics than on doing a good job.

~~~
jonknee
He hasn't been in charge of Microsoft for almost 20 years and stack ranking
was actually introduced by Ballmer.

~~~
justherefortart
Fair enough, but based on the biographies and books I've read about the first
decade+ of Microsoft, treating employees like shit was pretty common.

Hell, there's a reason Paul Allen left and never returned (Ballmer and Gates
talked about getting his shares back when he was diagnosed with cancer).

~~~
pinewurst
Another data point: anyone who’s familiar with the workings of the Gates
Foundation knows it’s a snakepit. Yes, Bill and Melinda mean well, and it’s
far from Zuck’s scam foundation, but it’s hardly an efficient organization.

------
vanderZwan
I assume the book is supposed to tell me what matters, because based on the
blog post it sounds like the idea is to decide what matters and measure that.
Well, maybe you have the wrong idea of what matters, and even if you _do_ have
the right idea, you may have the wrong idea of how to measure it. And even if
you _do_ have the right idea of how to measure it, you need to interpret it
and put it into context. So many turns at which this can go wrong.

If the book doesn't give some really good explanations for these things, it is
akin to telling someone to eat healthy: I don't need to be told to eat less
sugar and saturated fat, and more leafy greens. Actually implementing that
diet in a way I can keep up permanently is trickier than one might suspect;
intuitions tend to be way off. Good advice would actually help me with _that_.

