
A software engineering manager guide to measuring an engineer’s performance - jonaldomo
https://www.jmoses.co/2019/07/08/software-engineering-manager-guide-measuring-performance.html
======
weliketocode
> It is not realistic to give an early software engineer defects within the
> first few months of a project assignment. So this skill area might not apply
> until a little later in their career.

what?

Please don't listen to this. Give junior engineers bugs right away. It forces
them to set up their environment for debugging and begin to understand the
flow of the project.

Even if he/she needs to be guided to the solution, this is a GREAT litmus test
for the quality of your documentation and the ease of environment setup.

~~~
lnsru
This!!! Nobody ever gained skill from writing vanilla code or “hello world”
type programs. Experience comes from spending days, and sometimes weeks, on
ugly bugs. Bugs are perfect for leveling up juniors quickly and for finding
the ones with weak motivation, because having no progress for hours is
really, really frustrating. I recently went through this cycle teaching a
colleague and found out there was no motivation at all.

~~~
bitL
Some people thrive when bugs are assigned to them early; some people thrive
when whole new functionalities are assigned to them. Usually the intersection
of these two groups is close to empty, and the worst mistake is to assume
that either bugs-first or new-stuff-first is automatically good.

~~~
0xEFF
When you give someone new functionality to implement and they don’t have
experience fixing defects, they’ll likely add more defects along with the new
functionality.

~~~
bitL
I am in the group that prefers new functionality; I can build a
state-of-the-art $10M business from scratch in 3 months, but if you ask me to
debug old stuff for long periods of time, I'll most likely leave or
underperform. It's good to know which people you have on your team instead of
assuming certain traits.

------
guitarbill
> The Lake Wobegon Strategy famously coined by Google and Peter Norvig claims
> that you should always hire above your team average. Doing so increases the
> quality of your team.

Ugh, not this again. Obviously don't hire people who aren't good at their job.
But most improvements come from investing in your team.

> are they putting their code up for review [...]

Missed one: are their code reviews/pull requests high quality? I.e., do they
go out of their way to document how they tested the change? Reproduction
steps? Do they invest time in making their changes as easy as possible for
other people to review? Or do their code reviews always take multiple rounds
due to sloppiness?

~~~
Consultant32452
Investing in workers is hard. I'm in a mentoring/leadership position. I
approach this in two ways. One is the general mentoring everyone gets during
regular stuff like code reviews. The other way is sometimes I see someone who
I think has real promise and kind of take them under my wing, give them lots
of 1on1 attention, etc. They improve exponentially and then leave. One guy I
mentored had been with the company for over 10 years. No one had ever
"invested" in him. He had a great attitude and work ethic, but terrible
skills. I taught him how to program. He left the company and doubled his
salary. That was a year and a half ago. This week he called me to thank me
because he was changing jobs again and doubling his salary again.

So my current employer went from a low-skill, high-dedication worker to no
worker because I invested in him. I'm going to continue helping/mentoring
people because I find it fulfilling, but if I'm honest, it's bad for my
employer. They
currently pay good salaries in line with the market. There's no way they can
justify giving a person who's been with the company for 10 years 4x salary
growth in 2 years. How could they even reasonably measure the market value of
his skills changing so much so quickly?

~~~
jimbokun
> They currently pay good salaries in line with the market.

> There's no way they can justify giving a person who's been with the company
> for 10 years 4x salary growth in 2 years.

One of these must be false. According to "the market", this employee is now
worth 4x previous salary, so then how can your company be paying "in line with
the market?"

~~~
Consultant32452
OK, so it's more like: the business is in the market for an $80k/yr
developer, not a $300k+ developer. The words changed but the result is the
same. For the work they're doing, it honestly doesn't make sense to pay
anyone that much.

~~~
blub
This is what many are missing: for some jobs the company can only get so much
value, no matter how great the developer.

------
perlgeek
> Code reviews are probably the first thing a new engineer can start doing in
> a new role.

That really depends on what you want the code review to achieve.

Catch typos? Yes, a new engineer can do that.

Check if code fits into the existing architecture, adheres to the invariants
of the code base, uses base libraries idiomatically etc? I don't think a new
engineer can contribute that from the start.

> Creating metrics through issue trackers and time sheets

Which metrics? It's far too easy to create metrics that are easy to measure
rather than metrics that actually benefit the business when optimized for
(and developers _will_ optimize for / game a metric when it's used to assess
their performance).

~~~
shados
Code reviews have two purposes. They help the person whose code is being
reviewed improve quality and catch issues, BUT ALSO the reviewer gets
familiar with the code being pushed and learns stuff. It's super important
for new engineers to review code as soon as possible for the latter.

~~~
jolmg
> BUT ALSO the reviewer gets familiar with the code being pushed and learn
> stuff.

The reviewer is supposed to determine what goes in or not, so how can this be
an opportunity to learn from what goes in? They're the one who's supposed to
determine that! Is your idea of a reviewer someone who just spectates all code
going through? It's like saying people who don't know a subject should grade
work from students taking a class in that subject, as it's a great opportunity
to learn from what gets turned in.

~~~
BeetleB
>The reviewer is supposed to determine what goes in or not

This is not a given, and in fact, I would not recommend it.

For small code reviews with only one reviewer, the reviewer gives feedback
and they have a conversation about it, but the developer should have the
final say. The problem with giving the final say to a single reviewer is that
you'll get plenty of pointless code changes to suit the reviewer's individual
preferences. Don't waste time on what is a perfectly reasonable difference of
opinion.

If you have more than one reviewer, but still a small code review, the final
say should be with the reviewers, if there is consensus (you could also go
with voting, with its pros and cons). Having multiple reviewers reduces the
likelihood of individual preferences dominating.

If it's a significant code review, the final say should be with an experienced
moderator.
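
The three-tier policy above could be sketched as a tiny decision function (a
hypothetical helper, not something from the article or the thread):

```python
def final_say(num_reviewers: int, significant: bool) -> str:
    """Who gets the final say on a code review, per the policy above.

    'significant' means a large or high-risk change.
    """
    if significant:
        return "moderator"        # experienced moderator arbitrates big reviews
    if num_reviewers <= 1:
        return "developer"        # one reviewer: developer keeps the final say
    return "reviewer consensus"   # several reviewers: consensus (or a vote)

print(final_say(1, False))  # developer
```

The point of the sketch is just that "who decides" is a function of review
size and reviewer count, not a fixed property of the reviewer role.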

------
Dayshine
"Process improvements" is a bit weird. You measure a software engineer's
performance by their DevOps skills?

It's a completely different skillset, and you're probably only measuring how
eager people are to have breadth of knowledge or how familiar they are with
your specific system.

Similarly, the explanation for "Debugging and troubleshooting complicated
issues" is a bit odd. Why is knowing where the log files are, and knowing how
to do complicated test setups for your specific environment, a performance
measure? Again, that's not really measuring skill but familiarity.

The other two measures are just "Do they complete tasks" and "Do they follow
code review guidelines", neither of which are very good measures beyond
pass/fail.

The conclusion is:

> Once a set of skill areas for a role is landed and agreed upon you will
> want to make sure your team knows in advance what they are being measured on

So, to do well in your business, I need to pick easy tasks, snipe code
reviews, make pointless CI tasks, and spend all my time learning the
build/test processes instead of actually developing the product? :)

~~~
sanderjd
Your comment seems to fall into a common fallacy, that "developing the
product" is entirely done by writing the code. That isn't true, there is no
product if it is not built, tested, deployed, and debugged. Lots of
programmers consider this "pointless" grunt work, but there is a reason it
tends to be picked up by the more senior engineers on the team. This sort of
work has more foundational impact than just writing feature code; it benefits
all features written in the future.

~~~
Dayshine
No, of course other things are important.

But if you measure only ancillary things, that's a pretty bad measure.

I don't expect every one of my team members to understand the entire build
system, testing setup, logging system. That's a waste of their time. They
should know some, and perhaps one of the areas in depth.

~~~
sanderjd
My point is: Those things aren't ancillary. They aren't a waste of time. Good
developers can figure out how to use logs to debug issues. They can figure out
how to fix the build and deployment system when it's broken. They can figure
out how to set up testing environments. If not them, then who? Managers look
for people who are self-sufficient. All of this stuff is a necessary and
important part of the job.

------
januzis
It seems to me there are two sides to an engineer's performance: ability and
productivity. Ability measures how complex a task an engineer can solve and
how well he/she can execute; productivity measures the actual amount of work
done. An able programmer is not necessarily productive, and a productive
programmer might not be able to handle tasks of high complexity.

As a technical lead, I feel that I'm able to judge the ability of individual
team members, but I'm having a hard time objectively judging productivity.
Simple count of PRs doesn't really tell the whole story, and some tasks look
simple in hindsight, when in reality it took a lot of effort to find a good
solution. There are also a lot of other complications I'm not going to dive
into, but the end result is that it's hard to have an objective productivity
evaluation based on the engineer's output only.

I'd be interested to know how other people evaluate individual productivity.

~~~
jrumbut
People say that no metric works, but I think almost any metric works; PR
count, for example.

You, a human being, would never actually confuse the engineers who have 0 PRs
because they spent the last month playing online poker with the engineers who
have 0 PRs because you entrusted them with developing an automated deployment
system for a legacy application and starting a mentorship program.

Many metrics will spot the outliers. But what is the business value of
knowing that an engineer rates 82.13 vs. 84.51?
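
That "flag the outliers, then ask why" use of a crude metric could be
sketched like this (all names, counts, and the threshold are made up for
illustration):

```python
from statistics import median

# Hypothetical monthly PR counts per engineer.
pr_counts = {"alice": 14, "bob": 11, "carol": 0, "dave": 12, "erin": 13}

# Flag anyone far below the team median; a human then asks *why* --
# carol may have spent the month building deployment automation.
threshold = 0.25 * median(pr_counts.values())
flagged = sorted(name for name, n in pr_counts.items() if n < threshold)
print(flagged)  # ['carol']
```

The metric only raises the question; it never answers it, which is exactly
the point of the comment above.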

~~~
sam0x17
Yeah, but what about the highly skilled employee who spends 90% of their time
playing online poker and 10% producing output on par with people in the org
who work 100% of the time? There is almost always one of these in any given
org.

~~~
januzis
I would actually be OK with it, as long as I can objectively say that even
though someone is slacking most of the time, he/she is very productive in
short bursts, so the overall productivity is on par with the team average.

The problem is that if you don't have an objective approach to evaluating
productivity, all sorts of biases come in, e.g. if I see someone coming to the
office at 11am and leaving at 4pm, I would subjectively rate their
productivity lower, even though objectively the results might be the same as
for someone who comes in early, and leaves late.

------
caymanjim
> Be a better management with transparent performance reviews and quick
> feedback with on focus areas.

This article contains dozens of grammatical errors (starting right off with
the title). Whenever I read something like this, I'm so distracted by the
errors that I can't even focus on what the author is trying to say. The
English language is dying.

~~~
jolmg
It's not just English. :(

------
sytelus
> Ability to write source code that adheres to specifications

This article reads like a series of bad advice from the 1990s.

\- No, engineers shouldn't be writing code that "adheres" to specifications.
They should study, understand, question, and contribute to the problem
statement and approach at all times (also called "requirements" pre-2000s).

\- The manager shouldn't be at the center of evaluating performance, but
rather should establish process and standards and collect feedback and
metrics.

\- Bonuses are inherently evil and will always motivate individuals to
exploit short-term gains at the expense of long-term sustainability. Any
performance evaluation strategy must keep this issue front and center at all
times.

\- A large part of performance feedback shouldn't come from managers but from
peers.

\- Performance reviews should never be entirely metrics-driven. No finite set
of metrics tells the full story, and all metrics are susceptible to gaming.

\- Don't treat newcomers as incapable of fixing bugs, or as able to do X but
not Y. Don't create a class system of seniors vs. juniors. Titles cause more
trouble than they're worth.

------
jimbokun
I think there's only one metric that is really effective for measuring and
evaluating software teams.

Tie their compensation to the product's financial success.

This means, to the greatest extent possible, everything relevant to the
product's success must be owned by the team and become their responsibility.
Maybe there are some cross-cutting concerns that should be the responsibility
of a group separate from any product team, but those should be rare and
require a strong justification.

This will have an amazing effect of clarifying prioritization of what to work
on and figuring out how to deliver it as quickly and reliably as possible.
Suddenly the whole team will be in the loop about what features are most
important to the customers. Suddenly the things blocking new features demanded
by the customers from shipping will be cleared away.

I think for any other metric you can devise, employee behavior will be
optimized, intentionally or unintentionally, for satisfying the metric, and
not for customer satisfaction with the product.

~~~
Rooster61
> Tie their compensation to the product's financial success.

This is an atrocious idea in anything much larger than a start-up. It leaves
the engineering team beholden to bad decisions made in other facets of the
company (sales, marketing, upper management). It's exceedingly frustrating to
write a solid, stable, performant program only to have marketing or sales push
it as something that it is not and nosedive the company due to customers
calling bullshit.

Start-ups are somewhat immune to this because of the lower barrier of
communication between departments, as many people will be wearing multiple
hats due to there being more things to do than people to do them.

This is very visible when executives begin choking off benefits across the
board. Nothing kills morale quite like losing your bonus because another
department isn't doing its job properly. I've seen more than one mass exodus
from a company as a result of this.

~~~
jimbokun
> It leaves the engineering team beholden to bad decisions made in other
> facets of the company (sales, marketing, upper management).

I mean, you're screwed anyway if those people can't do their jobs. No sales
means no revenue, which means engineers taking a pay cut or getting laid off.

So the sales, marketing, and product management people responsible for your
project need to be on the same product team as the engineers, and have their
compensation tied to the product's success.

If the engineers see themselves as the natural enemies of sales and marketing,
the product is probably already doomed.

------
dfeojm-zlib
I think of software developers having multiple qualities:

\- coolness (cooperation, proactiveness, professionalism)

\- carefulness

\- integrity (ethics)

\- morale

\- cadence (speed) of work

\- skills competencies (matrix)

\- grit (badassery)

\- estimated time to completion multiplier

------
pmiller2
All I see here is a bit of advice on what to measure, and nothing on how to
measure it. The problem with measuring software engineer performance has
always been in the how, not the what, so this article is just noise, IMO.

------
slaymaker1907
Lake Wobegon is a horrifically bad strategy under even the most basic of
assumptions. There are not enough engineers in the world available for hiring
to build a single large tech company under such a strategy, even with zero
turnover. A fixed bar of achievement can give you similar quality (assuming
you do have some level of attrition) but at a much faster hiring rate.

The best strategy after looking at various strategies under simulation is to
focus on developing the people you already have since fixing your quality
through hiring is very difficult and expensive.
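
The self-limiting nature of hire-above-average versus a fixed bar can be
illustrated with a toy simulation (the skill distribution, thresholds, and
pool size are all made-up assumptions, not from the parent comment): each
above-average hire raises the team average, so the bar ratchets upward, while
a fixed bar keeps admitting candidates at a steady rate.

```python
import random

def count_hires(candidates, fixed_bar=None):
    """Count hires from a shared candidate stream.

    With fixed_bar=None, apply the 'hire above the current team average'
    rule; otherwise use the constant bar. Toy model for illustration only.
    """
    team = [110.0]  # founding engineer's skill
    for skill in candidates:
        bar = fixed_bar if fixed_bar is not None else sum(team) / len(team)
        if skill > bar:
            team.append(skill)
    return len(team) - 1

random.seed(42)
candidates = [random.gauss(100, 15) for _ in range(5000)]  # skill ~ N(100, 15)

above_avg = count_hires(candidates)
fixed = count_hires(candidates, fixed_bar=110)
print(above_avg, fixed)  # the ratcheting average admits fewer hires
```

Because the above-average bar starts at the fixed bar and only rises, every
candidate it accepts would also clear the fixed bar, so the fixed-bar team
can never end up smaller on the same candidate stream.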

~~~
yowlingcat
It's a substitute for a strategy: in reality, it is wishful thinking
masquerading as a strategy. It doesn't give you any strategic framework for
achieving that level by hiring, mentoring, and retaining in a competitive
landscape. I find it of limited use and anachronistic.

------
gorzynsk
I know how to fix most problems with measuring an engineer's performance. The
best solution is to remove the "manager" from the measurement. The lead
developer would speak with all team members to assess the performance of
their colleagues. They are the ones who know exactly who makes their work
harder and who helps them every day, whether through wise advice or by
leaving behind clear code and well-thought-through architecture.

Managers will argue that they have so many responsibilities that they cannot
code together with the team, but again I would say that problem may be
reduced by cutting back on higher management. The business side should talk
with engineers about the feasibility of its vision and ideas, without proxies
who are so in the middle that they understand neither business nor
technology.

------
alexbanks
Yikes.

------
backtobecks
Are we really going to pretend that managers in sillycoin valley even bother
to be objective like this?

~~~
dang
I'm sure not, but could you please stop posting unsubstantive comments here?

