

3 lessons engineering should learn from sales and upper management - epetre
http://blog.in-sight.io/3-lessons-engineering-should-learn-from-sales-and-upper-management/

======
phkahler
Metrics are very hard to apply to engineering, and to software development in
particular. This has been shown many times. Metrics became all the rage with
statistical process control, where you monitor the production of identical
widgets. The problem is that engineering is by definition not creating the
same thing over and over again - if you think it is, go use Excel to send
email and Word to browse the web. There are lots of similar things among
different pieces of code, hence design patterns, but good metrics have proven
elusive. Lines of code produced turns out to encourage people to pad their
work with redundant comments, and if you exclude comments it just encourages
other kinds of padding. Closed bugs/issues is better, but to be really
effective you need to track how new bugs are introduced: some people create as
many bugs as they close, and they would still look good if you only monitor
closures. Other things have been tried with varying success. The core problem
is that engineering is not a well-defined process and never will be.
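
To make the bug-closure caveat concrete, here is a minimal sketch (Python;
both the issue events and the names are invented for illustration) of how a
raw closure count differs from a net count that also charges people for the
bugs traced back to their changes:

    from collections import Counter

    # Hypothetical issue log: (developer, event) pairs. "closed" means the
    # developer fixed an issue; "introduced" means a later bug was traced
    # back to one of their changes.
    events = [
        ("alice", "closed"), ("alice", "closed"), ("alice", "introduced"),
        ("bob", "closed"), ("bob", "closed"),
        ("bob", "introduced"), ("bob", "introduced"),
    ]

    closed = Counter(dev for dev, ev in events if ev == "closed")
    introduced = Counter(dev for dev, ev in events if ev == "introduced")

    for dev in sorted(set(closed) | set(introduced)):
        print(f"{dev}: {closed[dev]} closures, net {closed[dev] - introduced[dev]}")

    # alice: 2 closures, net 1
    # bob: 2 closures, net 0   <- identical to alice by raw closures

By raw closures alice and bob look the same; the net number shows the
difference, but only if you can actually attribute introduced bugs, which is
the hard part.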

Another way to look at it is this: none of the marketing metrics he mentions
have anything to do with people's performance. Click rates measure the
effectiveness of a marketing strategy, not necessarily the effectiveness of a
person in marketing. You use that data to drive decisions about how much to
spend on different marketing activities. The equivalent for engineering would
be deciding how much to spend on each developer - in other words, whom to keep
and whom to fire - and that shouldn't be a highly dynamic process. Even if you
had good metrics on development, what knob are you going to turn to cause a
change in those metrics?
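
For contrast, here is a minimal sketch of the marketing case (Python; the
channels and numbers are invented for illustration) - the metric directly
answers a spend-allocation question, with no individual's performance
involved:

    # Hypothetical per-channel data: spend vs. revenue attributed to it.
    channels = {
        "search ads": {"spend": 10_000, "revenue": 18_000},
        "social ads": {"spend": 10_000, "revenue": 9_000},
        "newsletter": {"spend": 2_000, "revenue": 6_000},
    }

    # ROI per channel drives a budget decision ("shift spend toward the
    # newsletter"), not a judgment of any individual marketer.
    for name, c in channels.items():
        roi = (c["revenue"] - c["spend"]) / c["spend"]
        print(f"{name}: ROI {roi:+.0%}")

There is no comparably direct knob on the engineering side, which is the
point.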

------
russelluresti
"1\. Start tracking and improving performance based on relevant metrics

I'm talking about bounce rates, conversion rates, click rates, followers,
value per visit, return on investment on ads, customer satisfaction, and so
on.

Granted, some of those are vanity metrics, but we'll touch on that another
day."

"based on relevant metrics"; "some of those are vanity metrics"; rage quit.

Aside from that, the point he's making is superficial. There are certain very
important things that are extremely difficult to trace back to a simple
metric. This argument comes up all the time around code refactoring. You can't
tie a refactor back to a trackable metric like the ones he lists, yet
refactoring can lead to real improvements, such as fewer bugs and faster
development of future features. Those are very difficult to attach a number
to. You can't say "for every X lines of code we refactor, we cut future
feature development time by 10% and reduce bugs by 50%". It's just not
possible.

Note also that a metric like "number of bugs reported" wasn't on this guy's
list of "relevant metrics".

Neither were items like "employee happiness" or "pride in your work".

Look, I'm a big fan of data-driven decisions, but they can't be the be-all and
end-all of your decision-making process. Honestly, just the first point of
this article makes working for this guy sound like working in a development
sweatshop.

So, nope, not even reading your other 2 points because your first one is just
that bad.

------
msandford
"Monitoring the software engineering process" is quite literally management's
job. If they can't be bothered to do it and I'm supposed to do it for them,
I'd like a raise. A big fat one. And the title and expense account to go with
it.

Oh, that's not in the cards? Do your job, then. Figure it out. If you can't,
perhaps you're not fit to manage engineers.

Software engineering is an empirical process, not a deterministic one. Can't
handle that? Get out.

~~~
epetre
It's their job, but sometimes it's easier to argue or prove your point with
some clear graphs and objective metrics. I see it as a tool that helps
developers improve and as an objective view of what really happened.

Sure, it helps managers do their jobs, but I think that in some cases we might
not even need a manager at all.

------
xhrpost
I think the metrics idea for development may have value, but I also feel you
could start to answer some of these questions simply by asking the developers.
We tend to know if we're suddenly spending all our time on bug fixing, or if a
project keeps bouncing back and forth between development and QA or
management. We can often tell when something isn't efficient. At that point,
it comes down to making some tough and potentially uncomfortable decisions to
change the process across the team. That change is often difficult to make
happen. Perhaps metrics will help convince non-developers to get on board?

~~~
epetre
You are absolutely right, managers could ask developers for those metrics.
That's actually how it's done right now, in my experience.

The value I see for developers is getting objective numbers behind a feeling
they have. If they think something is inefficient, they should be able to
experiment with their process, but that needs to be done a bit more
objectively.

I think developers could also use this as proof that the problem is not
always with the team but sometimes comes from management.

Scope creep is a good example of this. A manager keeps adding stuff to the
current iteration and the dev team gets swamped. A widget could clearly show
the impact of that scope variation - something like the sketch below.
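
A minimal sketch of what such a widget could compute (Python; the iteration
data and field names are invented for illustration):

    # Hypothetical iteration data: points committed at planning time vs.
    # points added after the iteration started (the scope creep).
    iterations = [
        {"name": "sprint 1", "committed": 30, "added_midway": 2},
        {"name": "sprint 2", "committed": 28, "added_midway": 12},
        {"name": "sprint 3", "committed": 32, "added_midway": 9},
    ]

    for it in iterations:
        creep = it["added_midway"] / it["committed"] * 100
        print(f'{it["name"]}: committed {it["committed"]} pts, '
              f'added {it["added_midway"]} pts mid-iteration '
              f'({creep:.0f}% scope creep)')

A jump in that percentage across iterations is exactly the kind of objective
evidence a dev team could point to.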

I see value for the dev team, end-users and management if it's done right.

------
TYPE_FASTER
> What if we had a nice way to figure out whether: ...metrics...

Microsoft's Team Foundation Server has a lot of this built in. Check out
Visual Studio Online; I believe they have a free tier.

The way I was taught Scrum was this: metrics are there to help the team become
more efficient, not for performance evaluation purposes.

