Second, the areas given and why I believe they're not as useful as alleged.
1. Activity Days
In the example provided, "Activity Days" is measured by the count of lines of code that have been altered. First, this is easily gamed, and second, it doesn't actually tell you anything other than that an engineer is (supposedly) altering code. It also offers no insight into the other responsibilities an engineer may have during a sprint that have nothing to do with code. I can only imagine a manager's manager looking at this and constantly asking why engineer Ben never seems to be active, without ever understanding what Ben is really doing day-to-day.
2. Impact
To quote the article, "The bigger is the impact the more will be the code and the project affected.[sic]" Impact is not synonymous with risk, and risk is far more important to determine. Impact is a small part of the equation; one can use it to help determine risk. That said, risk should be determined up front as part of pointing, prior to any work being done. If a team can't properly point a piece of work while taking risk into account, then the team likely doesn't understand what it's actually trying to do and needs a research spike.
3. Code Churn
The whole point of agile is to iterate and try things, and some of those things may even be thrown away completely. If a team finds itself having iterated for the entirety of a two-week sprint on a single piece of functionality, that should be discussed in retro. The discussion should determine whether the repeated iterations added value or the team was merely spinning its wheels. This is up to the team to decide, not someone looking at a "code churn" metric.
4. Work Time
Knowing how long a work item takes _as a whole_ is far more valuable than breaking down an engineer's time into the areas described (avg PR time, avg review time, time to open a PR). The reason is that an engineer's time is only a portion of the overall work done on a work item in a sprint. A team will generally have to complete the following steps: 1) design, 2) engineering, 3) quality assurance, 4) documentation. Work items often have drastically different time requirements within those steps: a case that takes a day in engineering may require three in quality assurance, vice versa, or three in each.
Now, what should be measured.
Flow. The flow of work items, within a sprint, from start to finish through every step along the way. This requires using points to capture complexity, risk, and other agreed-upon concepts (NEVER USE TIME: hours, days, etc.).

Second, and even more important: value. This is the most under-observed and ignored attribute in agile/scrum, or any workflow for that matter. If the product owner can't place a value score on a piece of work, then the team should not be doing it. There is nothing more wasteful than producing a piece of functionality for a software product that nobody ends up using. Along with this, there needs to be a way to determine whether the value score on a piece of work was accurate, as part of a feedback cycle for the product owner and team. (Observing the life of a work item in a released product via usage metrics is one way of achieving this.) When a product owner does this correctly, it's amazing to watch. You can't even imagine how ecstatic customers can be when they're constantly bombarded with high-value functionality.
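To make the flow-and-value idea concrete, here is a minimal sketch, not a real tool: the `WorkItem` fields, the stage names, and the usage-to-score mapping are all invented for illustration, and the point is only that you track the whole item's journey (in points and stages, not engineer-hours) and then feed observed usage back against the predicted value score.

```python
from dataclasses import dataclass, field

@dataclass
class WorkItem:
    title: str
    points: int        # complexity/risk points agreed by the team (never hours/days)
    value_score: int   # product owner's predicted value, e.g. on a 1-10 scale
    stage_history: list = field(default_factory=list)  # (stage, sprint_day) pairs

    def move_to(self, stage, sprint_day):
        self.stage_history.append((stage, sprint_day))

def flow_time(item):
    """Sprint days from the first stage entry to the last: whole-item flow,
    not just the engineering slice of it."""
    days = [day for _, day in item.stage_history]
    return max(days) - min(days)

def value_accuracy(predicted_score, observed_usage, usage_to_score):
    """Feedback for the product owner: difference between the value score
    derived from real usage and the score predicted before the work."""
    return usage_to_score(observed_usage) - predicted_score

# A work item moving through every step of a sprint.
item = WorkItem("export to CSV", points=5, value_score=8)
for stage, day in [("design", 1), ("engineering", 2), ("qa", 6),
                   ("documentation", 8), ("done", 9)]:
    item.move_to(stage, day)

print(flow_time(item))  # 8 sprint days from design to done

# Map observed usage (say, weekly active users of the feature) onto the
# same 1-10 scale the product owner used; this mapping is invented.
score_from_usage = lambda weekly_users: min(10, weekly_users // 100)
print(value_accuracy(item.value_score, 450, score_from_usage))  # -4: value was overestimated
```

The negative result in the last line is exactly the feedback-cycle signal described above: the product owner predicted more value than the released feature actually delivered.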
For more information, I would recommend the following reading on this topic.
Then go here:
First, there are things I agree with and others I don't. Work-organization methodologies are a first approach to real problems, but they are not the final solution. The metrics I present are nothing more than one approach to the problem, a way to have data for making better decisions.
1. Activity Days shows more than the modified lines or affected units. If you click the blue button, it generates a report of all the activity in that sprint, showing every task performed with an associated impact value.
2. Impact is never associated with risk; it is computed by an algorithm that considers, among other things, the level of dependencies affected, the commit message, and the lines affected by a commit.
3. Code churn is nothing more than an approximation of when there has been a greater flow of changes in a specific repository.
4. The flow of all work is something we are building into scope.ink for upcoming releases. We will integrate exactly what you have described: how the entire flow of a task works from the moment it opens until it closes, and how the work is connected between software engineers. In any case, knowing the average time of tasks helps software factories make better cost projections.
Thank you for your contribution, knight, and let's keep in touch!
Looking at your responses, I realize my question is the following: what is a Project Manager expected to do with these metrics?
A single example: what's the point of knowing Impact? What action am I supposed to take once I know it? My immediate inclination is to use it to determine risk.
What you should expect from these metrics as a Project Manager is to know the workflow of your engineers, not the risk associated with each task.
In relation to your question, impact determines the level of involvement of each PR within the repository. Greater impact does not necessarily imply greater risk; in short, it implies better quality of the task.
In the end, what we need is data to make decisions based on something palpable.
I agree with your statement, "In the end, what we need is data to make decisions based on something palpable." But the part I'm not convinced about is that the data being provided is the right data to use in making a decision.
Does your response, "Greater impact does not necessarily imply greater risk; in short, it implies better quality of the task," imply that there is a relationship between Impact and quality?
I read through a number of your other blog posts and still don't have a clear understanding of what it is that a project manager (or others) should be doing with all of this information.
Can you give me a use case example of how I (as a project manager) would use an Impact visualization in my decision making process?
Thank you again for taking the time to answer my questions.
As you can imagine, we are a startup currently consolidating our product. We are at the Seed stage, so every piece of feedback we receive (like yours) makes us very happy and proud, and helps us a ton! So thank you again for taking the time to ask me questions!
Since we are validating some of our visualizations and insights, not everyone will find value in them, while others will. In our case, some of our customers are using the "Impact" visualization to gamify the coding process in some way, so engineers are more motivated and have increased the overall quality of their processes.
You are probably right that the data we provide may not be the right data for making decisions. It will depend on you and the value you find in the information we provide.
As I said, we are validating, and you are free to talk to me and write to me about your pain points if you have them!
Thanks again, knight!
First, I understand that Scope is a startup and working a lot of this out so please take my response as constructive feedback.
I suspect that you're missing my point. If you're selling this product to me, then you need to sell me on how it's going to make my life as a project manager, CTO, or engineer better. Saying that it's data for decision making is a start. But my perception of our conversation is that every time I've tried to pin down a concrete answer to "What decisions does this help me make?", you've responded along the lines of "This is the data and this is how it's created." That doesn't tell me what immediate, actionable steps this data will allow me to take, and to take better, nor how those steps will lead to more productivity and better quality.
Hopefully that gets across what I was attempting to understand.
Best of luck to you and Scope!
Sorry if I misinterpreted you at some point. Of course, I take your criticisms constructively, and I have always tried to answer your questions without the aim of just saying, "This is what you get; you draw the conclusions."
I hope my answer is more conciliatory and that you can understand our value proposition! :)
First of all, Scope's purpose is to help IT Managers have data. What kind of data?
1. Know the workflow of engineers: frequency of commits, pull requests, and reviews. With this data, we can pinpoint productivity peaks: the days engineers work the most and the time slots where they are most productive.
2. Know the duration of tasks: with historical data, software factories can better adjust budgets and get a better idea of how long a specific project for a client will take. We can see two things: the duration of tasks at the project level and at the engineer level, and we track the evolution of time by days, weeks, and months at both levels.
3. Know how tasks are related among engineers (this is on the roadmap, not in production). We want to surface data on how tasks are distributed among engineers, how reviews work among them, what level of involvement exists, how the PR workflow improves, the number of comments by type of task or by person, etc. We are working on providing data that can help better structure processes internally. With this, we want engineers to communicate more with each other and to increase motivation and the level of involvement among colleagues.
4. Impact of the tasks, at low, medium, and high levels. We establish criteria with a constantly evolving algorithm based on good practices within the code: comments, reviews, affected dependencies, modified lines, labels, commit messages, type of task, etc. With this, we help you understand the impact level of each pull request on the code. We help detect talent, lack of motivation, and the progression from junior to senior engineer. We want to connect with Sonarqube tools to see whether the impact we reflect is directly related to the technical debt in Sonarqube.
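For illustration, an "impact" score of the kind described above, a weighted combination of per-PR attributes bucketed into low/medium/high, might look like the following sketch. To be clear: the weights, attribute names, and thresholds here are all invented, since Scope's actual algorithm is not public; this only shows the general shape of such a scoring scheme.

```python
# Hypothetical impact score: a weighted sum of per-PR attributes,
# then bucketed into low / medium / high. All numbers are made up.
WEIGHTS = {
    "lines_changed": 0.01,
    "dependencies_affected": 2.0,
    "review_comments": 0.5,
    "has_descriptive_commit_message": 3.0,
}

def impact_score(pr):
    """Combine a PR's attributes into a single (invented) impact number."""
    score = 0.0
    score += WEIGHTS["lines_changed"] * pr["lines_changed"]
    score += WEIGHTS["dependencies_affected"] * pr["dependencies_affected"]
    score += WEIGHTS["review_comments"] * pr["review_comments"]
    if pr["has_descriptive_commit_message"]:
        score += WEIGHTS["has_descriptive_commit_message"]
    return score

def impact_level(score):
    """Bucket the raw score; the cutoffs are arbitrary for this sketch."""
    if score < 5:
        return "low"
    if score < 15:
        return "medium"
    return "high"

pr = {
    "lines_changed": 300,
    "dependencies_affected": 4,
    "review_comments": 6,
    "has_descriptive_commit_message": True,
}
score = impact_score(pr)    # 3.0 + 8.0 + 3.0 + 3.0 = 17.0
print(impact_level(score))  # high
```

One design consequence worth noting, echoing the earlier criticism in this thread: every weight here is a judgment call, so whatever bucket comes out is only as meaningful as the choices baked into the algorithm.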
And there are more things on the roadmap. We want to build a very cool tool, and feedback like yours helps us greatly!
Once again, thanks, knight! If there's anything else, write to me again!