TL;DR: this is a lightweight open-source alternative to tools such as GitPrime. This project started as a way to learn Rust on nights and weekends, but as an engineering leader I do believe there is a real gap between the creators of software and its stakeholders. Maybe something like this could help; I'm curious to get feedback from HN readers. I'll lay out three thoughts/assertions:
There is a fundamental gap between engineers and the stakeholders in the technology and value they create. It can manifest in many ways, including mismanagement, low feature velocity, and misaligned incentives. A big reason is that software and systems are complex and difficult to reason about, particularly at organizational scale. To make the situation worse, there are virtually no commonplace leading metrics for the health of an engineering organization. Project-tracking software and traditional velocity metrics are extremely gameable and inconsistent, and they fail to capture the deeper context in the code itself. Infrastructure- and product-level metrics generally break down for net-new feature development. A potential solution is source-level metrics: data derived from commits, pull requests, and continuous integration and deployment.
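To make that concrete, here is a minimal sketch (hypothetical, not the actual project code) of a source-level metric at its simplest: shelling out to `git log` and bucketing commits into weekly totals. Pull-request and CI/CD signals would come from their respective APIs, but the idea is the same.

```rust
use std::collections::HashMap;
use std::process::Command;

fn main() -> std::io::Result<()> {
    // One line per commit: "<unix timestamp>,<author email>".
    let output = Command::new("git")
        .args(["log", "--pretty=format:%at,%ae"])
        .output()?;

    // Count commits per coarse week bucket.
    let mut weekly_totals: HashMap<u64, u32> = HashMap::new();
    for line in String::from_utf8_lossy(&output.stdout).lines() {
        if let Some((ts, _author)) = line.split_once(',') {
            if let Ok(secs) = ts.parse::<u64>() {
                let week = secs / (7 * 24 * 60 * 60);
                *weekly_totals.entry(week).or_insert(0) += 1;
            }
        }
    }

    let mut weeks: Vec<_> = weekly_totals.into_iter().collect();
    weeks.sort();
    for (week, total) in weeks {
        println!("week {}: {} commits", week, total);
    }
    Ok(())
}
```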
The involvement of individual contributors is essential in bridging this gap. To meaningfully use source-level metrics, the people who know the code best must have full transparency and input into the process. They should also be able to tune the metrics, incorporating their own knowledge of the specific codebases to make sure the numbers are actually meaningful. Existing tools in this area are not only expensive and inaccessible, they are also generally black boxes sold to and brought in by management. Having deep roots in open source and being accessible to the engineers themselves is key.
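As an example of the kind of tuning I mean, imagine a small per-repo config that engineers check in alongside the code. This is purely hypothetical; the names and fields are made up:

```rust
/// Hypothetical per-repo tuning knobs, checked in next to the code so the
/// people who know the codebase decide what counts toward metrics.
struct MetricConfig {
    /// Path prefixes to ignore entirely (generated code, vendored deps).
    excluded_prefixes: Vec<String>,
}

impl MetricConfig {
    /// A changed file contributes to metrics only if no excluded prefix matches.
    fn counts(&self, path: &str) -> bool {
        !self.excluded_prefixes.iter().any(|p| path.starts_with(p))
    }
}

fn main() {
    let config = MetricConfig {
        excluded_prefixes: vec!["vendor/".into(), "target/".into(), "generated/".into()],
    };
    assert!(config.counts("src/metrics.rs"));
    assert!(!config.counts("vendor/serde/src/lib.rs"));
}
```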
Source-level metrics are ineffective for stack-ranking individual engineers. Engineers share a deep aversion to having individual performance measured by source-level metrics, and that aversion is well founded: the contribution of a single code change to the overall value of a system is impossible to quantify accurately. At the individual level, practices such as pair programming confound things further. In aggregate, however, these metrics become more meaningful and can serve as leading indicators for an entire team or organization. Moreover, it is in an individual contributor's best interest to have some form of data-driven decision making at the organizational level, versus the alternative.
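A toy illustration of why aggregation helps (the numbers are made up): individual weekly commit counts swing wildly with reviews, pairing, and meetings, but the team-level sum is a much steadier signal.

```rust
fn main() {
    // Made-up weekly commit counts for three engineers over four weeks.
    let per_engineer: [[u32; 4]; 3] = [
        [12, 2, 9, 1], // heavy review week, pairing week, etc.
        [3, 11, 2, 10],
        [7, 6, 8, 7],
    ];
    // Individual counts vary 10x week to week; the team total barely moves.
    for week in 0..4 {
        let team_total: u32 = per_engineer.iter().map(|e| e[week]).sum();
        println!("week {}: team total = {} commits", week + 1, team_total);
    }
}
```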
This looks interesting. I agree with your point that there is a huge gap between engineers and stakeholders, and a lack of understanding of how value is created and how it is attributed. I would like to understand how source-level metrics can help me better understand my own contributions to a project. Do you have examples/stories where you used these metrics with your team to understand contributions?
Also, a comment about ELK. I recently found a full-text search project called Toshi: https://github.com/toshi-search/Toshi . It is written in Rust and provides (almost) the same functionality as ES, but it is lighter than ES and probably easier to deploy. Do you have any thoughts on it?
Since December I've been working very hard on measuring developer _output_, software CapEx, and organization-level indicators, all with as little overhead for devs as possible. Your thoughts do an amazing job of distilling the challenges of measuring software development without succumbing to the financial-reporting request of "just have them fill out a timesheet".