It seems to me that companies might want to implement OKRs as a way to align everyone on the same goals and check progress. The actual numbers aren't too important since everyone is free to miss their targets. OKRs don't give as much steering control as some might hope.
You just need to make sure that this doesn't mean people are consistently "lucky" or "unlucky."
I was on a team where app updates were deployed using a canary system. A small percentage of users (say, 1%) received the update first, and the team watched for incoming crash reports from that cohort. If it looked good, the update was rolled out to a few more people, and the process was repeated. This lets you catch a problem while only negatively impacting a relatively small percentage of customers.
The problem occurs when the calculation that determines which cohort a user belongs to is deterministic in the same way for every rollout. In this case, it was based on the user's internal ID. That means some users always get updates first and deal with bugs more frequently than everyone else. Conversely, some users are so high in the list that they virtually never get an update until it has been tested by a wide user base, so their experience is consistently stable.
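For what it's worth, the usual fix is to salt the hash with something per-rollout so the ordering reshuffles each time. A minimal sketch in Python, assuming a numeric internal user ID and a rollout name to use as the salt (both hypothetical):

```python
import hashlib

def in_cohort(user_id: int, rollout_name: str, percent: float) -> bool:
    """Deterministically assign a user to a rollout cohort.

    Hashing the user ID together with a per-rollout salt (here the
    rollout's name) keeps assignment stable within one rollout while
    reshuffling which users go first on every new rollout.
    """
    digest = hashlib.sha256(f"{rollout_name}:{user_id}".encode()).digest()
    # Map the first 8 bytes of the digest onto [0, 1).
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return bucket < percent / 100.0

# Example: roughly 1% of users see the canary, a different 1% per rollout.
canary_users = [uid for uid in range(100_000) if in_cohort(uid, "v2.3.1", 1.0)]
```

Assignment is still stable within a rollout (a user doesn't flap in and out of the canary), but who goes first changes from one rollout to the next.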
Or have the username be a number whose binary representation encodes all the feature flags. Then you only need one username for each combination you want to test.
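Something like this, assuming a fixed flag order; the flag names here are made up for illustration:

```python
# Hypothetical flag order; bit i of the numeric username toggles FLAGS[i].
FLAGS = ["new_checkout", "dark_mode", "fast_search"]

def flags_from_username(username: str) -> dict[str, bool]:
    """Decode a numeric test username into a feature-flag combination."""
    bits = int(username)
    return {name: bool((bits >> i) & 1) for i, name in enumerate(FLAGS)}

# Username "5" = 0b101: new_checkout and fast_search on, dark_mode off.
print(flags_from_username("5"))
```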
The important part is stability: if your usernames can change, then they aren't stable, so you shouldn't select on them.
I think it's a good reminder that most things you think of as unchanging that are also directly tied to a person... aren't unchanging. Or at least, for any conceivable attribute, someone probably has a compelling reason to change it.
That's why you have internal user IDs instead of keying off data directly provided by users.
Will it cost an extra lookup? Lookups are cheap, and if you really need to avoid one, you can embed the mapping in an encrypted cookie so you can verify that you approved a given name->id mapping recently without doing a lookup.
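A rough sketch of the cookie idea, using an HMAC signature rather than encryption (signing is enough for integrity; encryption would additionally hide the contents). All names here are hypothetical:

```python
import base64, hashlib, hmac, json, time

SECRET = b"server-side-secret"  # hypothetical key; never ship it to the client
MAX_AGE = 3600                  # accept mappings approved within the last hour

def issue_cookie(name: str, user_id: int) -> str:
    """Sign an approved name->id mapping so it can be verified without a lookup."""
    payload = json.dumps({"name": name, "id": user_id, "ts": int(time.time())})
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(f"{payload}|{sig}".encode()).decode()

def verify_cookie(cookie: str) -> dict | None:
    """Return the mapping if the signature is valid and recent, else None."""
    payload, _, sig = base64.urlsafe_b64decode(cookie).decode().rpartition("|")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    data = json.loads(payload)
    if time.time() - data["ts"] > MAX_AGE:
        return None
    return data
```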
Wait, we're talking about maliciously injecting bugs into your employer's software so they have the maximum impact, right?
Clearly, making sure that 1% of all teams get fired for being unable to run unit tests, then slowly ramping that up by a few percent each review cycle, is a good strategy.
Ideally, the probability of breaking would drop off exponentially as you moved up the org chart. Something like "p ^ (1 / hops_to_director_of_engineering)" would work well. The trick would be getting the dependency to query LDAP without being detected...
I've used the hash-of-username+string trick before for a flag. I used it to replace a home-grown, heavyweight A/B testing framework that had turned into a performance bottleneck.
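The trick is roughly this (a sketch, not the framework I replaced; the names are invented):

```python
import hashlib

def variant(username: str, experiment: str, variants: list[str]) -> str:
    """Assign a stable A/B variant by hashing username + experiment name.

    MD5 is fine here: this is bucketing, not security, and it's fast.
    """
    digest = hashlib.md5(f"{username}:{experiment}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same user always lands in the same bucket for a given experiment,
# but buckets reshuffle across experiments.
print(variant("alice", "checkout_redesign", ["control", "treatment"]))
```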