OKRs Aren't Going to Fix Your Communication Issues (craigkerstiens.com)
324 points by saisrirampur on April 1, 2019 | 165 comments



In case others like me don't know: OKR is Objectives and Key Results. https://en.wikipedia.org/wiki/OKR

I honestly have never seen that acronym before, but apparently it became popular through Intel's management practices?


I hate it when people use TLAs without defining them.


Ditto. Head-scratching, elitist jingoisms tend to signal an antipattern of business theater.

Plainspeak > newspeak.


I added "what_the_fuck_are_okrs_craig" to the end of the URL in the hopes he gets the message when he looks in his 404 logs...


unless you keep spamming it to make up a significant amount of logs for the site, i doubt anyone would notice...


It might be useful to include a link so people from here can spam it. Like this: http://www.craigkerstiens.com/2019/03/29/okrs-arent-going-to...


Are there people who do this?


I look at 404s (and other errors) in prod. Helps find front end bugs and any malicious requests. A 404 should be exceedingly rare in a mature application (depending on the application of course and users' ability to create AND delete data) so it's often a useful metric for figuring out if something's wrong. I don't usually tail the logs and use monitoring instead, but same effect. After a certain scale, I imagine this becomes harder to do and requires different tooling.
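The 404-watching approach described above can be sketched in a few lines. This is not the commenter's actual tooling, just an illustration assuming Common Log Format-style access lines (the sample lines and paths are made up):

```python
import re
from collections import Counter

# Count 404s per path from access-log lines, to surface broken links,
# front-end bugs, or probing. Assumes roughly Common Log Format.
LOG_LINE = re.compile(r'"(?:GET|POST|HEAD) (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')

def count_404s(lines):
    hits = Counter()
    for line in lines:
        m = LOG_LINE.search(line)
        if m and m.group("status") == "404":
            hits[m.group("path")] += 1
    return hits

sample = [
    '1.2.3.4 - - [01/Apr/2019:10:00:00] "GET /index.html HTTP/1.1" 200 512',
    '5.6.7.8 - - [01/Apr/2019:10:00:05] "GET /no-such-page HTTP/1.1" 404 162',
    '5.6.7.8 - - [01/Apr/2019:10:00:09] "GET /no-such-page HTTP/1.1" 404 162',
]
print(count_404s(sample).most_common())  # → [('/no-such-page', 2)]
```

In practice you would point this at a real log file (or, as the commenter notes, let a monitoring system do the counting) and alert when the rate rises above the baseline.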


Admittedly I'm grumpy today. I sometimes do this "404 trolling" to yell at people.


the knowledge of this has made me happy. :)

Thanks.


HTML has an <abbr> tag - please use it.


That takes effort though, and people use TLAs to reduce effort.



ETLA :p


Me too. It seems like every other week people will be spouting some three-letter acronym that you've never heard of. This forces you to ask what it means, and they will always look at you like "what? You don't even know?"


This is a good indicator of a team I don't want to work with.


Translation look ahead buffers? That brings back a lot of memories from my computer architecture course.


Is that not tlb?


Hmmm, that actually makes much more sense.


I feel like everyone missed the joke


no jokes on HN, this is serious mum


If you work in ICT, it's unavoidable.


They're in the same bucket as those who use light gray text on a white background.


Isn't it great that TLA is a TLA and FLA is a TLA, but FLA is not an FLA?



real pedants just pronounce it.

rhymes with claw! :-)


My company uses them in a cargo-cult sort of way. I have yet to find anyone at my company who can explain to me what an OKR really is, but apparently I'm supposed to have them entered into our HR system.

Previous jobs have used them too. They're pretty popular I think.


Number one way for management and so-called leadership to befuddle their way into the arms of Goodhart's law.


I think Google or one of the other big tech companies also uses OKRs instead of the more traditional KPIs (Key Performance Indicators).


OKRs are the Conjoined Triangles of Success IRL. Lots of companies around the Bay use them (including Google).

They think it's a godsend because suddenly they can measure their progress in some way that is not just "story points per sprint", ignoring the fact that this measurement only makes sense within the system itself and may not reflect reality at all.


A good thought exercise is to ask yourself, "Is it possible an engineer achieves every OKR yet is the worst employee at the organization?" and "Is it possible an employee achieves NO OKR and is the best?"

I feel I could think of examples. There are so many dimensions of valuable work (no security holes, being nice, not reinventing the wheel, documenting well, long-term stability/refactorability, code being reusable by other teams) but I imagine OKRs can't cover all those facets.

It almost seems as fruitless as trying to measure friendship by OKRs.


For this reason, Spotify now uses OKRs for teams and not people:

https://hrblog.spotify.com/2016/08/15/our-beliefs/


At Time Out, we used to set OKRs at the team level, then promptly discard them, along with the carefully constructed plans that went nowhere, when faced with reality.

So now we're setting the Objectives, and measure the KPIs that are defined to reflect the objectives. The actual means to move the KPI, which would be the KR (key results) in OKRs, are no longer defined in quarterly planning sessions, but are selected and updated by the team during sprint planning.

This gives the teams a large degree of independence in what they are doing while still maintaining business alignment and meaningful measurements.


From what I understand, that is the idea. KRs are the KPIs you are going to focus on each quarter.


Where I work right now, they use OKRs at the division level.


Both are possible, yep. But OKRs tend to be written down pretty often (quarterly is what I've seen, but maybe monthly is a thing too). If a bad employee is cruising or a great employee's work is not being recognized quarter after quarter, that's a direct managerial problem and not really what OKRs are trying to solve.


> If a bad employee is cruising or a great employee's work is not being recognized quarter after quarter, that's a direct managerial problem

Management introduces OKRs so that they don't have to solve actual managerial problems: the OKRs _are_ their solution! That the system misses a bad employee cruising, or a great employee's contribution, is a footnote. After all, when the system is using an OKR to guide the actions of employees, going against it is basically career suicide.

I personally don't like OKRs or any performance metric that isn't directly tied to customers' actions (e.g., growing customer numbers, or growing revenue etc).


Which is why OKRs are untied from promotions at Google, for instance (in theory, that is =))


Well, most people don't set personal OKRs, or at least not formally, so it's probably true that theory matches practice.


Why don't you give examples?


Aside from it being somewhat nonsensical, our industry has been saying KPI for decades, as have other industries.



Anecdotal story: I know a company that is too messed up to measure/quantify key indicators, so as a solution they switched from KPIs to OKRs.


I've only experienced OKRs at one company, and I felt a sort of a weird disconnect between OKRs and actual work/performance. On the one hand, OKRs were presumably critically important, but on the other, they weren't directly tied to individual performance. I still have trouble wrapping my head around that.

And I found those ratings (0, .4, .7, 1.0 for hitting a stretch goal) just a sort of weird self-delusion, like setting your watch 15 minutes early to ensure you'll be on time. You can fool yourself a little, but eventually it stops working, and so, for example, .7 becomes the de facto "real" target.
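One way to read that grading scale (this is my own interpretation for illustration, not the commenter's company's actual rubric) is that measured progress snaps down to the nearest published bracket, which is how .7 can quietly become the "real" target:

```python
# Hypothetical reading of the 0 / .4 / .7 / 1.0 scale: measured progress
# toward a stretch goal snaps down to the bracket at or below it.
GRADES = [0.0, 0.4, 0.7, 1.0]

def grade(progress: float) -> float:
    """Snap progress (0..1) down to the nearest bracket at or below it."""
    return max(g for g in GRADES if g <= progress)

print(grade(0.85), grade(0.69), grade(1.0))  # → 0.7 0.4 1.0
```

Under this reading, anything from 0.7 to 0.99 earns the same grade, so there is little incentive to push past 0.7.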

Secondarily, I found that as a team lead, to the extent that OKRs were stressed, any non-OKR related work became highly disincentivized. Refactor? Write more integration tests? Hell no, not if it doesn't directly impact OKRs. We had stories in the backlog that really should have been done because they would have helped other teams and yielded a positive return, but anything non-OKR was just dropped on the floor.

Third, they didn't really provide any value to the team that I could discern. We didn't have to look at our objectives every day to know what to do. We had typical releases/epics, etc... to do, and on a day to day basis, the OKRs just receded into the background. In theory, I assume that the OKRs are there to guide which releases/epics/stories, etc... you do, but in our case, we had a pretty clearly defined product already prior to introduction of OKRs. So the OKRs were all just sorta "OK, finish this thing we're already working on."

That said, the company was new to them and in the process of learning. Perhaps we did them wrong, or perhaps I missed the point.


> Secondarily, I found that as a team lead, to the extent that OKRs were stressed, any non-OKR related work became highly dis-incentivized. ... I assume that the OKRs are there to guide which releases/epics/stories, etc... you do, but in our case, we had a pretty clearly defined product already prior to introduction of OKRs.

Good OKRs provide measurement and alignment. Measurement is handy -- the idea is to say 'I want to do O, and I believe X, Y and Z are sufficient to achieve it'. Since all these variables are measurable, you can see whether the problem is that you failed to achieve X, Y, and Z, or if your plan wasn't actually sufficient to achieve it.

Alignment is a bit more nebulous. For small teams, it may be unnecessary. But as orgs grow, there's room for mission creep and contradictory goals and efforts. Good OKRs are designed to be able to connect up the org chart. Staff should be able to explain how their contribution serves the mission, rather than their own insular fiefdom. And if they can't, it might indicate they're not doing what the rest of the org is planning around! If your engineers are focused on quality metrics like number of tests or SonarQube metrics but management doesn't see the platform as anything other than a proof of concept to find product-market fit with, that's a problem.

What your experience describes is a feature factory[1], and I don't think OKRs really change that. You can think of OKRs as management laying out how the backlog should be prioritized, and the fact that nobody in management was pushing for quality is a sign. Either they didn't know how to measure quality and impact, or they felt it wouldn't serve the overall mission. And these are all good things for everyone involved to see and recognize.

[1]: https://hackernoon.com/12-signs-youre-working-in-a-feature-f...


> Alignment is a bit more nebulous. For small teams, it may be unnecessary. But as orgs grow, there's room for mission creep and contradictory goals and efforts.

In my experience this is the single biggest thing that trips up organizations which have grown, or been acquired. It also seems to be the thing that people can't wrap their heads around because it feels difficult. If whole divisions aren't aligned then it doesn't really matter how well your team performs because the lowest common denominator division becomes the limit.

This is especially noticeable around acquisitions, where the incoming team naturally wants to operate as if nothing has changed, but they may be really out of alignment with the acquiring company. With enough time and management pressure alignment might occur, but it's normally a painful process of the acquiring company taking over aspects of the acquired company. Rarely, if ever, are acquisitions comfortable, but the other outcomes can be a bit better.


> That said, the company was new to them and in the process of learning. Perhaps we did them wrong, or perhaps I missed the point.

Hard to say, but I used to do this for a living as a consultant and FWIW much of what you are describing sounds like a poor selection of OKRs of the kind we were often fixing.

> So the OKRs were all just sorta "OK, finish this thing we're already working on."

Case in point. My engagement manager once joked that if I proposed a "% complete" metric like that I'd be removed from a project. I _think_ he was joking.

> And I found those ratings (0, .4, .7, 1.0 for hitting a stretch goal) just a sort of weird self delusion

That's another symptom of poor goal selection. If you can miss a goal and somehow receive partial credit it was a poor goal. If you can exceed a goal and get extra credit it was a poor goal. Targets should be selected because they need to be hit, not just to give you something to aim at.


>That's another symptom of poor goal selection. If you can miss a goal and somehow receive partial credit it was a poor goal. If you can exceed a goal and get extra credit it was a poor goal. Targets should be selected because they need to be hit, not just to give you something to aim at.

Do I understand what you are saying correctly? OKRs should not be roofshots or moonshots, but just commitments? If so, that's not what the majority of the literature I've read on OKRs says. Hmmm....


Granted, most of my time was with balanced scorecards, but just reading the Wikipedia[0] page here, I don't see anything about OKRs being moonshots?

[0] https://en.wikipedia.org/wiki/OKR


Why didn’t you make OKRs out of refactoring, integration tests, etc.?


IMO, it is difficult to write an OKR for this sort of work that is focused on measurable impact.

"Write an integration test for xyz" is not really what OKRs are designed for. You can do that (and lots of people do), but it tends to be frowned on. The reason for this is that it's not clear how it rolls up into the larger company OKRs. Where does "wrote integration test for xyz" fit into the company's "increase monthly active users by x%" KR?

I think the general trickiness is that OKRs are inherently backward-looking. A lot of technical improvements have to do with risk mitigation. The risk of nasty bugs (testing), the risk of future functionality being slow to develop because of poor architecture (refactoring), etc. But it isn't clear (to me) how to write OKRs for mitigating future risks.


> "Write an integration test for xyz" is not really what OKRs are designed for.

This gets it backwards. This is a task. Why are you performing this task, especially without it being tied to specific development? If you can't justify doing it with your/your team's/your company's stated OKRs, you actually shouldn't be doing it, even if you feel you should because 'it's the right thing'. Even looking at it this way gets it backward.

You should be looking at your OKRs and figuring out what you should do to get there. But, you might have to break it down a little further.

Indeed, a KR of 'increase monthly active users by x%' is not directly actionable by an engineer. But an engineering department can come up with its own OKRs that fit in that direction.

For instance, to achieve that goal it might be necessary to develop new features. However, if 70% of the engineering time is spent in bug-fixing then they're not going to be able to do that. (Do you know where your team spends its time? That might be worthwhile to figure out).

So, an objective of 'Increase development time for features' (or some such) might be considered. If you find you're dealing with a lot of bugs, one of the key results might be to reduce bug reports by 50%. To do that, you might argue for adding integration tests so that you can change code and catch issues before they get to production. In the future you'll then spend less time on bugfixing and rework, and can work on the new features that will entice new users and fulfill the company objective.


How do you measure the impact of rare but very expensive flaws without inventing phony metrics to pretend to fit the tasks you think need doing?

"Don't have a catastrophic data breach"

There isn't a middle ground where there is a linear correlation between a serious compound failure and fixing bugs or writing tests.

You could come up with a methodology for a "security score" and track that. However it is both a transparent workaround for having specific bugs / tasks in OKRs and it is likely that your invented metric will be bogus and will lead you to do irrational things.


IMO, you don't. Operational security isn't (or shouldn't be :)) a specific _business goal_ for a company. By that I mean it's not a specific objective to achieve: it should be a base principle, part of the engineering (and operating) principles that you apply to everything you do.

Put differently: you have a way of working that includes applying measures for security. For instance, you might do threat modeling at an early stage of your project/iteration. Those sessions may result in concrete work to perform. You might also perform a security test at the end of an iteration that confirms things have been covered. That adds to your workload and affects your ability to achieve goals in a certain way. You apply this way of working to move toward a goal.

So, over time, you will find out that applying these principles has a certain outcome. It may be positive ("we're not seeing security issues and development pace is OK") or it may be negative ("security tests are coming back bad", "we're hacked", or "development pace is snail-like"). When it's negative, that means you have to do things that will slow down achieving your goals.

That kind of negative feedback should lead to a poor(er) performance score on the KR when reviewing them. It's this kind of feedback that should factor back into your decisions moving forward to achieve the goal. In this example, if you have negative findings, you might choose to educate engineers on security matters, streamline your engineering principles, increase the team size and/or even replace people.


If you put off security work until "we're hacked!", it's too damn late. You can't bolt on security after the fact. Similar for testing, refactoring, etc. Your approach here is a recipe for putting off important work until it's too expensive or flat out too late.


I agree. I shouldn't have added 'we're hacked'. It distracts from what I'm trying to say. Your conclusion that I'm describing a recipe for putting off work is exactly the opposite of what I'm trying to say.

My point was that you should have a certain set of sane engineering principles (security being one area they should cover). They should be sufficient by today's standards. These principles are not, and should not be, business goals: they are tools for achieving goals in a responsible and reproducible way.

I am also saying that if you get feedback that these principles are holding you back, you should include them in your evaluation when determining next steps to move forward, without dictating a specific manner of dealing with them; that's up to the specific situation at hand.


> "Don't have a catastrophic data breach"

There is no planning process that can effectively deal with long tail existential risks like that. It's silly to fault OKRs for not solving it when nothing else has.

Best case scenario is something along the lines of:

Objective: Reduce the annual carry cost of IT data breach insurance

KR: Do project X to the insurance co.'s satisfaction to lower our premiums

KR: Prevent regressions in existing compliance by running regular audits

While this outsources the scoring problem to an insurer, at least they have multiple customers to amortize over and extract some data from.


When would you say you are "done" for a given period with "not having a catastrophic data breach", regardless of OKRs? I know some tasks, such as this one, are never really done. But for a given period (such as a month or a quarter), it does not help to have a task that just says "Don't have a catastrophic data breach": you have to turn that into something you can actually do within a given timespan, and that works fine with OKRs (and should be done without OKRs as well).


My objection to or misunderstanding of OKRs lies exactly here where there are no satisfying answers to this question.

If a sizable chunk of work can't be covered by OKRs, then why are you using them? Or how do you use them with the understanding that they cover work and achievements only partially and inconsistently?


That is an actual objective at Google. Supporting key results would be things like performing a certain number of audits, conducting pen tests, code reviews, etc.


OKR work is a subset of engineering time. We deal with things like interviews, meetings, tech talks, training, migrations, and maintenance work outside the framework.

You should always be making progress on OKRs, but it's not the only thing you do. Engineering has its own overhead; not every hour is billable.


I'd say OKR work is/should be all work. Each task should be aimed toward moving forward on the goal. It doesn't mean however, that you can directly link one specific task to the OKR. It's just part of the strategy to achieve the goals.

Let's consider a KR of 'increase active users by X%'.

* Interviewing/hiring isn't going to move the needle on getting X% more active users by itself. Having more engineering time available for new (critical) feature work might, however, and you have to do interviewing to get new hires.

* Training in and of itself isn't going to move the needle either. But having a more knowledgeable engineering team probably will, by being able to move faster or better. You have to do training to improve knowledge.

* Maintenance work might not move the needle on active users FORWARD (or it might, if you have a lot of bugs that prevent users from actually using your stuff) but it might prevent it from going BACKWARD.


Refactoring is a really tough sell for an OKR. We use OKRs, and we have had a couple of quality initiatives in the past that were tied directly to OKRs, and even then it's hard to get that work prioritized. At the end of the day, as long as users aren't complaining, product is going to index on features and value added. I've never been somewhere that would let you make a goal around redoing something that already works (even if it works like shit). I think the reason for that is it's hard to come up with a key result that can be measured once the work is done, other than "it still works the same".


I've been thinking about this lately, mainly OKRs around developer experience. That's an objective for which it's easy to come up with plenty of key results that actually tie into refactoring, retiring old systems, and upgrading existing ones.

Currently, where I work, we have a fixed allocation % of dev/test time for technical improvements. In practice it works really well if you approach it strategically and use it to achieve things over the long term: things you can chip away at that steadily improve the health of the code, architecture, and product. I find that if it's used on a purely operational basis, where you just ad-hoc improve stuff you find, it's not super effective, but it's pretty solid when the team is on board with leveraging it to achieve better and better DX over the long term. Otherwise "refactoring" barely ever gets done and products slowly suffer bit rot.

I think OKRs are a perfect fit for such a scenario. We don't use them, but I kinda use them internally as part of my mental model.


Should not be a tough sell. Refactoring, from an OKR perspective, means you are protecting the ability to ship. Which means that there are tacit implications to OKRs. Nobody would want you to prioritize some rando OKR that would cause code quality to drop to the point that it impacts shipping (because now there are, say, bugs that shut off entire functional areas or something stupid like that). OKRs have built-in assumptions and these tend to be around protecting existential abilities of the company, like shipping code.


This is an interesting point. I wonder if it might be easier at larger companies, since you can use metrics like "we are receiving fewer complaints that our system is hard to integrate with or understand".

You might improve on some measure of "velocity", but on a small dev team that's so noisy as to be nearly meaningless. I guess you could track employee frustration as a primary target.


OKRs are mostly tied to things that move the needle with respect to money or user engagement/metrics.

That more or less mandates building new products, features and services.

Refactoring, tests, etc. don't fit anywhere in the OKR model.


The way I have handled this and other types of tech debt is to factor them in as development efficiency.

Particularly for startups, as they grow, one engineering team objective is to remain efficient as the team is growing. The way to achieve that? By refactoring code and introducing automation (like CI, CD, etc).

This gets reflected in the cost of developer time per feature/project, which can be represented as a KR in the OKR framework.


Right, so you do it outside the model. We have a % of engineering capacity reserved for non-customer-facing work.

Unfortunately it is mostly spent responding to infrastructure / dependency churn, just treading water rather than getting ahead, but it helps. No one is going to criticize you for refactoring what needs refactoring, as long as you are also making progress on your OKRs.


OKR is not a comprehensive planning tool either. It doesn’t have to explain all the work that you do, directly.


In my experience, OKRs should be set, in part, by negotiation with the other teams who are interdependent with your own.


It can still become tricky, as each team's OKRs don't carry the same weight.

It's not unusual that a team holds a requirement for several other teams, but that requirement is not as valued by them as their own team mission. Negotiations are done assuming they accomplish everything planned, but of course in practice it doesn't go that way.


When I took over at my new lab (40 FTEs), I implemented OKRs, just between my direct reports and me. It quickly became apparent who gets shit done and who doesn't. It also became apparent how overloaded some folks were, and how little some understand their jobs. It helped me see what I needed to focus on and what I needed the lab to focus on.

I am also in a massive, MASSIVE, organization with literally acres of people dedicated to formal communication processes. I submit that OKRs were handy because they forced formal communication that revealed problems outside the official formal methods. It's a hack. And in the realm of formal comms, it's a particularly low-friction, low-cost-of-adoption hack.

10/10 will use again.


I had the opposite experience. OKRs were a complete waste of time. I was required to waste hours trying to come up with OKRs, none of which had anything to do with what my actual day-to-day work entailed. The OKRs were then promptly ignored so I could get back to getting shit done. At the end of the OKR term, of course, they'd been ignored. This had the effect of being demotivating, because I was being asked to make up these OKRs "because at this company we have OKRs" but then had zero time and zero motivation to pursue them, since there were people waiting for my real work to be finished.

I think discussing this requires getting clear on what OKRs are. At the company I was at they were broad personal goals set every 3, 6 or 12 months related generally to your career at the company. They were not related to any specific project or deadline.

An example might be "Do more public speaking". Whereas generally I'd speak if the opportunity came up, I wouldn't seek out opportunities, and AFAIK the OKR didn't make me seek more out. It was just something I wrote to get management to sign off that I had written some OKRs.


Sounds like the process was not done well at your company. I have my OKRs set up so that I spend almost all my time working on them. I do usually have one or two "personal OKRs" like the one you described, but most of them represent the team's core work: "put major feature X into production", "address P1 customer issues within 3 days", whatever.

It gives you a nice defense when some seemingly urgent but less important work comes along. "We could do that, but it would cost us this OKR which we agreed at all levels was one of the most important things for our team to do."


Why were you unwilling to lean into the process?

You could have said: these OKRs have to describe my actual work... here are my attempts to document what that means, and what to look for if I do a good job at that.

If they had pushed back on that, I think you would have a leg to stand on, but as described it sounds like you were just unwilling to engage.


Could you talk about introducing OKRs into a large organisation where they didn't exist before? Why did they become useful, as opposed to things like Slack, for example?

I'm genuinely struggling to understand the place of a construct like OKRs in a world where Slack-like tools exist.


OKRs and Slack are completely different things?

One's a chat app; the other is a system for setting objectives and measuring against those objectives.


Not really. OKRs are a communication mechanism for communicating goals to a larger org in a distributed way.

Does it still make sense when the org is brought closer together by Slack?


No, they are completely different things. Real time chat doesn't make people feel like they understand the overall goals of an organization. If someone says in Slack "we really need to get x done because the customers want it" and other people decide to talk about StackOverflow's hilarious April fools day joke for 5 minutes, some are going to miss that "we really need to get x done".

Your OKRs put that sort of thing in one place. The process to determine what the OKRs are ensure that everyone's on the same page about the objectives. People in an organization want to feel like they know what they're working towards. When they don't they get stressed out and unproductive.


I don't fully agree with this notion, because implicitly doing this also surfaces the gaps in achieving those targets.

Most OKRs are dependent on someone else: the Android app guy will not achieve his OKR if the API guy is off doing something else. In that way, the whole process of setting and achieving goals is probably more inefficient under an OKR structure than thrashing it out on Slack (most likely in a dedicated channel called "planning").


If you only have six people, maybe :] But we're commenting on a post that's talking about the best time to introduce an OKR process. At my company we don't have an "API guy" and an "Android app guy", we have 30 or so different teams that my team interfaces with in a company with thousands of engineers. We can't just "thrash it out" on Slack at a company of this size.

You need to try and think about what things look like when the company is much, much larger.


OKRs aren’t supposed to directly depend on anyone outside the team or organization. I, personally, can’t fix my contracting office so I can’t set an OKR to improve production rates of our (few) physical products when contracting is always the bottleneck (supplier agreements for materials we need). If we used OKRs, with respect to that element we could only set OKRs that they could achieve on their own (quality, design updates, production rates when they do have material). A couple levels above me are the people who can directly impact the contracting people and set about improving their performance, though.

If you’re setting them and always missing them because of entities with other objectives, then your organization is obviously misaligned and needs to evaluate its goals.


OKRs can help solve this problem, if they're done well. Your OKR that depends on the API guy should derive from some common OKR that's owned by a guy up both your reporting chains. You can point at that to help make sure you get unblocked.


We use OKRs at the team level, with teams of 6-8 people being cross-functional. So the Android and API persons would be working towards the same OKRs.


OKRs are goals.

Slack is a chat app.

You can chat over Slack about goals, but Slack isn't goals and goals aren't a chat app.


It's more like a way of helping set and monitor goals in a distributed fashion. It's a framework within which to operate.

You can be on Slack or email or around the water cooler all day long. I don't think that makes that much difference. It's not as if, as soon as you bring Slack into an organization, all this structure emerges that helps you focus your efforts on what to do. Chat app or no chat app, people are always coordinating with each other to complete task-level work.

OKRs help you think first strategically (longer than 1 year) then tactically (between 1 month to 1 year) and then operationally (day to day, week to week). Chat apps don't do that for you.

They just aren't related.


I came in thinking I had one laser-focused mission: run the lab. But I had seen an org use OKRs and was interested in deploying the system. So I found a format I liked (Excel, objectives in bold, key results under each, quarters pushing out in columns to the right), explained the plan to them, invited them to come get me up to speed on things. Turned out there were over 70 discrete projects ongoing trying to implement or change things. Some from above, some initiated from below, some inherited from my predecessor who hadn't bothered to mention them to me. I had started 1:1s every week with each of 5-6 folks and got their input. Each person ended up with their own sheet. It took a while to sort out that some people had different names for the same things. There was clearly some gradient descent on my part. 70 projects is the lowest local minimum I could find. It took several epochs to find.

One thing I did, I don't know if this is part of OKRs anywhere else, is I really wanted numbers on all the KRs. So I added columns for the success value and data type: binary, ordinal, normalized, continuous, whatever it is. That way, at least I know what kind of math to apply. And it's a nice checksum that you've discussed a KR enough to actually compute on it: (binary, success=1, 0) or (ordinal, 37, 12) is super useful.
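A rough sketch of that kind of KR bookkeeping (hypothetical field names and helper, not from any OKR tool; the (binary, success=1, 0) and (ordinal, 37, 12) tuples come from the comment above):

```python
# Hypothetical KR records mirroring the (data type, success value, current value)
# tuples described above -- one row per key result.
krs = [
    ("pass inspection", "binary", 1, 0),
    ("projects consolidated", "ordinal", 37, 12),
]

def score(dtype, success, current):
    """Normalize a KR to [0, 1] based on its declared data type."""
    if dtype == "binary":
        # all-or-nothing: 1.0 only if the success value is hit
        return float(current == success)
    # ordinal / continuous: fraction of the way to the success value, clamped
    return max(0.0, min(current / success, 1.0))

for name, dtype, success, current in krs:
    print(name, round(score(dtype, success, current), 2))
```

The point of carrying the data type alongside the values is exactly the "checksum" above: the scoring function refuses to be ambiguous about what math applies.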

I let the 1:1s slide to every 2 weeks once we had a visual. Some prefer keeping the weekly. Some just stopped showing up. Hint: I've learned the 1:1 no-shows are also the ones who mail it in generally.

We have some federally mandated inspections coming so I dropped a big red line on that quarter. That's a nice visual to help say "it's gotta happen by then, or we're not doing it until after".

Laid out in front of me, some of these arrows clearly needed more wood. Some needed to be discarded. Some personnel changes were needed. Personnel changes are in progress.

Let me be very clear, my direct reports absolutely manipulate my behavior. That's their job. I've got plenty of clinical and research work. My role as the lab director is to synthesize what they're telling me and recruit resources for them, cut deals with other departments, go to my boss, whatever it takes.

And, when my boss's boss's boss put out word that my boss was getting promoted and they were looking for my new boss, I could have stepped into that breach, gunning for a promotion. But it's frankly a lot early for me to be looking for another promotion and I knew I had lots of work to deal with locally. So instead, I wrote a recommendation for a colleague and encouraged her to apply. And guess who my new boss is?

The upshot is my former colleague, now boss, is totally on my side. For her, this was a phoenix-rising sort of opportunity. She's on a mission from God to be the best boss ever. In our first meeting (remember, first impressions) I showed her our OKRs. She had never heard of OKRs, probably still doesn't know what that stands for, but she knew, instantly, that I'm tracking "all the stuff" like white on rice. You know you did it right when the boss sees your OKRs and completely changes the subject to start talking about problems in other areas. She's not worried about the lab's direction. She was, and is, plenty stressed with other departments she owns. But if I tell her we need help, she's very clear we need that help.

And we've got some of the things done. Which is nice. It's a little early to start computing an annual success rate, but it won't be 0. I think most people hear about OKRs and setting the bar so high it makes you nervous, 60-70% success rates, etc., and get worried. I think without OKRs, or something like that, a lot of people don't realize how many high bars they've set for themselves. 70 projects for 40 people is a little concerning. And I won't even tell you how many times I said "Hey, it's okay, let's dial this back a little" or "Can we buy more time on this? Who do I need to talk to?"


It might not be possible in your situation, but I wonder if there's some way to make broader categories that you can roll a few individual OKRs into. If nothing else, it would help you in reporting up: these are our three broad important objectives as a research group, and this is how we're tracking on them. Your subordinates would keep their granular OKRs, and you could think in terms of how they serve your general goals.

Also, we do use numbers on OKRs (at a well-known modern tech giant). We try to normalize to a real number in [0,1] indicating roughly the probability of successfully meeting the goal by the end of the quarter.

EDIT: to clarify, we do track something quantitative for each KR: error rate, response time, customer satisfaction score, whatever. Then we normalize to a probability so that we can scan the columns quickly and compare like with like.
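A sketch of what that per-KR normalization might look like (illustrative numbers only; in practice every KR gets its own start and target):

```python
def to_probability(current, start, target):
    """Map a raw metric onto [0, 1]: 0 at the starting value, 1 at the target.

    Works whether the goal is to increase (target > start) or decrease
    (target < start) the metric; out-of-range values are clamped.
    """
    progress = (current - start) / (target - start)
    return max(0.0, min(progress, 1.0))

# e.g. error rate started the quarter at 2.0%, target is 0.5%, currently 1.1%
print(round(to_probability(1.1, 2.0, 0.5), 2))  # 0.6
```

Once every KR (error rate, response time, satisfaction score) is squeezed through a mapping like this, the columns become directly comparable at a glance, which is the point the comment above makes.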


As suspected it's useful only to the higher-ups and specifically to manage drones who need military discipline, same as scrum.

If it has an acronym it's useless to functioning contributors. The original purpose has disappeared with the name.


Ah, so are you a higher-up or a drone? I'm confused. Oh, no, you must be the "individual contributor". Right?

I'd say I have some pretty smart people working for me but large orgs are large because they have lots of work that needs doing and the big chunks are big enough the people doing the work tend to self-organize. Some work never stops (e.g. healthcare) so you have to establish shift work. Does that mean those people are drones? They might be young, they might enjoy a sense of belonging and camaraderie, even when things suck, and maybe they would benefit from some additional experiential learning of discipline, but does that mean they're drones? Did you consider the fact that my direct reports themselves have advanced degrees?

A word of advice ... the worst thing I could have done to you right now is ignore you and let your opinion fester. Maybe step back, have all the disdain for me you want, but I hope you at least read this far.


I implemented OKRs among my small team of 10 devs who are all pretty awesome rockstars and good contributors, but still early in their careers.

It turns out that OKRs made the employees happier, because it served as a “more abstract” form of feedback.

Before, they were receiving feedback, but it was too specific and narrow. Things like “this class can be refactored to X pattern” or “You missed a few unit tests covering X Y Z cases.”

But when we started tracking things like time to ship a new feature, bugs per implemented feature, hard deadlines for certain product releases, etc. it really helped to give some better feedback.

You could tell people who were writing code very fast to slow down in general, and write more stable, thought-out code.

For example, there was one dev who tended to have substantially more bugs in his code than the others, and slower ship times. It turns out that he was always being assigned to more complicated Tasks and Stories, and the poor guy was overworked to shit and he didn’t even realize it.

It’s got nothing to do with being a drone, and everything to do with not being able to improve until you measure.

Do you think Usain Bolt doesn’t have a stopwatch when he runs?


I'm curious why this is OKR specific. Shouldn't you be tracking those metrics anyways?


Recently our sales guys got some hefty clients, and have started to bring in serious money.

In order to keep up, we needed to turn development time from just a metric, into an OKR. Moreover, our clients sometimes put pressure on us to ship out X feature immediately, in response to a competitor. This means that sometimes the 10 devs are working on different features for different clients.

So, just like the book, “Measure what matters.” Don’t measure useless things like commits per dev or features shipped last quarter, the important thing for our division is development time.

Presumably when we get development times down, our OKRs will change to other things, like overall client satisfaction. But for now, the top concern is to meet sales volume, so our OKR is development time.


Everybody's acting like OKRs are this enormous heavyweight process. Why? You just write down what you're planning on doing this quarter in a prioritized list. The purpose is to broadcast your view of your priorities, so interested parties can ask that they be adjusted. At the end of the quarter you look back and review what you actually accomplished. It's really not that big of a deal.

The process gets heavier at the level where OKRs are getting aggregated across multiple teams, or teams of teams, but that's a cost paid by managers, not by "drones," so I don't see why line employees should be concerned with it.

FWIW I spend about four to six hours a year thinking about OKRs, and that's assuming I'm paying attention for the entire OKR team meeting (fat chance).


> four to six hours a year thinking about OKRs

As a non-IC, I'd spend that much time per week on OKRs. Writing the OKRs for a team involved planning out the next quarter's topics, which required aligning the team (with individual meetings and group ones) for the 2-3 weeks at the end of the quarter. Then drafting them, aligning with the department/VP's topics for the quarter, finishing them, getting feedback from other team leads, a dry-run presentation for the dev manager meeting, then a live run-through for the department all-hands, mid-point scoring updates, end-of-quarter scoring, and back to the beginning.


> The purpose is to broadcast your view of your priorities, so interested parties can ask that they be adjusted.

Concrete team stories are usually quite clear. OKRs force you to encode a bunch of them into objectives and key results. This produces utter nonsense if the duties and opportunities are generic or fragmented. Not every team can write a clear OKR which makes it obvious what's happening, especially if you're servicing multiple departments.


Not at all. If you achieve your OKRs regularly, you can expect to get promoted. If not, you can expect not to get promoted. Being clear with it helps everybody. If you don't get promoted although you achieved most of your OKRs, you should run from that team / company.


> OKRs present a heavy-weight answer to the problem. OKRs tend to require hours and maybe even days to determine what are the right goals and metrics.

If a team leader doesn't have a few days to decide on the strategy for the team for the next quarter, I have no idea what more important thing she is doing.

John Doerr wrote the great book about OKRs to explain how important they were in setting the strategy for Intel when it went out of the memory business and focused just on processors: achieving that in 1 quarter is only possible with very clear communication.

If it's not rare enough (quarterly or yearly) or not short enough (a few items), it doesn't work; it's just a wish list. I have seen them work and not work (when the whole team knew that achieving all the stretch goals was impossible).


> John Doerr wrote the great book about OKRs…

The book, in case anyone else is curious: https://www.whatmatters.com/


> If a team leader doesn't have a few days to decide on the strategy for the team for the next quarter, I have no idea what more important thing she is doing

Typically, focusing the team on the strategy for the next 24 hours.


That sounds like leading by micromanaging...


Depends if the leader is in the weeds contributing or not.

My experience with quarterly goals at an early startup was that it made an artifact for what was most important at that given moment. Only later were we able to prioritize a quarter ahead.


For a startup an example objective would be an MVP launch and a key result would be an MVP being tried out by 10 people in a coffee shop, or getting 100 views from various blog posts. Something like this would set an expectation for the whole team (sales, marketing, engineering...) for the quarter. It shouldn't be complex, but it has to be something well thought out and serious.

I would go crazy if I got a new strategy every 24 hours. The details of the execution can change, but the objective and the key results shouldn't change that often.


Oof yeah. Top-down "strategy" in 24-hour increments is a recipe for people feeling jerked around.


I assume you're referring to being consistent - keeping the team focused on the (predefined) strategy, not getting distracted and going off on tangents...

Seems pretty reasonable to me.


John Doerr's book is "Measure What Matters: How Google, Bono, and the Gates Foundation Rock the World with OKRs":

https://www.whatmatters.com/


Despite the name and intention, in my experience OKRs are a prime candidate for cargo-culting culture into an org and using process as a proxy for an actual focus on outcomes.


Yes! That has been my experience at a (now long since failed) start-up.

We didn't communicate top-down, or cross-functionally very well. The executive team, with lots of support from HR, rolled out a really heavy, very process-centric OKR system and assumed that by announcing that Google uses OKRs and providing a portal that we'd start aligning.

It was an abject failure - not because (IMO) OKRs are a bad idea, but because the leadership teams made the mistake of confusing process for communication.

After the big announcement, it was mentioned only ONCE ever again. Seriously.

My team (of six) and I made use of it, but it was very difficult to align it with the goals of, say, our chief revenue officer; someone we met about three times in four years. We went back to using team & personal goals, creatively using our backlog and one-on-ones because there was no input, no updates and no communication about the OKRs.

The start-up failed because it wasted time chasing tactical low-value (tangential) deals instead of delivering against a viable, sustainable strategy. Great place to learn some hard lessons!

I'd use OKRs again for sure - both for their intended purpose, to help avoid the process-as-communication-proxy trap, and to gauge the nature of the organisation implementing them.


I agree. That’s the only type of use of OKRs I’ve ever seen or heard of, across several jobs and big, small, young and old companies.

People who get shit done do so in spite of OKRs or other formalisms like Agile. OKRs do not help orient work towards what matters, maintain accountability, alert management to schedule slippage, or unify team efforts. OKRs are cargo cult metrics for middle management to bend the ear of higher management to argue and gladhand for bonuses & promotions.


When you say Agile is a formalism, you mean Agile with a big-A? Like the very formal approaches to Scrum or SAFe?

Because outside of those, agile is actually the anti-formalism. It's a whole slew of tools, techniques, methods, and philosophies to choose from for your team and organization. Offices that formalize it are the same ones that'd formalize Waterfall or Vee model or Rational Unified Process or any other approach. And they'll suffer the same consequences regardless of which one they use.

Offices that understand it as a toolkit can actually get value from it (AKA, those who read and internalized the initial manifesto and mostly avoided the consultancies that came out after).


In theory I’d agree, but in practice agile is _always_ just a source of metrics so that middle managers can create Dutch Books out of project outcomes.

I read a good way of putting it once, “when everyone misuses a tool, beyond a certain point, it’s the tool’s fault.” I think this fits the realities of agile well.

Separately a lot of people will look at this or that pathological issue in agile and say that’s not “real” agile, ignoring that this is just a No True Scotsman fallacy by which you’d vacuously define agile as “all good things” or something.

The main pathology I’ve seen in agile (across half a dozen companies, large & small, with or without formal agile training, etc.) is that it is paradoxically highly inflexible. If certain research tasks need to be a 4-week deep dive that cannot be chopped up into separate tickets on a sprint cadence and cannot be time-boxed, there’s no way to handle it (this happens all the time on research teams).

If one team needs to work on a 3-week cycle all summer instead of the company-wide 2-week cycle, it can’t be done. If there is no way to estimate some points for a certain task, you still are forced to go through the motions and fabricate misleading numbers anyway.

Even the core agile manifesto has to be ignored sometimes. Sometimes sticking to the plan actually is more important than responding to change, and you have to turn down business or tell customers no even if it costs you.

Agile can be used well. It’s just exceedingly rare that it is, to such a degree that we need to admit something’s wrong with agile itself given how easy it is to subvert agile into a bad system.

At this point a lot of people reply by saying, well what alternative is there? I don’t like this because it presumes there has to exist some named alternative that doesn’t suffer agile’s limitations, but no such official method has to exist and that doesn’t mean agile should be used.

Instead, just use some mixture of common sense and planning based on the specific personnel working on the project, their preferences and styles, and randomly stealing things from agile or waterfall or whatever else on an as-needed basis. Don't give it a name.


How does an OKR not orient and account for effort if each level’s objectives are the higher level’s key results? If my boss has an objective that will be met with measured output from my department, and my boss sets my OKRs with me, how can I escape making that output an objective and measuring contributions to it? What is your assertion specifically based on?


Higher level OKRs are generally left vague while being claimed to be unambiguous and precisely scoped.

That way it is a subjective matter as to whether the outcome was satisfied or not (so that success in an OKR system can remain political and not meritocratic).

This follows from upper levels of the hierarchy to lower levels, so that in the leaf nodes, where KRs hit specific projects, it becomes an incredibly stressful negotiation between the pragmatic reality of what needs to get done and how much bandwidth there is, versus a stack of forking paths of ambiguous outcomes, with managers wanting to hedge their bets on all possible outcomes.

If OKRs were treated more like prediction markets, and bonuses or rewards were tied directly to quantified outcomes, then it would be like what you say (and also like what OKRs are supposed to be, rather than what they pay lip service to).


The part that makes the most sense to me, and that I’ve experienced, is the stressful transition of management into measuring specific projects. However, I think it’s good if the OKRs force those conversations to happen.

And obviously if the organization is too dysfunctional, the OKR setting process cannot be honestly conducted and will never work. I don’t think the average organization is this bad.


In my experience, only very few companies use formalized processes like this in a remotely reasonable way, surely less than 5% of companies. In the rest, which often include name brand, successful companies, the situation is just pure dysfunction.


Good people make any process look good.

No matter how amazing the process is, without good people, you’ll fail.

I like OKRs, but I benefit from being with smart people.


One way to think about this stuff is to ask: "Do lawyers and accountants and doctors and playwrights and mathematicians have these kinds of schemes?" Answer: probably not. They're handy for some jobs but not others. FWIW this kind of oddness has been going on since I entered the workforce in 1986. BS5750, ISO9001, TQM, KPI, and on. Part of the landscape and not really worth getting too excited about.


Good point. Thinking about this more, I'm wondering if OKRs are just a byproduct of abstraction. Management needs to feel like they have a way to have "traction" on things they don't actually have any real understanding of, e.g. the code. Similar to how devs think Java is what the processor actually does.


We just got done with trying OKRs for a quarter at my company. It was overall a really positive experience. We had a small team of people working to push out a new product. At the beginning of the quarter, we defined what success looked like for that new product and came up with the key indicators for it.

As a developer, it was really fulfilling to take time to really think through what it would take to build out a truly quality product. I spent a lot more time thinking through the things I normally wouldn't put much effort into. We built tools to help our support team understand the new product. We put together documentation. We took time to think about how to train the company on the new product. We wrote automated tests to make sure that we could sell the new product as a stable solution.

Everything was focused on our overall objective to deliver a quality product. Our team had people from development, marketing, sales, product, and quality assurance. It was great to see a cross-department team unified to get something amazing done. Just adding a data point that there are ways to make it work well.


One way in which I've seen OKRs used effectively is as a defense against the type of middle or upper manager who is constantly coming up with new ideas or tasks. But what if we did $shiny_thing!? Sorry, that's not in our OKRs this quarter, let's have a meeting to plan for next quarter.


It acts as a defense in both directions, too, since the manager’s boss will want the manager to stick to objectives that fit with the manager’s boss’ OKRs.


OKR and other management frameworks work well in environments that would perform well even without them (self-organizing, enthusiastic, competent, non-threatening, and driven employees). Having just the structure in place wouldn't guarantee much. Google's success using OKRs was a side effect of its past employee composition and the fact that OKRs got in the way of doing great things the least.


This is good. I've found communicating OKRs steadily on a cadence (weekly), as well as showing the team the progress towards a goal at an all-hands, to be effective. It's rare for someone to miss all the all-hands meetings.

Comms, progress, and priorities, from the company level down to the tactical execution level, are easy but require discipline.


People moan on and on about OKRs, but I am a developer and I honestly like them.

Personally: I do not resent that management wants to know what I am up to. And it is helpful for me to be working on projects in such a way that I understand their greater context.


They also won’t fix leadership’s lack of vision, ability to focus, and courage.


It's strange, because the company I work for just started implementing OKRs last week. It actually happens creepily often that I do something and then some related article pops up on HN a few weeks later. I do read HN every day, though, but still.

I guess it shows that we severely lack creativity and are all being manipulated.

Personally I don't think OKRs add any value because they force us to focus on measurable results at the expense of unmeasurable results which might actually be more important.


>It actually happens creepily often when I do something and then some related article pops up on HN a some weeks later.

https://en.wikipedia.org/wiki/Baader–Meinhof_effect


What is an OKR? Whatever it is, clearly didn't help the author to communicate clearly. ;)


Here's a step to fix your communication issue. Don't assume people know what OKR is, and explain it briefly at the start of your article about OKRs.


I have never been involved in an okr system that wasn’t an utterly pointless waste of time that no one but hr and back office people took seriously.


My experience is similar. We all know what work needs to be done, but instead of just doing it, we end up having to come up with how it aligns to OKRs or KPIs or whatever they’re calling them that quarter.


There is a lot of toxic cargo cult OKR / Agile / all other forms of management.

The point of OKRs is to make sure you are actually working on relevant projects and not burning time on something nobody actually wants or needs. Happens much more than you would think.

No amount of process is going to fix brain-dead management.


What was a common theme among all the systems that you think contributed to their pointlessness?


We do quarterly OKRs. Quarterly deadlines are completely arbitrary and my actual deadlines aren't tied to the quarters.

So I (and everyone around me) don't take them seriously. We take the actual deadlines seriously. The only thing OKRs seem to do is provide direction to the higher ups about what my team is up to.


Yes, so you have no actual incentive to focus on OKRs. You're doing the work required to keep the company running, but the quarter's OKR goals only get attention if you think they will help your actual job. Is that what you mean?


I don't think anyone expects OKRs to do more than that. Direction to the higher-ups is a valuable thing, and it's not trivial to reliably get it.


They were almost entirely unconnected to the work I was actually doing, and nobody cared about them until 3 days before they were due.


Similar experience. We (the devs) quickly realized we'd need a ton of time to create useful metrics for our dev teams and that we'd need a shit-load of historical data we didn't have to give them any context—lots of small- to medium-size projects of many different types, hard to reason about one based on another except with a mountain of data from all of them to treat in aggregate. And it might still just be tea-leaf reading. Probably would be, in fact. And of course we didn't have the time to do that. So we found ways to tie social media marketing stuff (views are the easiest thing to measure!) to higher-level OKRs and did that instead, usually rushing to get it done right before the deadline.

It was not a good use of software developer salaries. An initiative to at least start collecting the data we needed so it might be useful in 2-3 years could have been, but some OKRized version of "start collecting data to use in OKRs 12 quarters from now" wasn't something anyone would get behind and was pretty much impossible to tie to the higher-level OKRs that were being set.

Whole thing was a game to find something easy to measure that could be made to look useful, whether it was or not. It wasn't some huge disaster or anything and I can see how maybe in exactly the right organization it might work, but there's no way it was the right place for that company to spend their money.


Thank you, I felt like I was in the twilight zone at the last company I worked at that used these.

I started working from coffee shops until they just dropped the OKR system, and every manager who'd wasted the whole company's time on it had "moved on for personal reasons".

I had forgotten about this repressed nightmare; I'll remember it next time I'm fawning over the hip location and catered meals.


Using email efficiently requires training and discipline, being able to:

-- at a quick glance, quickly "triage" big lists of unread messages, and not be put off by several hundred messages and grab the phone instead

-- reply shortly and to the point (being native to the culture and language helps but is not a given anymore)

-- master the art of attracting the attention of very busy people of various kinds, to get them to at least open the email and read some of it

-- not turn email into Slack in disguise: check if the question already has an answer available somewhere else, and not become nervous if the other side doesn't reply in a second

-- master the email client sufficiently to configure pre-sorting of messages, and trust its actions

I'd say an environment where email works as a reliable channel for top-importance communication (such as goals and priorities) probably already doesn't need OKRs.


Jacob Kaplan-Moss had some practical advice in response to this post which I found interesting:

https://jacobian.org/2019/apr/1/talk-about-performance/


The point he makes is that OKRs (alone) aren't going to save you. The same way any technology or trend (Scrum, Docker, latest JS library...) isn't going to fix your problems. You must change your process/culture along with it.

We took on OKRs a while back, and have created a process around it - https://info.container-solutions.com/hermes-container-soluti... (Apologies for the gated content)


OKRs were always waved around as some magic secret sauce that's going to revolutionize the free puppy distribution market. In practice they ended up just being another reason for directors to bother us about the "error burn rate" of services that only have errors when the upstream services they rely on are having issues, with no way to programmatically remove that from the "error burn rate". I hope this helped management discover how much the systems of that job all rely on each other.


I feel like the article misses a powerful aspect of OKRs: the ability for people to be creative, but in a cohesive manner that results in focus.

I noticed that my team hardly did any of the work we planned for last quarter, but we were still able to achieve many of our key results through (different) work.

In an organization where you don't have the power to influence your own work, OKRs provide more context than what will be used, so people may feel like it's a waste of time.


Goals are great, sure. And yes these are just goals with lots of wrapping.

But I am tired of having the latest management book wisdom foisted upon me by product and management. Why do engineers always need to be dragged into this stuff? Why can't they be content to have their endless meetings and share their "decks" alone? As Christopher Hitchens was fond of saying about religion: "just leave me out of it."

/rant


"OKRs tend to require hours and maybe even days to determine what are the right goals and metrics."

The author implies that this is wasted time. It is not.

Figuring out why you do what you do and how you measure the success of that in a way that leads you to the right result IS hard but extremely valuable work. It forces you to deeply understand the problem you're solving and the challenges you face.


In my experience OKRs are the latest fashion of BigCo MBA bullshit -- same thing with a new name and with equally bullshit results.


I like having a static quarterly document, in a super easy to access place. It must be one page max and lay out the "theme" of the quarter and supporting initiatives.

Every team weekly summary email has a link to this document in the footer. Tracker — or whatever you use — has epics et al aligned with this document.


OKRs can be really great but only if they are used properly and are fully integrated into the process from top to bottom.

Just relabelling old KPIs as OKRs is pointless (just, why would you do this?)

Just having OKRs at a division or company level is pointless (no buy-in or engagement from regular employees)


What is OKR?


Example of a more traditional KPI (Key Performance Indicator): System downtime must be below 0.01% every quarter.

Example of an OKR (Objectives and Key Results): By the end of the quarter, implement a new fast-response process to reduce issue resolution time by 25%


That isn’t an effective OKR. Try this:

Objective: system downtime must be below .01% every quarter.

KR: implement new fast-response process (binary measurement).

KR: reduce mean issue resolution time from 60 to 45 minutes (linear measurement of progress with .75 set to 15 minutes and 1.0 set to 20 minutes of reduction)

In this case, your objective is likely a key result of your boss, so it makes sense that it's the same measurement. You're being measured on something that matters. One of your key results is purely based on effort. The other might have a bit in it that's out of your control, but also lets you benefit from other ways you and your team find to achieve the resolution time reduction.
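That linear KR could be scored with something like the following sketch (the 15- and 20-minute points are taken from the KR as written above; the function name and clamping behavior are assumptions):

```python
def kr_score(old_minutes, new_minutes, full_credit_reduction=20):
    """Linear KR score: minutes of reduction in mean issue resolution time,
    divided by the reduction that earns full credit, clamped to [0, 1].

    With full credit at 20 minutes, hitting the stated target (60 -> 45,
    i.e. a 15-minute reduction) scores 0.75.
    """
    reduction = old_minutes - new_minutes
    return max(0.0, min(reduction / full_credit_reduction, 1.0))

print(kr_score(60, 45))  # hit the 45-minute target: 0.75
print(kr_score(60, 40))  # stretch outcome: 1.0
```

This is what makes the KR partly effort-based and partly outcome-based: any extra reduction the team finds beyond the target still moves the score.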


At least the way Google did it, your first example could totally be an OKR. Not every OKR is some deliverable.


Almost like KPIs are a measurement of state, and OKRs are a measure of change.


OKRs can be great, or they can result in releasing software that isn't ready yet (see Google's Buzz for context).

My suggestion: let your team members craft goals, and help shape those goals into deliverables that help them and the company.


I worked for a bit at a startup that started OKRs and I found the experience super enjoyable; to the point where working in a more relaxed environment (for lack of a better description) is kinda hard these days.

The two things people and teams struggled with the most, though, IMHO, are:

A.) Coming up with a good OKR that met longer term, higher level company priorities that intersected your work.

B.) Coming up with a good OKR in terms of measurable key results.

It can be tough and time-consuming to get the measurements in place necessary to gauge the results, or even to make a case for the OKR in the first place (incidentally, this is the dirty secret of SRE, IMHO). On occasion we would proactively put data collection in place partly to help launch and score future OKRs.


True. I've worked at several companies where OKRs were implemented and I found them valuable. Everyone does them differently, and people still have trouble understanding and implementing them.

To address some of the pain points with OKRs I started building my own SaaS about a year ago: https://simpleokr.com


> (incidentally this is the dirty secret of SRE IMHO)

I think I have a hunch, but what exactly do you mean by this?


Agreed. There is a peace that comes with knowing what you are working on is aligned with the company, and that management has good visibility into how you are doing.

And if you take your job seriously, it is really not that hard to keep up with.


John Doerr's book references Intel and Google heavily to gain credibility for the OKR methodology. Can someone from either of those organizations chime in with their experience?


OKRs aren't supposed to fix communication. They are supposed to guide your team to make day to day decisions that support the agreed goal.


One issue I see... people don't actually read their email.

https://blog.prepp.io/news-stories/youre-not-imagining-it-yo...


True that!


true that!



