When "OKRs Don't Work" -- Thinkydoers Ep 10

okrs thinkydoers podcast Jun 12, 2023

Fun fact: While I am an OKR activist, I'm not an OKR dogmatist.

If OKRs "aren't working" in your organization, I'll be the first to support your decision to change course to an approach that works better for your organization.

Frequently, though, I see organizations reach a stage where they're frustrated with their OKR implementation -- and the time of year I see the "OKRs aren't working" phase kick in is at mid-year, when we're looking at our annual company strategic plans or OKRs that may have been set six or seven months ago and questioning: "What's the value of OKRs, if they've gotten this far off course?"

I'm having these conversations with a number of clients right now, so if you're a client and you think this episode is about you, what you're experiencing may be part of what inspired this episode, but this episode is not about you -- it's about a lot of you. 😂

Many teams reach this stage and then overfocus on what's not working in their OKR operations: tracking, reporting, and their OKR software or systems. What's most valuable when your organization reaches this stage is to take advantage of having six months' worth of data to assess our progress thanks to our Objectives and Key Results, AND, examine and apply our half-year of learnings during our mid-year reset, to improve our Objectives and Key Results, our OKR behaviors, AND our OKR operations instead of abandoning them.

If the job we've hired OKRs to do is to increase our clarity and objectivity about what shared progress and success mean -- which is the highest and best use of OKRs -- then when organizations reach the "OKRs aren't working" phase, we have a chance to take a good hard look at our OKRs themselves, and, our behaviors. One of the most common issues I see in this phase is that teams are not yet letting OKRs do that job they were hired to do: they may still be stuck in task & activity quicksand, and have not yet taken the step into aligning on actual objective measures of progress and success. If that's the case, it's not that your OKRs aren't working: it's that you're not yet using OKRs!

This episode is timely: many of us are navigating our mid-year resets and struggling with frustration; here, I share a few areas of focus you can consider to infuse your OKR reset with new energy and optimism, to help you refresh your Objectives and Key Results to help your organization finish the year strong!

Key Points From This Episode:

  • What happens in the “OKRs aren't working” stage.
  • When organizations typically reach this stage and why.
  • The root of skepticism that presents itself when it seems OKRs aren't working.
  • What should be addressed within the organization when the “OKRs aren’t working” stage hits.
  • Why abandoning OKRs altogether isn’t the solution, and what to do instead.
  • Assessing the opportunity presented by the “OKRs aren’t working” stage.
  • The most frequent contributors to the “OKRs aren't working” problem.
  • Why setting only activity-based goals within our individual control is discouraged.
  • Reasons why OKRs should be set as shared outcomes, not activity plans.
  • Which measures should be included in an organization’s key results.
  • What can be learned from OKR non-achievement.
  • The importance of a tightly structured and organized tracking system for mature OKR implementation.
  • If your OKRs aren’t working, reach out!


“Those “OKRs aren't working” indicator questions come in a variety of forms, but they typically boil down to a single root of skepticism.” — @saralobkovich [0:03:20]

“We don't solve the issues we're experiencing by focusing exclusively on the operations of our OKR program.” — @saralobkovich [0:05:58]

“The place where I see the lowest progress on key results when I examine large systems of key results is in the activity KRs.” — @saralobkovich [0:12:40]

“What we planned and executed doesn't always actually connect to achieving our most important outcomes. We may do all the things and still miss on our most important measure.” — @saralobkovich [0:13:26]

“Only objectively measurable progress indicators and measures of success should be in our key results.” — @saralobkovich [0:14:07]

“We need to do the effort of making sure that our key results reflect our most important progress measures, and our most important outcome measures instead of activities.”  — @saralobkovich [0:16:10]

“When our OKRs aren't working, that means we've gained important data and information we may not have otherwise, that we can use to create a clearer and more coherent path forward.” — @saralobkovich [0:22:35]

Links Mentioned in Today’s Episode:


Red Currant Collective

Sign up for Red Currant Collective’s newsletter

Red Currant Collective on Instagram


Sign up for all info on RCC launches!

Full Episode Transcript:





[00:00:04] SL: Welcome to the ThinkyDoers Podcast. ThinkyDoers are those of us drawn to deep work, where thinking is working, but we don't stop there. We're compelled to move the work from insight to idea, through the messy middle to find courage and confidence to put our thoughts into action. I'm Sara Lobkovich and I'm a ThinkyDoer. I'm here to help others find more satisfaction, less frustration, less friction, and more flow in our work. My mission is to help change makers like you transform our workplaces and world. Let's get started.




[0:00:49] SL: There is a stage of maturity in most long-term OKR deployments that I find frustrating, amusing, and rewarding all at the same time. I see it most often when we've worked with OKRs in an organization at multiple levels for more than a few quarters, where we've hit a wall in terms of OKR learning and uptake in specific parts of the organization. To put it really simply, I call it the "OKRs aren't working" stage. In this stage, our early adopters might be off and running and making some performance progress. But we have slower adopters who outright refuse to enunciate measurable outcomes, whose focus is entirely on the activities they have planned. Sometimes for really good reasons, by the way.


In this stage, the OKR program team is usually still pulling the organization through their OKR updates each month and quarter. In some parts of the organization, we're hearing grumbling about having to track OKRs that are different than the team's actual priorities and mandatories. Our L1 leaders may be seeing progress in terms of having more performance data than they did before. But they also might be frustrated that OKR reviews still contain a large volume of status reporting and spin, instead of a clear focus on progress, learnings, and blockers.


Inevitably, it seems, this is when we reach mid-year, with half the year gone, and the pressure now mounting to achieve what we must this year, which sometimes looks really different now than it did six months ago when we initially wrote our annual OKRs for the company. When we sit down and look at those OKRs, many of which were written quite a while ago, without the benefit of the six months of information and data that we've gathered since, sometimes we think, "Wow, a lot has changed in the last six months, and these aren't necessarily pointing us in the right direction anymore."


In organizations that are warmer to and more invested in OKRs, there are some gentler key questions that tend to pop up, that signal when we're at this stage. But in organizations that hold skepticism about the OKR implementation, the next logical thought might be: OKRs just aren't working. Those “OKRs aren't working” indicator questions come in a variety of forms, but they typically boil down to a single root of skepticism: how are our OKRs so far off from where we need them to be? Other forms of that question you may hear, or frankly be thinking yourself: how can our OKRs be so far off after we've put in this much effort to create and track them? Why does our OKR work lead to so many KRs that are in the green when our most important outcomes are in the red?


We also see this question: what's really most important to achieve is some critical outcome measure that we're now able to enunciate, but that's not in our OKRs. How could our OKRs be missing something that important? I mean, the quick answer is: we have six months more data now than we did six months ago. Then, we also hear the question of, why are people still confused about how their work matters when we've put so much effort into creating OKRs?


These are all good valid questions that have answers rooted in what we can learn and in behavior changes that may need to occur within the organization. But when these questions arise, the attention typically turns to our OKR operations, and our OKR core team is put on the hot seat. We've invested all this time and effort, why isn't this working better? At this point, everyone in the organization has worked with OKRs long enough to now believe that they're the expert. Ideas and beliefs about what's wrong with our OKR process and how it should be improved become abundant.


Often, a new consultant is called or an OKR platform engaged. The focus zeroes in on the “how” of OKR operationalization, which frankly rarely actually improves the situation. Sometimes, that's just a big distraction about how we're working instead of getting at the real root causes, and behavioral issues, and patterns, and habits that are holding us back. In some cases, organizations abandon OKRs entirely, sometimes with nothing else to take their place in terms of the job that they were originally hired to do. That rush to scrutinize our operational approach, and at worst, the abandonment of OKRs, really breaks my heart. That's not to say that our ops can't improve. Often, our ops can, but we don't solve the issues we're experiencing by focusing exclusively on the operations of our OKR program.




Our operations can often be improved, but these questions point to issues with our organization's behavior, beliefs, and operating norms, not only our OKR operations. We certainly don't solve the issues that we're seeing here by abandoning OKRs. We can solve these issues by getting curious instead of judgmental when we reach the “OKRs aren't working” stage. Because when organizations can see the opportunity presented by the stage, they're on the precipice of graduating into full OKR maturity with an approach that works for their business.


The stage points to several key issues, including and beyond our ops. First, lack of trust, courage, and safety to make the shift from activities we control to important shared outcomes that we influence. A second issue is that there's often a lack of accountability for leaders and teams that fail to enunciate objectively measurable key results. Third, we often see these issues arise in organizations that are applying an evaluative and judging mindset, instead of a learning mindset, to our OKRs. Sometimes we see this dynamic when we have complex or burdensome OKR operations that are overly focused on providing value for our senior-most leader, but that might be a time suck with little value provided to the rest of the organization.


Let's break those key issues down one by one. First, one of the most frequent contributors to the "OKRs aren't working" problem is in organizations that lack trust, courage, and safety to actually shift from activity to outcome thinking. As an OKR coach, the lion's share of my time is spent gently and directly coaching clients on the difference between activities and outcomes, on the risks of estimating progress subjectively versus tracking objectively measurable progress, and on creative ways to quantify progress and success even in hard-to-measure or difficult-to-instrument areas of the business.


Despite that gentle and direct coaching, I can't force clients to create measurable key results. I give them feedback when they're identifying activities instead of outcomes, and I work hard to get them to move those activities out of the key results so that the term of art "key result" maintains its important meaning as a quantifiable objective measure of progress or success. But ultimately, those OKRs that clients are looking at and saying "that's not working" are the OKRs that they wrote themselves. Many people, even after a few quarters of this, remain uncomfortable or outright blocked trying to enunciate objectively measurable key results. There are many good, or at least strongly conditioned, reasons for that resistance. That's a separate blog post or podcast episode all on its own.


But most frequently, I see the attachment to setting activity-based measures remain strongest among people who, for whatever reason, just aren't comfortable setting goals they don't individually control. But let's play that out and think about a world where we only set goals that we individually control and that we know we'll be able to achieve. If that's how we operate, then each person is in their own little appearance-of-control silo. We're setting ourselves up to preserve the status quo, not to grow and innovate. We're also then turning away from collaboration and the lift of the whole being greater than the sum of its parts, in favor of focusing individually on activity that keeps us busy but may not actually be helping us achieve what's most important.




When we set activity-based goals based on what's within our individual control, we wind up with huge gaps in clarity around what's ultimately most important to achieve together, how we'll know with confidence, objectively, that we're making progress on our most important outcomes, and, of the many activities we have planned, which ones really must be prioritized now to help keep our most important outcomes on track, including our long-term strategic outcomes, which may not yet be reflected in our OKRs.


This dynamic is not an OKR problem. This shockingly frequent dynamic is a cultural problem that can only be corrected by insisting on OKRs being set as actual shared outcomes, not activity plans. A second common contributing factor to this stage of maturity that I see is lack of accountability for leaders who fail to make that shift to measurable key results. Resistance to setting objectively measurable or quantifiable key results happens for a lot of reasons, including some very good ones. Some parts of the business are more difficult to instrument for objective measurement. For example, sales and marketing departments are typically rich in instrumented analytics; product development and engineering departments typically are not.


Sometimes, teams that have experienced negative consequences in the past for failing to achieve goals might be gun-shy about signing up for goals that they feel are not completely within their control. This dynamic can create a stalemate in the organization, with the more instrumented departments setting goals around what they can measure, not necessarily the most important outcomes to achieve improvement on, and the less instrumented or more gun-shy departments setting goals around what they believe they can do, in terms of activity that's within their control.


Let's notice that neither of those cohorts is then actually setting helpful objectives and key results. The former are focusing on what's possible to measure, not what's most important to actually achieve. The latter are stuck in activity quicksand, where what happens all too often is that circumstances change and key results become outdated, or see no actual progress made. The place where I see the lowest progress on key results, when I examine large systems of key results, is in the activity KRs. In some organizations, I've seen 0% progress across their entire set of activity-based KRs, or activities that are called KRs.


At best, those activities are completed and we get a sense of satisfaction for completing what we planned and checking them off the list. But remember the question that was posed earlier: why does our OKR method lead to so many key results that are in the green when our most important outcomes are in the red? We hear that question when what we planned and executed doesn't always actually connect to achieving our most important outcomes. We may do all the things and still miss on our most important measure. Both of those scenarios can be avoided by focusing our key result creation on how we'll know, quantifiably, that we're making progress on or have achieved our most important outcomes, and by not letting leaders or people list key results that don't describe our most important progress measures and outcomes.


In other words, not letting people put activities in our key results. Only objectively measurable progress indicators and measures of success should be in our key results. And when leaders can't enunciate measurable progress indicators or measures of success for their and their team's work, I mean, we've really got to examine that. If all you can do is identify the activities that need to be done and not the results that they're driving, then that merits careful examination. This kind of scrutiny is applied all the time in workshops with staff deep in the organization, where they get stuck in activity quicksand when they're ideating key results.


As OKR coaches, we learn to ask the question: if you do that activity, and there's no change, there's no outcome, there's no shift, then was that activity a success? Almost always, the answer is, "No, of course not." Sometimes, the staffer can come up with an outcome that is so obvious that they didn't even think about it right away. But we apply that kind of scrutiny to key results in our OKR workshops deeper in the organization. And too frequently, I see senior leaders' good explanations for why they can't write measurable key results being accepted. But what that leads to is a set of OKRs that includes activity key results at the top level of the organization, which doesn't set a good example for the rest of the org. We've already heard some of the issues that activity KRs can create that slow down our OKR maturity.




When we get to that "OKRs aren't working" stage, I challenge you to take a look. I'll bet when you look at that set of KRs, you're going to see that a lot of what's not working is that you've got activity-based key results, and we need to do the effort of making sure that our key results reflect our most important progress measures, and our most important outcome measures, instead of activities.


Third, “OKRs aren't working” itself is an evaluative statement. It's a declaration. And when our OKR practices are overly evaluative or judgmental, when there are negative consequences for a key result coming in in the red, for example, the lesson people learn is: set goals we know we can achieve. Keep them in the green at all times. And that way, we'll skate by without getting in trouble. But that's a whole lot like the "we can only set goals we control" fallacy.


For all the reasons discussed above, when we only set goals we know we can safely achieve, we tend to stay plenty busy with activities that help us feel productive. We might not be achieving our actual outcomes of primary importance. Instead of furrowing our brows and judging OKR attainment harshly, which encourages that "in my control"-based key result creation, let's relax those forehead muscles and get curious. What can we learn from our OKR non-achievement that might help us improve in the future? What new information do we have today that helps us reset our course that we didn't have six months ago when we did our best to write these goals?


If we didn't write the right key results when we set these goals three or six months ago, what different key results could we experiment with next to see if we can find data that actually helps us know we're achieving or making progress? Because remember, there is no perfect or right key result, there are only our best guesses at what may be possible to achieve in our next time cycle that we'll aim for and stretch for. If we come up short, we'll learn from it.


Last, when our OKRs aren't working, we're often in a situation where our OKR operations might be designed for the L1's benefit, or our senior-most leader's benefit, but that might not be providing widespread value. We might have OKR ops that are too high of overhead. But often, it's less that our OKR rhythms are too high of overhead, and more often it's that our OKR investment of time and rhythms just isn't providing the value that we need it to in terms of helping us run the business and make our decisions. Often, much of OKR tracking is designed to benefit that senior-most leader by creating and organizing systems of aligned goals that roll up to a dashboard or tracker that lets that senior-most leader see how their business is performing on our most important measures of progress and success.


In practice, that often means a lot of labor and work is being done for the benefit of our senior-most leader's confidence in the information they're receiving. But the rest of the organization is experiencing and seeing the labor, not the benefits of all that work. What's important about a mature OKR implementation is not that we have a tightly structured and organized tracking system. What's important is that our effort and learning are invested in making sure that the senior-most leader is clearly communicating their expectations and the measures of progress and success that are most important for the organization to achieve. Because that's how we ultimately achieve what our OKRs are designed to help us do.




Then, it's important that people in the organization are able to understand and align with their supervisors, colleagues, and cross-functional partners about where and how they're responsible for supporting those measures of progress and success. I would forecast higher performance in a situation where the senior-most leader or L1 OKRs communicate clear, quantified expectations of what progress and success actually mean, without any tracking of OKRs beyond L1, with only people self-organizing and working to align with their leaders and colleagues to decide which of those important progress and success measures they can contribute to improvement on. I would rather see that, and I would expect higher performance in that kind of environment, than in an environment with a perfectly instrumented multiple-layer tracking system with poorly crafted OKRs that aren't actually objectively measurable, and that create a pretty dashboard for our leader, but don't actually help us know, clearly, what must be achieved or how our work matters.


I have seen a lot of those implementations where all or most of the energy goes to the tracking and the accountability measures, with little focus going to the creation of actual aspirational and stretch OKRs that help us understand what's most important to achieve, and where our most important progress is to be made. What are the big takeaways here that you can apply in your own mid-year reset?


First, if you're in the "OKRs aren't working" danger zone, before you chuck the program you and your OKR team have worked so hard to implement, remind yourself that OKRs not working is a feature, not a bug. When our OKRs aren't working, we can take a good hard look at where in the organization we haven't yet identified progress and success measures that help us run the business. OKRs that aren't working help us narrow our focus. We don't have to spend time resetting the ones that are. We can laser focus on the ones that aren't, or where the organization's needs aren't being met. And we can get honest about where our behavior and conditioning patterns are actually what's not working, where it's not just our OKRs.


When our OKRs aren't working, that means we've gained important data and information we may not have otherwise, that we can use to create a clearer and more coherent path forward. Most importantly, when our OKRs aren't working, it means that your organization is one step closer to unlocking the next phase of OKR maturity. When you're developing objectives and key results that actually help the organization operate with more clarity, where we're able to be curious and learning-focused about how we can all improve and achieve.


If your OKRs aren't working, I would love to hear from you, so shoot me a note. I'd love to hear what's not working, and I might hit you up to be a guest on a future episode to talk through the challenges you're having and potential approaches to right the course. I can also reply with some coaching for the situations that listeners bring up in future episodes. So send over your scenarios: what's not working, what are you struggling with, what are the challenges you're running into. Now is an especially good time to tackle the “OKRs aren't working” problem in earnest, because we've still got half of 2023 left for you to right the course and achieve what's most important for your business this year.




[0:24:02] SL: All right, friends. That's it for today. Thank you for joining and listening. I really can’t wait to hear from you about what in that intro resonated, where you got stuck or confused, and remember, that's always on me, not you, so I would love to hear your feedback. Don’t forget to pop over to findrc.co/waitlist if you want the one e-mail that includes everything that is about to launch around here, so you can hear about everything all at once.


If there's anything you have questions about, you can find me at Sara Lobkovich pretty much everywhere. I'm pretty sure I'm the only one. It's S-A-R-A-L-O-B-K-O-V-I-C-H. No, nothing here is easy to spell. I’m sorry. I'd be thrilled to have you as an email subscriber for infrequent, more formal, just-business messages. You can subscribe at findrc.co/subscribe. I also have a more personal list that takes side trails into topics around well-being, mental and emotional health, and my motorcycle racing life, and other serendipity at saralobkovich.com.


You'll find a shortcut to the show notes for today's episode via thinkydoers.com. You’re always invited to drop me an email. The easiest one to spell is [email protected]. If you've got other ThinkyDoers in your work world, please pass this episode along. We really appreciate your referrals, your mentions, your shares, and your reviews. Thank you for tuning in today and I look forward to hearing the questions that this prompts for you.