Yeah, this wasn't at all what I thought it would be about, which is the difficulty (impossibility?) of measuring developer impact against key results.
When you're a salesperson, you don't have to agonize over how to illustrate that you've "Decreased Customer Churn Rate by 10%". When you're an assembly line worker, you don't have to find a way to figure out how you contributed to "produce 15% more widgets". You either accomplish these things or you don't - they are literally descriptions of your performance in your job. Either way, nobody cares how you did (or didn't) do it.
When you're a developer, how are you supposed to show that you've helped "increase customer value from $N to $N+5"? Because "shipped the new version of the message queuing system" is not in anyone's list of key results.
OKRs feel like "organized SMART goals", and so I have the same criticism of them as I do of SMART goals: they're just another way of conceptualizing goals around things that are already easily and directly measured. No one has made any progress in quantifying the contributions of roles that don't have direct percentage impacts on dollars earned or dollars saved.
If that's what your OKRs look like as a developer, then your company is doing them very wrong. They're supposed to be hierarchical, becoming more concrete as you go down the line. Your examples sound like top-level OKRs; mine are usually something like "internationalize feature X, launch in Y locales" or "implement feature Z, measure effect on metric A".
Which is not to say the OKR system doesn't still have issues; they just look more like the ones discussed in the OP.
Yep, this would be good feedback up the chain. People above you should be working on breaking the company level goals into team and role specific ones.
Edit: just to note that I have some other criticisms of OKRs, but having them be way too high level, broad, and not actionable should not be the problem.
Please share your criticism. I would be happy to get as much perspective on the topic as possible.
The problem I encountered was having (or wanting) a product roadmap produced from stakeholder, C-level, product, and IT team input in parallel to OKRs. I feel this is an anti-pattern: either your OKRs drive the producing team's quarterly roadmap, or you aren't really applying OKRs to that team at all.
That would be nice. My intuition is that this doesn't happen because the person in the hierarchy who translates "improve customer value by x%" into "internationalize feature x, launch in Y locales" is effectively taking responsibility for showing that the latter impacts the former, which is the problem I find to be intractable.
If you’re not making some effort to measure the value and impact of what you’re doing, then how do you know if it was the right thing to allocate your effort on? Presumably the new message queueing system had some effect on customer experience or developer velocity - can you try to measure that?
It doesn’t have to be perfect, but most teams would have a lot more impact if they spent some of their time getting at least very rough estimates of the impact of their current and future projects.
As a concrete example here you should be able to translate a new queueing system into something that has business impact.
Maybe the new system requires 75% of the servers the previous one did, leading to increased revenue. Maybe it’s quicker, resulting in customers experiencing better service, and so churn is reduced from the pool of people who said “I like it, but it’s too slow”.
A queueing system in itself isn’t of any value to the business as a whole.
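To make the "translate infrastructure work into business impact" point concrete, here's a back-of-envelope sketch in Python. Every number in it (fleet size, server cost, customer counts, churn figures) is hypothetical, made up purely to illustrate the shape of the estimate:

```python
# Back-of-envelope translation of a queueing-system rewrite into dollars.
# All figures below are invented for illustration, not real data.

servers_before = 40
servers_after = int(servers_before * 0.75)  # new system needs ~75% of the fleet
cost_per_server_year = 3_000                # assumed fully-loaded annual cost

infra_savings = (servers_before - servers_after) * cost_per_server_year

customers = 10_000
revenue_per_customer_year = 500
churn_reduction = 0.005  # assumed: 0.5% fewer customers leave due to better latency

retained_revenue = customers * churn_reduction * revenue_per_customer_year

print(f"Infra savings:    ${infra_savings:,}/yr")
print(f"Retained revenue: ${retained_revenue:,.0f}/yr")
```

The exact numbers don't matter; the point is that even a rough model like this turns "we rewrote the queue" into a figure someone can weigh against other projects.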
Yes, but at some point there's serious diminishing returns on having every layer of the organization forced to rationalize their behavior this way.
If you're an average developer at a non-startup you've likely been asked to put together a queuing system. You didn't decide to do that work, and you probably shouldn't be spending weeks trying to gather the information you might need to justify that work.
Your job wasn't to figure out that 13.2% of your customers opt out of your paid reporting services because they experience slow responses and unrecorded data.
Your job is usually much closer to -
PM - Can we make this service faster? We're losing customers on this feature because it's slow.
TeamLead - Probably, we can re-implement our queuing to be quicker and more reliable.
Dev - Ok, I'll investigate [x] queuing library or service
Then you should be spending your limited time and energy on actually producing that result. The technical task is usually quite complicated (ex: here's just the table of contents for RabbitMQ https://www.rabbitmq.com/documentation.html).
The justification for the work wasn't your job to put together (although asking sane questions is usually a good call). Your justification was simply "My PM/TeamLead asked me for it".
----
I sure as hell don't want every junior dev on my team going out and trying to tease out the intrinsic business value of every task I give them.
That's a waste of my resources. That doubles up the effort that I already expect my PM to be doing. That leads to disagreements about priorities when those junior devs don't have the context about why a business decision was made and either infer it incorrectly, or spend lots of time asking when it really just doesn't impact them all that much.
OKRs are normally set at the team level and above. IMO individual OKRs are an anti-pattern, unless you’re using them for individual development goals (complete this training, etc.)
Determining the business value of your team’s various goals should be your PM’s responsibility, with input and help from your team.
In your example interaction, the only missing piece is a more specific impact estimate. Rather than “We’re losing customers on this feature because it’s slow.”, you’d want your PM to say “If this feature was X% faster, we estimate that it would reduce churn by Y% per quarter, which is worth approximately $Z/quarter to the business.” Your team can then estimate eng cost to make that improvement, and see where the benefit/cost ratio falls relative to the other things you can be working on.
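The prioritization arithmetic described above can be sketched in a few lines. This is only an illustration of the shape of the calculation; the function name, the loaded cost per engineer-week, and both project estimates are hypothetical assumptions, not anything from a real OKR process:

```python
# Sketch of the benefit/cost comparison a PM and eng team might do.
# All names and numbers are hypothetical.

def benefit_cost_ratio(quarterly_value_usd: float,
                       eng_weeks: float,
                       loaded_cost_per_week_usd: float = 5_000) -> float:
    """Estimated quarterly benefit divided by one-time engineering cost
    (assuming a flat loaded cost per engineer-week)."""
    return quarterly_value_usd / (eng_weeks * loaded_cost_per_week_usd)

# "Worth ~$Z/quarter" from the PM vs. eng effort estimates from the team,
# for two candidate projects:
speed_up_feature = benefit_cost_ratio(quarterly_value_usd=60_000, eng_weeks=4)
new_dashboard = benefit_cost_ratio(quarterly_value_usd=25_000, eng_weeks=3)

print(f"speed-up: {speed_up_feature:.2f}, dashboard: {new_dashboard:.2f}")
```

With these made-up inputs the speed-up scores higher, so it would be prioritized; the real work, as the comment says, is in producing credible estimates for the inputs, not in the division.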