Principles of Agile Metrics

Recorded at Lean Agile Manchester on 21st June 2017
Sunil Mundra: Principles of Agile Metrics

Transcript

All right, I’m back again with my second topic, and I’ve taken on a challenge: this is a conference talk which I originally had one hour to deliver, and now I’m going to deliver it in 10 minutes, so let’s see how I do, all right? I’m going to rush through. Happy to take questions and have a discussion after all the talks; I’m more than happy to talk about it. This is a topic which is very, very near and dear to my heart, for multiple reasons, all right?

Let’s first understand, before we look at the principles: why do we have metrics? Yeah, it’s important to understand this, because it will help us in designing our metrics. There are multiple reasons why we have metrics. You want to measure outcomes, okay? You want to track progress, both effectiveness and efficiency. You want to guide decision making, so based on the data you want to decide the way forward.

Last but not least, and this is the one people forget, you want to influence behaviours, because people behave the way you measure them. I think this is one of the most important things people forget when they’re introducing agility: they keep the old metrics and try to introduce new practices, and when the practices don’t work, people just keep wondering why. This is the reason. The metrics are not influencing behaviours.

All right. Let’s look at metrics: why don’t traditional metrics work in an Agile world? That’s because Agile is different. Why is Agile different? Let’s look at some of the things that make it different. It’s recognising that software development is not a typical, repetitive manufacturing activity. Today I think there is recognition that software development is more a social activity than a coding activity: a lot of conversations need to happen for good quality, valuable software to come out. Software is not tangible, it’s to be delivered continuously, and of course it has to be prioritised based on value.

So let’s look at some of the chief principles. Outcome or activity? This is one problem I’ve found in all my consulting engagements: I’ve had real difficulty in getting people to appreciate the difference between activity, output and outcome. Any of you have that problem? Yes? The problem is that in traditional ways of working we try to optimise activities, and therefore we try to measure them. But in Agile, what you’re interested in is outcomes, and separating activities and output from measurable outcomes is, I think, really the key to having an effective metric.

Value or volume, right? It’s not about the number of hours you worked. It’s not about how many lines of code you have written. Not too long ago, people were measuring KLOC, thousands of lines of code, yeah? It’s about the value that we are trying to deliver; that’s what needs to be measured.

Trends or absolute numbers? ‘Oh, the velocity has dropped.’ Yeah, for one iteration, and the manager is pulling their hair out, right? What happened? It’s bound to happen, for multiple reasons. It surely doesn’t mean that you don’t look at the drop, but is a single number enough for anyone to initiate action? What is important is the trend, more than any one number. There is a famous industrialist in India, Ratan Tata; his company owns Jaguar Land Rover in the UK, by the way. One of the statements he made is, ‘If the line is straight, like a heartbeat line, it means you’re dead.’ The line is always going to vary up and down; what matters is to look at the trend when we take actions or try to interpret our metrics.
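To make ‘look at the trend’ concrete, here is a minimal Python sketch; the iteration velocities are hypothetical, and a rolling average is just one simple way to smooth out iteration-to-iteration noise:

```python
# Hypothetical velocities for eight iterations: they wobble, as real ones do.
velocities = [21, 24, 18, 26, 23, 19, 25, 22]

def rolling_average(values, window=3):
    """Average each window of iterations to expose the trend behind the noise."""
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]

print([round(v, 2) for v in rolling_average(velocities)])
# [21.0, 22.67, 22.33, 22.67, 22.33, 22.0]
```

The single drop to 18 looks alarming on its own; the smoothed series shows a team delivering steadily in the low twenties, with no action needed.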

Assessment or measurement? There are a lot of things we try to measure, and a lot of vanity metrics we create which really don’t add value. So what is important is: what is giving us real, meaningful information? How can we actually understand what is happening on the ground, and what is telling us the truth? That assessment is more important than just the measurement.

Improvement or fault finding? What’s the first reaction when something goes wrong? What’s the manager’s reaction? ‘Whose neck can I catch?’ It’s not about finding the fault and blaming someone. The way we need to design metrics is: can they tell us areas of improvement? The cumulative flow diagram is a fantastic metric, as was mentioned in the previous talk, right? It tells us where the bottlenecks are and what part of the process we need to improve to improve the flow. Those are the types of metrics we need to look at.
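As a rough illustration of why a cumulative flow diagram points at bottlenecks, here is a minimal Python sketch; the stages, item ids and daily snapshots are all hypothetical:

```python
from collections import Counter

STAGES = ["todo", "dev", "test", "done"]

# Hypothetical daily snapshots, each mapping work item -> current stage.
snapshots = [
    {"A": "dev",  "B": "todo", "C": "todo", "D": "todo"},
    {"A": "test", "B": "dev",  "C": "dev",  "D": "todo"},
    {"A": "test", "B": "test", "C": "dev",  "D": "dev"},
    {"A": "test", "B": "test", "C": "test", "D": "dev"},
]

for day, snapshot in enumerate(snapshots, start=1):
    counts = Counter(snapshot.values())
    # On a CFD each stage is a band; a band that keeps widening is the bottleneck.
    print(f"day {day}:", {stage: counts.get(stage, 0) for stage in STAGES})

# The "test" count grows 0 -> 1 -> 2 -> 3 while nothing reaches "done":
# work is piling up in testing, which is exactly where to go and investigate.
```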

Transparent and visible at all times. The whole team needs to know what we are measuring, and the whole team needs to know how we are doing on that measure. It’s not about some software suddenly churning out a metric magically and saying, ‘Ta-da, here’s how you’ve done.’ That doesn’t help. We need to maintain transparency about what we are measuring and why we are measuring it, and make that visible to all the stakeholders.

Team level, not individual level. This is another thing we see: we are so oriented towards measuring individual performance, but Agility, as we know, is a lot about team effort. I have seen instances of tracking ‘how many story points has a developer delivered in an iteration?’ Does that help? Not really, right? You’re introducing the wrong behaviours when you try to measure at the individual level; you’re optimising at the wrong level. You don’t want to track things at the individual level when it is the team that delivers the value. So we’ve got to have metrics at the team level, not the individual level.

Not be prone to gaming. ‘I want velocity increased by 30%.’ What happens? Then somebody will … a 3-pointer becomes a 5-pointer, a 1-pointer becomes a 3-pointer, and lo and behold, your velocity’s gone up by 30%. Yeah? So we need to be careful about the way we use our metrics. What are we using our metrics to measure? What’s the real value of calculating velocity? Is it to help us work out how much we can deliver in the future and what the cadence of the team is, or is it a way to measure the productivity of the team, and what are you trying to do to enhance that productivity? It is very hard to design a metric which is completely gaming-free, but I think we can make some effort towards making it less gameable.
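Here is a minimal Python sketch of that re-estimation arithmetic; the story estimates are hypothetical:

```python
# Honest estimates for five stories, then the same stories after the 1s
# and 3s are quietly nudged up a notch on the Fibonacci scale.
before = [1, 3, 3, 5, 8]   # 20 points
after  = [3, 5, 5, 5, 8]   # 26 points, same work

increase = (sum(after) - sum(before)) / sum(before)
print(f"velocity 'up' {increase:.0%}")  # velocity 'up' 30%
```

The target is met on paper, and nothing extra reaches the customer.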

You’ve all seen this: Team A is delivering 20 points, Team B is delivering 30 points, so Team B is better than Team A. Wrong, isn’t it? The team delivering the lower number will simply rescale its estimates from 1 point to 10 points, so 1 becomes 10, 2 becomes 20, 3 becomes 30, and there’s no point in comparing teams across these kinds of numbers.

We need to have metrics, and I think I could have used a better word there: what I mean is that we need to look at metrics holistically. For example, you might set a target to increase velocity, and the team actually goes faster, but at the cost of quality. When you are looking at velocity, you also need to look at quality numbers, to ensure that quality is not compromised for the sake of going faster. That’s why you shouldn’t look at a single metric in isolation: you need to look at metrics holistically, to ensure that the other aspects of delivery don’t crumble and the team isn’t optimising one thing at the cost of another.
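As a small illustration of pairing velocity with a quality signal, here is a minimal Python sketch; the per-iteration figures are hypothetical:

```python
# Hypothetical per-iteration figures: points delivered and escaped defects.
velocity = [20, 22, 27, 31]
defects  = [2, 2, 6, 9]

# Read the two together: rising velocity alongside rising defects suggests
# the team is going faster at the cost of quality.
for i in range(1, len(velocity)):
    if velocity[i] > velocity[i - 1] and defects[i] > defects[i - 1]:
        print(f"iteration {i + 1}: faster, but quality is slipping")
# iteration 3: faster, but quality is slipping
# iteration 4: faster, but quality is slipping
```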

Linking a metric to a monetary incentive is a recipe for disaster. This usually doesn’t happen with delivery teams, but it happens at slightly higher levels, where you give productivity-linked bonuses or something like that, and usually the outcome is suboptimal from a systemic perspective. I’m proud to say that ThoughtWorks is one of the few companies which don’t pay commission to salespeople. We used to, but we realised that it introduces the wrong behaviours, and we moved away from it. Linking a metric to a monetary incentive will almost always result in people optimising locally, or gaming the metric, so we need to be careful about that.

We need to incentivise the right behaviours. The most common example I see: you want testers and developers to collaborate, but at the same time the tester is measured on the number of defects they find during development. If you’re going to do that, what is the incentive for the tester to collaborate? Is that metric incentivising the right behaviour? Is it about the number of defects, or is it about delivering software which makes the customer happy and which actually has no defects? What do we really want to achieve? We need to measure that, and we need to design our metrics accordingly.

Metrics should be actionable. Coming back to the cumulative flow diagram: that’s an actionable metric. You know where the problem is and what action you need to take. It’s a pointer, right? It could be that dependencies have blocked that part of the process, or it could be a sheer lack of capacity in that part of the process. You still need to investigate and find out, but it leads you to investigate something, which results in action. Compare that with saying, ‘Oh, my development is 100% done, but testing is only 50% done.’ That doesn’t leave you much scope for taking action.

And dashboards: be vigilant about them. What do we see in most traditional projects? The project plan is all green at 90% done, right? And in the last 10%, all hell breaks loose. Why? Because a dashboard is a culmination of things: when you say it’s red, green or amber at the dashboard level, there might be things hidden underneath which are not telling the real story of what’s on the ground. Dashboards are an abstraction over multiple metrics, so we need to be careful about them.

That’s it. Thank you.

Further information...

The content of this article is just the tip of the iceberg. To dive deeper into any of these case studies or concepts, join our 2-day Portfolio Kanban live online course.
