Agile Metrics
Summary: Agile metrics provide insight into productivity through the different stages of a software development lifecycle. This helps to assess the quality of a product and track team performance.
Metrics are a touchy subject.
On the one hand, we've all been on a project where no data of any kind was tracked, and it was hard to tell whether we were on track for release or getting more efficient as we went along. On the other hand, many of us have had the misfortune of being on a project where stats were used as a weapon, pitting one team against another or justifying mandatory weekend work. So it's no surprise that most teams have a love/hate relationship with metrics.
But it doesn't have to be this way. Tracking and sharing sound agile metrics can reduce confusion and shine a light on the team's progress (and setbacks) throughout the development cycle. Here's how.
Know your business
"Done" only tells half the story. It's about building the right product, at the right time, for the right market. Staying on track throughout the program means collecting and analyzing relevant data along the way. In any agile program, it's important to track both business metrics and agile metrics. Business metrics focus on whether the solution is meeting the market need, and agile metrics measure aspects of the development process.
"A program's business metrics should be rooted in its roadmap."
For each initiative on the roadmap, include several key performance indicators (KPIs) that map to the program's goals. In addition, include success criteria for each product requirement such as adoption rate by end users or percentage of code covered by automated tests. These success criteria feed into the program's agile metrics. And the more teams learn, the better they can adapt and evolve.
How to use agile KPI metrics to optimize your delivery
Sprint burndown
Scrum teams organize development into time-boxed sprints. At the outset of the sprint, the team forecasts how much work it can complete during the sprint. A sprint burndown report then tracks the completion of work throughout the sprint. The x-axis represents time, and the y-axis represents the amount of work left to complete, measured in either story points or hours. The goal is to have all the forecasted work completed by the end of the sprint.
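To make the mechanics concrete, here is a minimal sketch (in Python, with hypothetical issue data and dates) of the calculation behind the chart: for each day of the sprint, sum the story points that are not yet done. That per-day series is the burndown line.

```python
from datetime import date, timedelta

# Hypothetical sprint issues: story points and the date each was completed
# (None means still open). All names and numbers are illustrative only.
issues = [
    {"points": 5, "done_on": date(2024, 3, 4)},
    {"points": 3, "done_on": date(2024, 3, 6)},
    {"points": 8, "done_on": None},
    {"points": 2, "done_on": date(2024, 3, 7)},
]

sprint_start = date(2024, 3, 1)
sprint_length_days = 10

def remaining_points(as_of: date) -> int:
    """Story points not yet completed as of the given day."""
    return sum(
        issue["points"]
        for issue in issues
        if issue["done_on"] is None or issue["done_on"] > as_of
    )

# One data point per sprint day: this series is what the burndown line plots.
for offset in range(sprint_length_days + 1):
    day = sprint_start + timedelta(days=offset)
    print(day, remaining_points(day))
```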
A team that consistently meets its forecast is a compelling advertisement for agile in their organization. But don't let that tempt you to fudge the numbers by declaring an item complete before it really is. It may look good in the short term, but in the long run, it only hampers learning and improvement. Here are a few anti-patterns to watch for:
- The team finishes early sprint after sprint because they aren't committing to enough work.
- The team misses their forecast sprint after sprint because they're committing to too much work.
- The burndown line makes steep drops rather than a more gradual burndown because the work hasn't been broken down into granular pieces.
- The product owner adds or changes the scope mid-sprint.
Epic and release burndown
Epic and release (or version) burndown charts track the progress of development over a larger body of work than the sprint burndown, and guide development for both scrum and kanban teams. Since a sprint (for scrum teams) may contain work from several epics and versions, it's important to track the progress of individual sprints as well as of epics and versions.
"Scope creep" is the injection of more requirements into a previously-defined project. For example, if the team is delivering a new website for the company, scope creep would be asking for new features after the initial requirements had been sketched out. While tolerating scope creep during a sprint is bad practice, scope change within epics and versions is a natural consequence of agile development. As the team moves through the project, the product owner may decide to take on or remove work based on what they're learning. The epic and release burn down charts keep everyone aware of the ebb and flow of work inside the epic and version.
- Epic or release forecasts aren't updated as the team churns through the work.
- No progress is made over a period of several iterations.
- Chronic scope creep, which may be a sign that the product owner doesn't fully understand the problem that body of work is trying to solve.
- Scope grows faster than the team can absorb it.
- The team isn't shipping incremental releases throughout the development of an epic.
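As a rough illustration of the scope tracking described above, here is a minimal sketch, with hypothetical numbers, that updates an epic's remaining work sprint by sprint while keeping scope additions and removals visible alongside completed work.

```python
# Minimal sketch of an epic burndown that separates completed work from
# scope change. All numbers are hypothetical.
sprints = [
    # points completed this sprint, points added to scope, points removed
    {"completed": 20, "added": 0,  "removed": 0},
    {"completed": 15, "added": 10, "removed": 0},
    {"completed": 18, "added": 5,  "removed": 8},
]

initial_scope = 120
remaining = initial_scope

for i, sprint in enumerate(sprints, start=1):
    remaining += sprint["added"] - sprint["removed"] - sprint["completed"]
    print(f"Sprint {i}: remaining={remaining}, "
          f"scope change={sprint['added'] - sprint['removed']:+d}")
```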
Velocity
Velocity is the average amount of work a scrum team completes during a sprint, measured in either story points or hours, and is very useful for forecasting. The product owner can use velocity to predict how quickly a team can work through the backlog, because the report tracks the forecasted and completed work over several iterations–the more iterations, the more accurate the forecast.
Let's say the product owner wants to complete 500 story points in the backlog. We know that the development team generally completes 50 story points per iteration. The product owner can reasonably assume the team will need 10 iterations (give or take) to complete the required work.
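The forecast above is simple arithmetic. A minimal sketch, assuming a hypothetical sprint history, might look like this:

```python
import math

# Hypothetical completed story points for the last few sprints.
completed_per_sprint = [48, 52, 47, 55, 50]

# Velocity is the average completed work per sprint.
velocity = sum(completed_per_sprint) / len(completed_per_sprint)
backlog_points = 500

# Round up: a partial sprint still takes a whole sprint on the calendar.
forecast_sprints = math.ceil(backlog_points / velocity)
print(f"Average velocity: {velocity:.1f} points per sprint")
print(f"Forecast: ~{forecast_sprints} sprints to clear {backlog_points} points")
```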
It's important to monitor how velocity evolves over time. New teams can expect to see an increase in velocity as they optimize relationships and the work process. Existing teams can track their velocity to ensure consistent performance over time, and can confirm whether a particular process change led to improvements. A decrease in average velocity is usually a sign that some part of the team's development process has become inefficient and should be brought up at the next retrospective.
When velocity is erratic over a long period of time, always revisit the team's estimation practices. During the team's retrospective, ask the following questions:
- Are there unforeseen development challenges we didn't account for when estimating this work? How can we better break down work to uncover some of these challenges?
- Is there outside business pressure pushing the team beyond its limits? Is adherence to development best practices suffering as a result?
- As a team, are we overzealous in forecasting for the sprint?
Each team’s velocity is unique. If team A has a velocity of 50 and team B has a velocity of 75, it doesn't mean that team B has higher throughput. Since each team's estimation culture is unique, their velocity will be as well. Resist the temptation to compare velocity across teams. Measure the level of effort and output of work based on each team's unique interpretation of story points.
Control chart
Control charts focus on the cycle time of individual issues–the total time from "in progress" to "done". Teams with shorter cycle times are likely to have higher throughput, and teams with consistent cycle times across many issues are more predictable in delivering work. While cycle time is a primary metric for kanban teams, scrum teams can benefit from optimized cycle time as well.
Measuring cycle time is an efficient and flexible way to improve a team's processes because the results of changes are discernible almost immediately, allowing the team to make further adjustments right away. The end goal is to have a consistent and short cycle time, regardless of the type of work (new feature, technical debt, etc.).
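Here is a minimal sketch of the underlying measurement, assuming hypothetical issue keys and transition timestamps: cycle time is the elapsed time between an issue entering "in progress" and reaching "done", and the mean or median across issues is what you watch for trends.

```python
from datetime import datetime
from statistics import mean, median

# Hypothetical issues with the timestamps when work started and finished.
issues = [
    {"key": "APP-101", "in_progress": datetime(2024, 3, 1, 9),  "done": datetime(2024, 3, 3, 17)},
    {"key": "APP-102", "in_progress": datetime(2024, 3, 2, 10), "done": datetime(2024, 3, 4, 12)},
    {"key": "APP-103", "in_progress": datetime(2024, 3, 3, 9),  "done": datetime(2024, 3, 8, 15)},
]

# Cycle time in days for each issue: total time from "in progress" to "done".
cycle_times = [
    (issue["done"] - issue["in_progress"]).total_seconds() / 86400
    for issue in issues
]

print(f"Mean cycle time:   {mean(cycle_times):.1f} days")
print(f"Median cycle time: {median(cycle_times):.1f} days")
```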
Control charts can appear fickle at first. Don't be too concerned with every outlier. Look for trends. Here are two areas to watch out for:
- Increasing cycle time - Increasing cycle time saps the team of its hard-earned agility. In the team retrospective, take time to understand an increase. One exception: if the team's definition of done has expanded, cycle time will probably expand too.
- Erratic cycle time - The goal is to have consistent cycle time for work items that have similar story point values. Filter the control chart for each story point value to check for consistency. If cycle time is erratic on small and large story point values, spend time in the retrospective examining the misses and improving future estimation.
Cumulative flow diagram
A cumulative flow diagram shows how many issues sit in each workflow state over time, with time on the x-axis, issue count on the y-axis, and a color band for each state. The diagram should look smooth(ish) from left to right. Bubbles or gaps in any one color indicate shortages and bottlenecks, so when you see one, look for ways to smooth out the color bands across the chart (a short sketch of the underlying counts follows the list below). Here are two anti-patterns to watch for:
- Blocking issues create large backups in some parts of the process and starvation in others.
- Unchecked backlog growth over time. This results from product owners not closing issues that are obsolete or simply too low in priority to ever be pulled in.
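For illustration, here is a minimal sketch, with hypothetical daily snapshots, of the counts a cumulative flow diagram stacks into its color bands: the number of issues in each workflow state on each day.

```python
from collections import Counter
from datetime import date

# Hypothetical snapshot of each issue's workflow state on a given day.
# A real report takes one snapshot per day and stacks the counts over time.
snapshots = {
    date(2024, 3, 4): ["To Do", "To Do", "In Progress", "Done"],
    date(2024, 3, 5): ["To Do", "In Progress", "In Progress", "Done", "Done"],
    date(2024, 3, 6): ["To Do", "In Progress", "Done", "Done", "Done"],
}

states = ["To Do", "In Progress", "Done"]

# One row per day: the per-state counts that the diagram stacks into bands.
for day, issue_states in sorted(snapshots.items()):
    counts = Counter(issue_states)
    row = ", ".join(f"{state}: {counts.get(state, 0)}" for state in states)
    print(day, "->", row)
```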
Even more metrics
Good metrics aren't limited to the reports discussed above. For example, quality is an important metric for agile teams, and there are a number of traditional metrics that can be applied to agile development (a small tally sketch follows the list):
- How many defects are found:
- during development?
- after release to customers?
- by people outside of the team?
- How many defects are deferred to a future release?
- How many customer support requests are coming in?
- What is the percentage of automated test coverage?
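As a rough illustration, the sketch below tallies hypothetical defect records by where each defect was found and computes test coverage as a simple percentage; real numbers would come from your issue tracker and coverage tooling.

```python
# Hypothetical defect records tagged with where each one was found.
defects = [
    {"id": "BUG-1", "found": "development"},
    {"id": "BUG-2", "found": "development"},
    {"id": "BUG-3", "found": "customer"},
    {"id": "BUG-4", "found": "external"},  # found by people outside the team
]

# Count defects by discovery point.
counts = {}
for defect in defects:
    counts[defect["found"]] = counts.get(defect["found"], 0) + 1
print("Defects by discovery point:", counts)

# Automated test coverage as a simple percentage of covered lines.
covered_lines, total_lines = 8200, 10500
print(f"Automated test coverage: {covered_lines / total_lines:.0%}")
```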
Agile teams should also look at release frequency and delivery speed. At the end of each sprint, the team should release software to production. How often is that actually happening? Are most release builds getting shipped? In the same vein, how long does it take for the team to get an emergency fix out to production? Is releasing easy for the team, or does it require heroics?
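These release questions are measurable too. Here is a minimal sketch, with hypothetical timestamps, that derives release frequency from production deployment dates and the lead time for an emergency fix.

```python
from datetime import datetime

# Hypothetical production deployment timestamps over a few weeks.
deployments = [
    datetime(2024, 3, 1, 14), datetime(2024, 3, 8, 15),
    datetime(2024, 3, 15, 11), datetime(2024, 3, 22, 16),
]

# Releases per week over the observed window.
window_days = (deployments[-1] - deployments[0]).days
releases_per_week = (len(deployments) - 1) / (window_days / 7)
print(f"Release frequency: {releases_per_week:.1f} per week")

# Lead time for an emergency fix: from the fix being committed to it reaching production.
fix_committed = datetime(2024, 3, 23, 9, 30)
fix_deployed = datetime(2024, 3, 23, 13, 45)
print(f"Emergency fix lead time: {fix_deployed - fix_committed}")
```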
Find insights in context
Insights are a great tool for teams to access metrics when they need them: during sprint planning, checking in at the daily standup, or optimizing delivery. You can find insights in the upper right-hand corner of the board, backlog, and deployments view of Jira.
Insights give a visual snapshot of the following metrics:
- Sprint progress
- Issue type breakdown
- Sprint commitment
- Deployment frequency
- Cycle time
Use these metrics to continuously optimize team performance. Learn more about insights.