What are DORA metrics and why should you care?
Do you have a system for understanding how your software engineering processes are performing? Are you confident that your processes and practices actively support high-quality outcomes?
Today’s tech companies operate in an increasingly performance-driven culture, which means they need to measure the performance of their development teams if they want to drive long-term success. In this context, a growing number of engineering leaders are leveraging DevOps Research and Assessment (DORA) metrics to better understand their teams’ performance.
Named after a Google research group that analyzed DevOps practices and capabilities to measure software development and delivery performance, DORA metrics are a set of four performance indicators that collectively provide key insights into workflow effectiveness. This visibility and understanding can help teams increase velocity, reduce operational costs, and better set customer expectations.
In this blog post we’re exploring what DORA metrics are, how they’re measured, and the benefits they provide to engineering teams.
What are DORA metrics, exactly?
DORA metrics are a set of four metrics that were identified by Google’s DORA team after six years of research. Helping to solve the common challenge of measuring DevOps performance, they serve as a standard mechanism for engineering organizations worldwide to evaluate and improve their software development practices. By tracking these metrics and displaying them on easily accessible dashboards, organizations can get a clear view of their DevOps performance and identify areas for improvement.
The four DORA metrics are:
- Deployment Frequency: how often code is deployed to production
- Lead Time for Changes: how long it takes for a commit to reach production
- Change Failure Rate: the percentage of deployments that cause failures in production
- Mean Time to Recovery: how long it takes to recover from a failure in production
While each of these metrics offers important information on its own, they are best used collectively to get a clearer understanding of where the gaps are and where a team could stand to improve. What follows is a deeper exploration of each metric.
Deployment Frequency
Deployment Frequency measures how often a team successfully deploys code for a given application. Depending on a team’s goals and the type of application it serves, this can be measured on a daily, weekly, or even monthly basis. That said, high-performing DevOps teams typically aim to deploy multiple small changes regularly.
In Google’s 2021 State of DevOps report, the DORA team set clear benchmarks for each of the metrics. For Deployment Frequency, the performance categories are:
- Elite performers: Multiple deployments a day
- High performers: Between one deployment a day and one a week
- Medium performers: Between one deployment a week and one a month
- Low performers: Between one deployment a month and one every six months
While frequency is one aspect of the metric, the other component is what counts as a successful deployment. Teams need to define what “successful” means in the context of deploying code, and they need a system for classifying deployments accordingly.
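As a rough illustration, here’s a minimal sketch of how this might be computed, assuming deployment timestamps have already been collected from a CI/CD pipeline and filtered down to successful deployments (the function and parameter names are illustrative, not from any particular tool):

```python
from datetime import datetime, timedelta, timezone

def deployment_frequency(successful_deploys: list[datetime], window_days: int = 30) -> float:
    """Average successful deployments per day over a trailing window.

    Assumes each timestamp is timezone-aware UTC and represents a
    deployment the team has already classified as successful.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=window_days)
    recent = [t for t in successful_deploys if t >= cutoff]
    return len(recent) / window_days
```

The hard part in practice isn’t the arithmetic; it’s agreeing on the definition of “successful” and recording it consistently.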
Lead Time for Changes
This measures the time from when code is committed to when it reaches production. In other words, this metric helps teams understand the speed of their software delivery. At the end of the day, the lower the Lead Time for Changes, the more efficiently a DevOps team deploys code.
Two pieces of data are used to make this measurement: the exact time of the commit and the exact time of the deployment. The average across a given period is then used to assess overall performance.
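Here’s a minimal sketch of that calculation, assuming each change’s commit timestamp can be paired with its production deploy timestamp (again, the names are illustrative):

```python
from datetime import datetime
from statistics import mean

def lead_time_for_changes(changes: list[tuple[datetime, datetime]]) -> float:
    """Average hours from commit to production deploy.

    Each tuple is (commit_time, deploy_time) for one change, pulled
    from the version control system and deploy pipeline respectively.
    """
    return mean(
        (deployed - committed).total_seconds() / 3600
        for committed, deployed in changes
    )
```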
The DORA team’s benchmarks for this metric are as follows:
- Elite performers: Less than one hour
- High performers: One day to one week
- Medium performers: One month to six months
- Low performers: More than six months
It’s important not to sacrifice quality while improving the speed of change. By offering templates with best practices and standards baked in, so your team can self-serve things like service creation and other common operational tasks, you can reduce Lead Time for Changes without sacrificing quality.
Change Failure Rate
Also known as CFR, this is the percentage of code releases that result in downtime, degraded service, or rollbacks. While the previous two metrics look at speed and efficiency, CFR looks at the quality and stability of the code.
The Change Failure Rate is calculated by dividing the number of deployments that caused a failure by the total number of deployments. Tracked over time, this helps teams understand how much time they’re spending on fixing and remediating errors instead of delivering new code.
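A minimal sketch of the calculation, assuming the team has already agreed on what classifies a deployment as failed (downtime, degraded service, or a rollback):

```python
def change_failure_rate(failed_deploys: int, total_deploys: int) -> float:
    """Percentage of deployments that caused a failure in production."""
    if total_deploys == 0:
        return 0.0  # no deployments, nothing to fail
    return failed_deploys / total_deploys * 100
```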
When it comes to the benchmarks for this metric, the goal is to have as low a percentage as possible:
- Elite performers: 0–15%
- High, medium, and low performers: 16–30%
Anything over 30% indicates a significant problem with a team’s code quality and reliability, and calls for an overhaul of how the team builds and releases code.
Mean Time to Recovery
Otherwise known as MTTR, Mean Time to Recovery measures the average time it takes to restore service when there’s a disruption. Even on a high-performing DevOps team, incidents happen, and the team needs to be well equipped to respond to and remedy them. Teams with a low MTTR are more likely to feel comfortable experimenting, innovating, and finding new ways to enhance their product, which can increase the company’s competitive advantage.
MTTR is typically calculated by tracking the average time between a bug or incident being reported and the moment a fix is deployed.
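As a sketch, assuming each incident’s report and recovery timestamps are recorded (the function name is illustrative):

```python
from datetime import datetime
from statistics import mean

def mean_time_to_recovery(incidents: list[tuple[datetime, datetime]]) -> float:
    """Average hours from an incident being reported to service
    being restored (e.g. the moment a fix is deployed)."""
    return mean(
        (restored - reported).total_seconds() / 3600
        for reported, restored in incidents
    )
```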
According to DORA, the performance benchmarks for this metric are:
- Elite performers: Less than one hour
- High performers: Less than one day
- Medium performers: From one day to one week
- Low performers: More than six months
Leaders can help their teams respond to incidents quickly and effectively by giving them visibility into the entire software ecosystem (and ensuring the software catalog is up to date) and by establishing a clear incident response program.
The benefits of using DORA metrics to measure DevOps performance
While software development is inherently a science- and data-driven discipline, many of its processes are abstract and hard to measure. That difficulty was the impetus behind the DORA team’s work to define these metrics. Measuring them adds stability and tangibility to the process. By tracking DORA metrics, organizations are better positioned to identify areas of improvement and then optimize their development processes. This ultimately results in teams delivering higher-quality software faster, which is vital for today’s performance-driven companies.
Other key benefits include:
- Data-driven decision making
- Optimized workflows
- Increased value delivery
- A culture of excellence within the engineering org
Challenges of measuring DORA metrics
While DORA metrics provide much-needed structure for DevOps teams, there are also challenges that come with using them. For many engineering teams, a cultural and procedural shift needs to happen before they can measure the four metrics effectively.
Some challenges to keep in mind include:
- Decentralized data that is hard to consolidate
- Data that’s only available in raw format and can’t be easily manipulated
- Data that needs to be transformed into easily measurable units
In addition, it’s important to frame each of the DORA metrics within the right context. As outlined above, Deployment Frequency alone says nothing about the quality or stability of the code, so it should be interpreted alongside the other metrics rather than in isolation.
Building a culture of continuous improvement with DORA metrics
A core tenet of the DevOps methodology is continuous, iterative improvement. Introducing performance metrics like DORA, which were designed specifically for DevOps teams, is an important way to establish a culture of ongoing growth. With the right data, teams can better understand where the gaps are and then prioritize areas of improvement based on the team’s goals and objectives as they relate to the DORA metrics.
At OpsLevel, we’re helping product teams build, ship, and fix code faster. Start a free trial to see how.