What’s Wrong with DORA?

DevOps Research and Assessment (DORA) has popularised four key metrics for measuring software delivery and IT performance:

  1. Deployment Frequency
  2. Lead Time for Changes
  3. Change Failure Rate
  4. Time to Restore (1)

According to DORA, optimising these metrics leads to higher productivity, profitability, and market share. But their laser focus on velocity overlooks quality of outcomes. DORA fails to emphasise whether software updates actually meet user needs or deliver more business value. See also: The Antimatter Principle.
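
By way of illustration, here is a minimal sketch of how these four figures are often derived from raw delivery data. The record shapes, field names, and seven-day window are assumptions made purely for this example, not part of any DORA tooling.

```python
# Minimal sketch (illustrative assumptions): computing the four DORA metrics
# from simple in-memory deployment and incident records.
from datetime import datetime, timedelta

# Each deployment: (commit_time, deploy_time, caused_failure)
deployments = [
    (datetime(2024, 1, 2, 9, 0),  datetime(2024, 1, 2, 15, 0), False),
    (datetime(2024, 1, 3, 10, 0), datetime(2024, 1, 4, 11, 0), True),
    (datetime(2024, 1, 8, 8, 0),  datetime(2024, 1, 8, 12, 0), False),
]

# Each production incident: (started, restored)
incidents = [
    (datetime(2024, 1, 4, 11, 30), datetime(2024, 1, 4, 13, 0)),
]

window_days = 7  # reporting window, chosen arbitrarily for this sketch

# 1. Deployment Frequency: deployments per day over the window
deployment_frequency = len(deployments) / window_days

# 2. Lead Time for Changes: average commit-to-production duration
lead_times = [deployed - committed for committed, deployed, _ in deployments]
lead_time_for_changes = sum(lead_times, timedelta()) / len(lead_times)

# 3. Change Failure Rate: share of deployments that caused a failure
change_failure_rate = sum(failed for *_, failed in deployments) / len(deployments)

# 4. Time to Restore: average time from incident start to service restored
restore_times = [restored - started for started, restored in incidents]
time_to_restore = sum(restore_times, timedelta()) / len(restore_times)

print(deployment_frequency, lead_time_for_changes, change_failure_rate, time_to_restore)
```

Note that nothing in these calculations asks whether any of the deployed changes actually helped users or the business.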

Overly Prescriptive Approach

Promoting these four metrics applies a one-size-fits-all DevOps model that may not suit every organisation. DORA’s rigid framework limits flexibility for companies to tailor practices to their unique needs (and the needs of all the Folks That Matter™).

Shovelling Shit Faster

Nowhere does DORA stress measuring whether software improves customers’ lives. Their model incentivises shipping code changes rapidly, without considering real-world impact. For example, faster deployment cycles can degrade a product rather than improve it if quality is not continuously validated.

DORA says nothing about ensuring “done” items provide tangible value to users. And lowering change failure rates matters little when issues originate from deficient system architectures rather than deployment processes. Faster restoration loses impact without resilient foundations.

Quality Metrics

In essence, DORA overlooks a core, fifth metric: Quality of Outcomes. This measures whether frequent code deployments actually deliver business value and satisfy customers. Velocity means little without user-centric data on software effectiveness.

Their models push maximum development speed rather than solutions optimised for real needs. Quality cannot be an afterthought. DevOps connects culture, outcomes, and technical execution. DORA would better serve the industry by emphasising value over velocity.
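
To make this concrete, and only as a hypothetical illustration (DORA defines no such fifth metric, and I’m not prescribing a calculation here), a “Quality of Outcomes” signal might look something like the following, assuming invented per-release records of user satisfaction and value validation.

```python
# Purely hypothetical sketch of a "Quality of Outcomes" signal. The fields
# (satisfaction scores, value_validated flag) are invented for illustration;
# neither DORA nor this article prescribes this calculation.
releases = [
    {"id": "r1", "satisfaction_before": 3.8, "satisfaction_after": 4.1, "value_validated": True},
    {"id": "r2", "satisfaction_before": 4.1, "satisfaction_after": 3.9, "value_validated": False},
]

# Share of releases whose value was confirmed with the people they were meant to serve
outcome_quality = sum(r["value_validated"] for r in releases) / len(releases)

# Average shift in user satisfaction across releases
satisfaction_delta = sum(
    r["satisfaction_after"] - r["satisfaction_before"] for r in releases
) / len(releases)

print(outcome_quality, satisfaction_delta)
```

However such a signal is gathered, the point stands: without it, the four velocity-centric numbers say nothing about effectiveness.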

Questionable Data Analysis

While DORA’s reports reference data from thousands of technology professionals, their research methodology and data analysis come under scrutiny. For example, their surveys may suffer from sampling issues or lack statistical significance testing of findings. Correlations with improved IT performance are presented as definitive without enough controlled studies.

Narrow Focus

DORA’s reports concentrate almost exclusively on internal software development lifecycle processes. But DevOps success depends on many human and cultural dimensions that DORA largely ignores. Collaboration, security culture, communication protocols, and learning disciplines play key roles as well.

Emphasis on Speed

In striving for faster delivery of technology changes, DORA overlooks the dangers of moving too hastily. Pushing out more deployments is not valuable if quality suffers. And accelerated velocity risks increasing technical debt and architectural risks over time.

Commercial Interests

While positioned as an impartial research organisation, DORA was founded by – and continues to promote – DevOps platform vendors. These commercial interests raise questions around potential bias in their perspectives and findings.

Conclusion

DORA has stimulated valuable conversations around improving development and operations. However, as with any prescriptive framework, organisations might choose to scrutinise its limitations and find the right DevOps model for their own needs. There is no universal approach to DevOps excellence.

Personally, I’d never recommend DORA to my clients.

Footnote

1) “Time to Restore” or “Mean Time to Restore (MTTR)” is one of the four key metrics that DORA highlights for measuring DevOps/IT performance.

It refers to the average time it takes to recover and restore service when an incident, outage, or user-impacting defect occurs in production. Some examples:

  • If a server goes down, MTTR measures how long it takes on average to get that server back up and running again.
  • If a new software update causes functionality bugs, MTTR measures the average time from when the defective update was released to when it was rolled back or fixed and normal operation was restored.
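
As a simple illustration of the averaging involved: if three user-impacting production incidents in a given month took 30, 90, and 60 minutes to resolve, the MTTR for that month would be (30 + 90 + 60) / 3 = 60 minutes.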

So in summary, Time to Restore tracks the speed of recovery from production issues and disruptions. DORA advocates minimising MTTR to improve availability and reduce downtime impacts on the business.
