Innovation dies by a thousand paper cuts (here's the fix)

Megan Dorcey | December 6, 2025

Tags: Checks, Automation, Campaigns, Engineering leadership, Standardization, Scorecard

You already know “innovation” doesn’t mean hack-week energy and big-bang rewrites. For engineering leaders, innovation is a capacity problem: do we have enough focused developer attention to ship meaningful improvements without burning the team out?

The problem is that innovation rarely gets blocked by one big thing. It gets suffocated by a pile of small things: re-litigating decisions in PRs, inconsistent service readiness, chasing dependency drift, routing follow-ups, updating docs, fixing flaky tests, and rebuilding the same “baseline checks” every time a new service shows up. Those paper cuts don’t just hurt—they compound.

A lot of that compounding pain has a name: standards drift—when “what good looks like” isn’t explicit and repeatable, and your org pays to rediscover it over and over.

The paper cuts you’re actually paying for

Standards drift is what happens when “what good looks like” exists in people’s heads, old wiki pages, or scattered PR comments—and slowly stops being true.

It shows up as:

  • Inconsistent service readiness
    Team A says “every service needs an owner + on-call + runbook.” Team B ships without it. Three months later, it’s your incident commander paying the price.
  • Duplicate decisions in every review
    PRs become a re-litigation of conventions: linting, naming, dependency policy, logging, alert thresholds, rollbacks, “we usually do it this way…”
  • Scorecards and quality programs that stall
    The standards are reasonable, but creating them feels like “one more thing.” So teams recreate tiny parts, differently, forever.
  • Security and reliability “baselines” that are aspirational
    Patch cadence, dependency freshness, secrets scanning, SLO ownership—everyone agrees, yet reality diverges.

Here’s the important part: this drift isn’t because teams don’t care. It’s because maintenance is continuous (dependencies, flaky tests, patches, small refactors… and yes, standards drift) and humans will prioritize product work over repetitive upkeep every single time.

Why this slows leaders down (even with great engineers)

When standards drift, you incur cost in three compounding ways:

1) You pay senior engineers to do junior work

Not because the work is beneath them—because the organization needs their judgment to re-decide basics.

That’s the hidden tax: high-context people doing repeatable work.

2) You multiply coordination and “babysitting”

Even if you have scripts and tooling, you still burn cycles on the last-mile work: deciding what to do, opening PRs, routing reviews, following up, updating tickets, and nudging merges.

3) You increase interruption load

Drift creates more “surprise work” (incidents, escalations, “why isn’t this standard true here?”). Attention becomes fragmented, and innovation slows.

Great teams treat developer attention as scarce and design systems around protecting it.

The gut-check: if you repeat it, standardize it

Here’s the simplest heuristic in this entire post:

If a task or decision is repeatable, engineers shouldn’t be re-deciding it every time.

That doesn’t mean “write a giant policies doc.” It means: identify the repeatable judgments and encode them as reusable standards.

Repeatable standards examples (steal these)

Below are high-signal examples you can adapt. Notice they’re not “technology choices”—they’re operating expectations.

Service ownership & readiness

  • Every service has an explicit owner (team), on-call rotation, and escalation path
  • A runbook exists and is linked (even if short)
  • A defined rollback procedure for risky changes
  • Known dependencies (at least the critical ones)
  • Deployment and alerting “minimum bar”

Reliability basics

  • SLOs/SLIs exist for critical paths
  • Alert noise targets (e.g., “no more than X pages/week” as a goal)
  • On-call handoff expectations
  • Post-incident learning artifacts (even lightweight)

Dependency and patch hygiene

  • Dependency update cadence (e.g., monthly baseline, urgent security within N days)
  • Visibility into “stale deps” and exceptions
  • Standard approach to breaking changes (owner + plan)
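
A cadence only stays true if something checks it. Here’s a minimal sketch of encoding that policy as a check, in Python, assuming you can already export each dependency’s latest release date and advisory status from your scanner or lockfile tooling; the thresholds and field names are placeholders, not a prescribed schema.

    from dataclasses import dataclass
    from datetime import date, timedelta

    # Placeholder policy: monthly baseline for routine updates,
    # security-flagged updates within 7 days.
    ROUTINE_MAX_AGE = timedelta(days=30)
    SECURITY_MAX_AGE = timedelta(days=7)

    @dataclass
    class Dependency:
        name: str
        current_version: str
        latest_version: str
        latest_release_date: date    # when the newer version shipped
        has_security_advisory: bool  # e.g., fed from your scanner's output

    def classify(dep: Dependency, today: date) -> str:
        """Return 'ok', 'stale', or 'urgent' for a single dependency."""
        if dep.current_version == dep.latest_version:
            return "ok"
        age = today - dep.latest_release_date
        if dep.has_security_advisory and age > SECURITY_MAX_AGE:
            return "urgent"
        if age > ROUTINE_MAX_AGE:
            return "stale"
        return "ok"

The specifics matter less than the shift: “stale” and “urgent” become observable states instead of opinions scattered across PR comments.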

Operational quality (the “paper cuts” killers)

  • Logging fields / tracing requirements
  • Standard health checks
  • Environment config conventions
  • “Golden” CI steps (unit tests + lint + security scan)
  • Release notes expectations

Platform expectations

  • Service catalog fields that must be accurate (owner, tier, runtime, repo link)
  • Standard annotations/metadata needed for governance and automation
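
Most of the expectations above reduce to “a known field is present and accurate for every service,” which is exactly the kind of thing a script or your portal’s checks can verify continuously. Here’s a minimal sketch in Python, assuming you can export catalog entries as plain dictionaries; the required field names are examples, not a mandated schema.

    # Example readiness check over exported catalog entries.
    # Adjust REQUIRED_FIELDS to match what your catalog actually exposes.
    REQUIRED_FIELDS = ["owner", "tier", "on_call_rotation", "runbook_url", "repo_url"]

    def missing_fields(service: dict) -> list[str]:
        """Return the required fields that are absent or empty for one service."""
        return [field for field in REQUIRED_FIELDS if not service.get(field)]

    def readiness_report(services: list[dict]) -> dict[str, list[str]]:
        """Map each non-compliant service name to the fields it is missing."""
        report: dict[str, list[str]] = {}
        for svc in services:
            gaps = missing_fields(svc)
            if gaps:
                report[svc.get("name", "<unnamed>")] = gaps
        return report

Run something like this in CI or on a schedule, and the “minimum bar” stops being tribal knowledge.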

Do this audit with your staff team this week

Run this with your staff engineers, EMs, or platform team. Timebox it. You’re hunting for the biggest repeatable friction, not creating a perfect framework.

Get the full workbook to share with your team.

Step 1: Identify your top 10 “re-litigated standards”

Ask:

  • “What do we explain in PR reviews over and over?”
  • “What does every team implement slightly differently?”
  • “What standards do we believe we have, but can’t trust across services?”
  • “When someone asks ‘how do we do this here?’, where do they look besides docs?”

Write the list.

Step 2: Score each standard with 3 numbers (0–2)

For each item, score:

  1. Explicit: Is the standard written down in a place teams actually use?
  2. Reusable: Can a team apply it without rewriting/recreating it?
  3. Enforced: Does it stay true without heroics?

Your highest leverage targets are often the ones that are important + repeated + low score.
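
If it helps to make the ranking mechanical, the scoring pass fits in a few lines of Python; the standards and scores below are made up for illustration.

    # Each standard gets three 0-2 scores: explicit, reusable, enforced.
    # Among the standards you actually care about, lower totals = higher leverage.
    standards = [
        {"name": "Every service has an owner and on-call", "explicit": 2, "reusable": 1, "enforced": 0},
        {"name": "Dependency update cadence", "explicit": 1, "reusable": 0, "enforced": 0},
        {"name": "Golden CI steps (test + lint + scan)", "explicit": 2, "reusable": 2, "enforced": 1},
    ]

    def total(standard: dict) -> int:
        return standard["explicit"] + standard["reusable"] + standard["enforced"]

    # Rank from weakest (most leverage) to strongest.
    for standard in sorted(standards, key=total):
        print(f'{total(standard)}/6  {standard["name"]}')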

Step 3: Tag each standard as one of three buckets

This helps you choose what to fix first.

  • Low-risk, high-volume (do first)
    Examples: linting, formatting, dependency bumps, doc updates, CI defaults, metadata consistency.
  • Medium-risk, moderate-volume
    Examples: small refactors, flaky test fixes, config updates.
  • High-risk (do later)
    Examples: large auth changes, production behavior changes without strong tests.

This risk segmentation matters because safe automation starts with tight scopes and measurable error rates.

Step 4: Pick 3 “standards to make real” in 30 days

Not 30. Three.

For each of the three, define:

  • The standard (plain English)
  • What evidence proves it’s true (what you can check)
  • Who owns exceptions (and how exceptions are documented)

That’s enough to start.

Four fixes that don’t create bureaucracy

This is where engineering leaders can make engineers the heroes.

1) Make standards opinionated and small

A standard that’s never applied is just a guilt trip.

The best standards:

  • reduce repeated decisions
  • are easy to validate
  • don’t require a migration to start
  • get adopted because they make teams faster

2) Make the work reviewable (not invisible)

This principle will matter even more as you adopt automation and agents:

“If it can’t be reviewed, it shouldn’t be automated.”

That’s how you protect quality and earn trust.

3) Treat standards as “context,” not bureaucracy

When you bring AI into engineering workflows, context beats cleverness. Without context (ownership, conventions, standards, dependencies), adoption fails.

Standards are part of that context stack: they reduce how often automated and AI-assisted changes are simply wrong.

4) Don’t confuse scripts with solved

Scripts automate steps. But the coordination cost still kills you: deciding the right change, opening PRs, routing reviews, following up, updating tickets.

Fixing drift means reducing the end-to-end toil, not just one step.

The path: write it down, reuse it, keep it true

This is the through-line you can use to lead now and to make future automation feel obvious.

  • Explicit: “what good looks like” is written down and discoverable
  • Reusable: teams don’t rewrite it; they apply it
  • Sustainable: standards stay true as the world changes (because maintenance is continuous)

If your org is stuck, it’s usually because you’re trying to jump from implicit to sustainable without making standards reusable first.

The hero move (for engineering leaders): protect attention

Engineers want to build. They also want to do the right thing. Drift happens when the system makes “the right thing” expensive.

Your job isn’t to demand heroics. It’s to design the system so heroics aren’t needed:

  • fewer repeated decisions
  • fewer interruptions
  • less coordination babysitting
  • more time spent on architecture, reliability, product outcomes

That’s what “protect innovation time” looks like in the real world.

If you're ready to automate standards checks and level up without the drag on your team, book a call to learn more.
