Coverage Decay

Coverage decay is the gradual erosion of test coverage that happens when a codebase changes faster than its tests can keep up — new code paths ship without tests, old tests cover paths that no longer exist, and the gap widens silently. AI coding agents accelerate decay because they ship code faster than humans write tests.

In one sentence

Coverage decay is the gap between what a codebase does and what its tests verify, growing over time because code ships faster than tests are written or updated.

Why "decay" rather than "gap"

Coverage gaps are a state. Decay is a rate. Treating coverage as a rate makes it actionable: a team can ship at a coverage decay rate of zero (every change comes with tests), or accept a positive decay rate as technical debt accruing per sprint. Most teams don't measure either.

How AI coding agents accelerate decay

AI coding agents author code faster than humans write tests. In typical agent-driven teams:

  • Code velocity increases 2–5×.
  • Test-authoring velocity stays roughly flat (humans still author most tests).
  • Coverage decay rate per sprint goes positive; the gap compounds.

Without intent-based test generation by an AI agent, the decay is structural: humans cannot keep up with agent code velocity by hand.

How to measure

Most teams measure overall coverage percentage, which hides decay. A better metric is changed surface area coverage: the percentage of code modified in a given window (e.g. the last 30 days) that has direct test coverage.

Metric                          What it tells you
Total coverage %                Baseline; useful but lagging
Changed surface area coverage   Whether your tests are keeping up with current change
Coverage decay rate             (Changes without tests) / (changes per sprint); positive = decay

A team can have 90% total coverage and 30% changed-surface-area coverage. The first is a vanity metric; the second is the real signal.
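The two changed-surface metrics above can be computed from a list of recent changes. Here is a minimal sketch; the `Change` record, its fields, and the sample data are all hypothetical, not the schema of any real coverage tool:

```python
from dataclasses import dataclass

@dataclass
class Change:
    """One code change shipped in the measurement window (hypothetical record)."""
    path: str
    lines_changed: int
    has_direct_tests: bool

def changed_surface_area_coverage(changes: list[Change]) -> float:
    """Fraction of recently changed lines that have direct test coverage."""
    total = sum(c.lines_changed for c in changes)
    covered = sum(c.lines_changed for c in changes if c.has_direct_tests)
    return covered / total if total else 1.0

def coverage_decay_rate(changes: list[Change]) -> float:
    """Share of changes in the window that shipped without tests; positive = decay."""
    if not changes:
        return 0.0
    untested = sum(1 for c in changes if not c.has_direct_tests)
    return untested / len(changes)

# Hypothetical sprint: one tested change, two untested ones.
sprint = [
    Change("api/users.py", 120, True),
    Change("api/billing.py", 300, False),
    Change("web/checkout.ts", 80, False),
]
print(f"changed-surface coverage: {changed_surface_area_coverage(sprint):.0%}")  # 24%
print(f"decay rate: {coverage_decay_rate(sprint):.0%}")  # 67%
```

Note that a repo with 90% total coverage could still produce exactly this 24% figure if the recent changes landed in untested files.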

How to halt decay

  • Generate tests in the same loop as code — agentic QA, where the coding agent invokes an agent-native QA tool to generate tests for its own changes.
  • Block merges on changed-surface-area coverage — not on total coverage.
  • Audit tier placement — see if tests covering changed code are in the right CI tier (pre-merge vs scheduled).
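The second bullet, blocking merges on changed-surface-area coverage, can be sketched as a small CI gate. The 80% threshold and the exit-code convention are assumptions for illustration, not values from any specific CI system:

```python
import sys

THRESHOLD = 0.80  # hypothetical gate: 80% of changed lines must be directly tested

def passes_gate(changed_lines: int, covered_changed_lines: int,
                threshold: float = THRESHOLD) -> bool:
    """True if the change set's changed-surface coverage meets the threshold.

    An empty change set passes trivially; note this gate ignores total
    coverage entirely, by design.
    """
    if changed_lines == 0:
        return True
    return covered_changed_lines / changed_lines >= threshold

if __name__ == "__main__":
    # In CI these numbers would come from a diff-coverage report.
    changed, covered = 500, 420
    if not passes_gate(changed, covered):
        print(f"BLOCK: changed-surface coverage {covered / changed:.0%} "
              f"below {THRESHOLD:.0%}")
        sys.exit(1)  # non-zero exit fails the merge check
    print("PASS")
```

Gating on the changed surface rather than the total means a large, well-tested legacy codebase cannot mask untested new code.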

What coverage decay is not

  • Not the same as low coverage — a team can have low total coverage and zero decay if every new change ships with tests.
  • Not a measurement of test quality — only of presence. A passing test that doesn't actually exercise the change still counts as coverage in most tools.

Related terms