Incident Command Platform

Incident Response Metrics: What to Measure and Industry Benchmarks

By Mark Lynd · Published April 11, 2026 · 13 min read

Incident response metrics are the quantitative measures used to evaluate the speed, effectiveness, and maturity of an organization's cybersecurity incident response capability. The five metrics that matter most are mean time to detect (MTTD), mean time to respond (MTTR), mean time to contain (MTTC), regulatory compliance rate, and exercise frequency. These metrics provide the data needed to justify security investments, identify capability gaps, benchmark against industry peers, and demonstrate due diligence to regulators, auditors, and boards of directors. Without measurement, incident response improvement is guesswork.

This guide defines each metric, provides industry benchmark data from published research, and explains how to implement a measurement program that produces actionable intelligence rather than vanity dashboards. For definitions of key terms, see the IR-OS glossary.

What are the five essential incident response metrics?

| Metric | Definition | Industry Median | Best-in-Class Target | Primary Source |
| --- | --- | --- | --- | --- |
| MTTD (Mean Time to Detect) | Average time from initial compromise to detection by the organization | 204 days | < 7 days | IBM Cost of a Data Breach Report |
| MTTR (Mean Time to Respond) | Average time from detection to first response action (triage, classification, assignment) | 73 days | < 4 hours | IBM Cost of a Data Breach Report |
| MTTC (Mean Time to Contain) | Average time from detection to confirmed containment of the threat | 80 days | < 24 hours | Verizon DBIR, IBM |
| Regulatory Compliance Rate | Percentage of notification obligations met within required deadlines | Not widely reported | 100% | Internal measurement |
| Exercise Frequency | Number of tabletop or functional exercises conducted per year | 1 per year | 4+ per year | NIST SP 800-61, PCI DSS |
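The three time-based metrics in the table are simple averages over per-incident timestamps. As a minimal sketch (the incident records and field names such as `compromised_at` are illustrative assumptions, not a standard schema):

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records; field names are assumptions, not a standard schema.
incidents = [
    {
        "compromised_at": datetime(2026, 1, 1, 9, 0),
        "detected_at":    datetime(2026, 1, 4, 9, 0),   # detection confirmed
        "responded_at":   datetime(2026, 1, 4, 12, 0),  # first response action
        "contained_at":   datetime(2026, 1, 5, 9, 0),   # containment verified
    },
    {
        "compromised_at": datetime(2026, 1, 10, 0, 0),
        "detected_at":    datetime(2026, 1, 11, 0, 0),
        "responded_at":   datetime(2026, 1, 11, 6, 0),
        "contained_at":   datetime(2026, 1, 12, 0, 0),
    },
]

def mean_hours(deltas):
    """Average a sequence of timedeltas, expressed in hours."""
    return mean(d.total_seconds() / 3600 for d in deltas)

# Per the definitions above: MTTD runs from compromise to detection, while
# MTTR and MTTC both start at detection.
mttd = mean_hours(i["detected_at"] - i["compromised_at"] for i in incidents)
mttr = mean_hours(i["responded_at"] - i["detected_at"] for i in incidents)
mttc = mean_hours(i["contained_at"] - i["detected_at"] for i in incidents)
```

Note that MTTR and MTTC share the same start point (detection), matching the definitions in the table; picking different start points per metric is a common source of incomparable numbers.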

Why is mean time to contain more important than mean time to detect?

Detection gets most of the attention in security operations, but containment time has the strongest correlation with total breach cost. The reason is straightforward: every hour between detection and containment is an hour during which the threat actor continues to operate inside the environment -- exfiltrating data, establishing persistence, moving laterally, and encrypting systems.

Research from the IBM Cost of a Data Breach Report consistently shows that organizations containing breaches in under 30 days from detection save substantial amounts compared to those taking longer. The relationship between containment speed and cost is not linear -- it is exponential in the early hours. The first 24 hours of containment delay are significantly more costly per hour than subsequent days because the blast radius is still expanding.

This is why MTTC should be the primary metric on the CISO's dashboard, not MTTD. Detection is necessary but not sufficient. Speed of containment is what determines the actual business impact of an incident.

Measurement precision matters: MTTC must be measured from the moment the incident is confirmed (not from initial alert, which may be a false positive) to the moment containment is verified (not assumed). Using inconsistent start and end points across incidents produces metrics that are not comparable and cannot be trended over time.
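One way to enforce those consistent endpoints is to make the calculation refuse incomplete records. A sketch, with illustrative field names:

```python
from datetime import datetime
from typing import Optional

def mttc_hours(confirmed_at: Optional[datetime],
               containment_verified_at: Optional[datetime]) -> float:
    """Containment time in hours, measured only between verified endpoints.

    Starts at incident confirmation (not the first alert, which may be a
    false positive) and ends at verified containment (not assumed).
    """
    if confirmed_at is None:
        raise ValueError("incident not confirmed; an unconfirmed alert does not start the clock")
    if containment_verified_at is None:
        raise ValueError("containment not verified; assumed containment does not stop the clock")
    return (containment_verified_at - confirmed_at).total_seconds() / 3600
```

Failing loudly on missing timestamps is deliberate: silently substituting the alert time or an assumed containment time is exactly what makes MTTC figures non-comparable across incidents.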

How do you measure MTTD when most breaches are discovered by third parties?

One of the most uncomfortable realities in incident response is that a significant percentage of breaches are discovered not by the victim organization but by external parties -- law enforcement, security researchers, customers, or the threat actors themselves (via ransom demands). The Verizon Data Breach Investigations Report (DBIR) has consistently documented this gap.

When calculating MTTD, organizations must distinguish between:

  1. Internally discovered incidents -- detected by the organization's own monitoring, security operations team, or employees.
  2. Externally discovered incidents -- disclosed by law enforcement, security researchers, customers, or the threat actors themselves.

Tracking MTTD by discovery source provides a much more useful signal than a blended average. If 60% of your incidents are discovered externally, your blended MTTD is misleading -- the real story is that your detection program is missing the majority of incidents entirely. For further analysis of detection patterns, the Verizon DBIR publishes annual data on discovery methods by industry and attack type.
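Segmenting by discovery source is a one-line grouping once each incident records how it was found. A sketch with illustrative records and dwell times:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical incidents tagged with discovery source and dwell time in days.
incidents = [
    {"source": "internal", "dwell_days": 6},
    {"source": "internal", "dwell_days": 10},
    {"source": "external", "dwell_days": 210},
    {"source": "external", "dwell_days": 180},
]

by_source = defaultdict(list)
for incident in incidents:
    by_source[incident["source"]].append(incident["dwell_days"])

# MTTD per discovery source vs. the blended average.
mttd_by_source = {src: mean(days) for src, days in by_source.items()}
blended_mttd = mean(i["dwell_days"] for i in incidents)
```

In this toy data the blended MTTD of 101.5 days hides the real signal: internally discovered incidents are found in about 8 days, while externally disclosed ones sit undetected for roughly 195.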

What benchmarks should organizations target for each metric?

Benchmarks must be understood in context. A 500-person manufacturer and a 50,000-person financial institution have fundamentally different resource levels, threat profiles, and regulatory obligations. The table below provides tiered benchmarks based on organizational maturity:

| Metric | Baseline (Year 1) | Developing (Year 2-3) | Mature (Year 4+) | Elite |
| --- | --- | --- | --- | --- |
| MTTD | < 120 days | < 30 days | < 7 days | < 24 hours |
| MTTR | < 48 hours | < 12 hours | < 4 hours | < 1 hour |
| MTTC | < 14 days | < 72 hours | < 24 hours | < 4 hours |
| Regulatory Compliance Rate | 90%+ | 95%+ | 100% | 100% |
| Exercise Frequency | 1/year | 2/year | 4/year | Monthly |
| Corrective Action Completion | 50%+ within deadline | 75%+ within deadline | 90%+ within deadline | 95%+ within deadline |
| IR Plan Review Frequency | Annual | Semi-annual | Quarterly | After every incident |

These targets are informed by data from the IBM Cost of a Data Breach Report, the Verizon DBIR, and benchmarking data collected across IR-OS tabletop exercises. Organizations should set targets based on their current baseline and improve incrementally rather than attempting to jump from baseline to elite in a single year.
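Grading a measured value against the tiered benchmarks is a simple threshold lookup. A sketch for MTTC, with thresholds taken from the table above (the function and list names are illustrative):

```python
# Upper bounds in hours for each MTTC tier, mirroring the benchmark table:
# Elite < 4h, Mature < 24h, Developing < 72h, Baseline < 14 days.
MTTC_TIERS_HOURS = [
    (4, "Elite"),
    (24, "Mature"),
    (72, "Developing"),
    (14 * 24, "Baseline"),
]

def mttc_tier(hours: float) -> str:
    """Return the maturity tier a measured MTTC falls into."""
    for bound, tier in MTTC_TIERS_HOURS:
        if hours <= bound:
            return tier
    return "Below baseline"
```

An organization measuring a 100-hour MTTC would grade as Baseline here, which suggests a realistic next target of the Developing bound (72 hours) rather than a jump straight to Elite.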

How do you build a metrics program that drives improvement?

Collecting metrics without a feedback loop is data hoarding, not measurement. A metrics program that drives improvement requires four components:

  1. Consistent measurement methodology -- Define the start point, end point, and data sources for each metric. Document these definitions and apply them consistently across all incidents and exercises. Changing definitions between measurement periods invalidates trend analysis.
  2. Regular reporting cadence -- Report metrics monthly to the security leadership team and quarterly to executive leadership and the board. The reporting cadence creates accountability and keeps incident response performance visible.
  3. Root cause analysis on metric trends -- When a metric moves in the wrong direction, investigate why. Was it a specific incident that skewed the average? A staffing change? A tool gap? Metrics identify the problem; root cause analysis identifies the fix.
  4. Corrective action tracking -- Every metric that misses its target should generate a corrective action with an owner and deadline. The corrective action completion rate is itself a metric that measures organizational follow-through. See the after-action review template for structuring corrective actions.
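The corrective action completion rate described in step 4 can be computed directly from action records. A sketch, with illustrative field names:

```python
from datetime import date

# Hypothetical corrective actions; "completed" is None while the action is open.
actions = [
    {"deadline": date(2026, 3, 1), "completed": date(2026, 2, 20)},  # on time
    {"deadline": date(2026, 3, 1), "completed": date(2026, 3, 10)},  # late
    {"deadline": date(2026, 4, 1), "completed": None},               # open, past due
]

def completion_rate(actions, as_of):
    """Fraction of actions due by `as_of` that were completed on or before deadline."""
    due = [a for a in actions if a["deadline"] <= as_of]
    if not due:
        return 1.0
    on_time = sum(
        1 for a in due
        if a["completed"] is not None and a["completed"] <= a["deadline"]
    )
    return on_time / len(due)

rate = completion_rate(actions, as_of=date(2026, 5, 1))
```

Counting only actions already past their deadline keeps the metric honest: open actions that are not yet due neither inflate nor deflate the rate.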

The purpose of incident response metrics is not to produce a dashboard for the board. It is to create a feedback loop that makes the next incident faster, cheaper, and less damaging than the last one. If your metrics do not change behavior, they are not working.

What secondary metrics are worth tracking beyond the core five?

Once the core five metrics are established and trending, organizations with mature programs add secondary metrics that provide deeper operational insight.

Measure what matters with IR-OS

IR-OS calculates MTTD, MTTR, MTTC, and regulatory compliance rate automatically from incident data and exercise results. Track trends, benchmark against peers, and report to the board with confidence.

Start free