The same incident, on Slack + Confluence + email versus on IR-OS.
Watch where the time, money, and senior-team attention actually go. Most teams are surprised by how much of an incident is spent searching for resources and re-litigating decisions, not making them.
Scenario: confirmed ransomware on a production fileserver, 11:00am Tuesday. Mid-market private company, US-incorporated, EU customers, payment-card data on the affected system, NY DFS in scope.
Time to Command is the gap that matters. Detection is fast - tools are abundant. Coordination is slow - and that is where most incidents bleed.
Detection is when EDR fires. Declaration is when command starts. Most organizations lose 30-90 minutes between the two, debating whether it is real, who has authority to escalate, and which threshold has been crossed. The timeline below starts at Min 0 = detection, but watch what happens to the right column at Min 1 versus the left column at Min 31.
EDR alert fires.
SOC analyst sees ransomware indicator. Pings IR Lead in Slack DM.
- IR Lead is on PTO. Slack DM goes unread.
- Backup IR Lead is in a customer meeting, phone on silent.
- Analyst escalates to manager. Manager is between flights.
EDR alert fires; incident declared in IR-OS.
Hash chain starts. Append-only ledger captures every event from this moment forward.
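A hedged sketch of what an append-only, hash-chained ledger looks like under the hood - `Ledger` and its fields are illustrative names, not IR-OS's actual API. Each entry commits to the digest of the entry before it, so no event can later be altered or reordered without breaking every subsequent hash:

```python
import hashlib
import json
import time

class Ledger:
    """Minimal append-only hash chain: each entry commits to its predecessor."""

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> str:
        # The genesis entry links to a well-known zero digest.
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"event": event, "prev": prev, "ts": time.time()}
        # Canonical JSON so the digest is reproducible by any verifier.
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})
        return digest

ledger = Ledger()
ledger.append({"type": "detection", "source": "EDR"})
ledger.append({"type": "declaration", "by": "SOC analyst"})
```

Tampering with any earlier entry changes its digest, which then no longer matches the `prev` recorded by its successor - that mismatch is what makes the ledger tamper-evident.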
Trying to reach the IR team.
No single contact list. Phone numbers buried in email signatures and someone's old Notion page.
- CISO mobile not answering - in a board prep meeting.
- Outside counsel main number rings to voicemail.
- Cyber insurance broker contact found in CFO's email from 2 years ago.
- PR firm contact lost. Was it Edelman? Was that last year?
IRC roles auto-paged with backups.
Incident Commander, Comms Lead, Legal Liaison, Technical Lead, Scribe, Executive Sponsor. Primary unavailable? Backup paged automatically. Outside counsel, broker, and PR firm contacts auto-attached to the incident.
Hunting for the cyber insurance policy.
"Where is the actual policy PDF?" Nobody knows. Insurance is a CFO problem most days, not a CISO problem.
- CFO assistant searches SharePoint - finds three different policy PDFs from three years.
- Are we sure the current one is in force? Renewal was March, right? Or April?
- Ransomware exclusion page nobody can read on a phone.
- First-notice clause: 24 hours? 48 hours? Different policies say different things.
- Deductible amount? Sub-limits for ransom payment? Nobody can answer.
Insurance policy + first-notice live in the platform.
Active policy parsed at upload: deductible, ransomware coverage, sub-limits, first-notice clock surfaced as a live timer. Broker and carrier contacts attached. FNOL form auto-populated from incident metadata.
Hunting for the IR plan.
"We have a plan, right?" Three people search Confluence in parallel.
- Found - last edited 14 months ago. Pre-dates the SaaS migration.
- References a SOC vendor we offboarded last year.
- Names two responders who left the company.
- "Step 4: Engage XYZ." XYZ is no longer an MSSP we use.
- Notification deadlines listed are out of date with current SEC rules.
Living IR plan, scenario-keyed, on screen.
Ransomware section auto-loaded. Mapped to NIST 800-61, ISO 27035, CISA. After-action updates write back to the plan automatically - the plan stays current.
Coordination breaks first.
Six people in Slack asking "who is doing what?" CISO is in three Zoom rooms.
- Two parallel Slack threads spawn - #incidents and a private DM group.
- "Are we calling this an incident yet?" Scope debate consumes 8 minutes.
- Nobody is sure who has authority to take production fileservers offline.
- Decisions getting made by whoever talks loudest in the call.
- Two of the six on-call responders never got the page (Slack DMs muted).
Coordination tight; single source of truth.
Everyone sees the same status, same next actions, same regulatory clocks. Authority for each decision is named on the role card. CISO is making decisions, not synthesizing.
Leadership wants an update. All of them. Now.
CEO, CFO, COO, board chair, and head of HR all texting at the same time. Each wants their own version.
- CEO: needs a 1-line summary for an investor he is on a call with.
- CFO: needs cost exposure and insurance status for cash planning.
- Board chair: wants to know if the disclosure clock has started.
- COO: asking if she should pause customer onboarding.
- Comms Lead writing four custom updates by hand. Fact drift between versions within 20 minutes.
One executive update card, multiple audiences.
Same source of truth surfaces a CEO 1-liner, CFO financial exposure card, board materiality status, and COO operational impact view. All facts derived from the chain. No drift, no rewrites.
Hunt for outside counsel and PR firm contacts.
"What's the breach hotline number for the firm we use?" "Was it Munger or Latham last year?"
- Outside counsel breach line: found in an old retention letter that lived on the GC's since-wiped laptop.
- Privileged comms channel never set up. Defaulting to standard email - privilege now in question.
- PR firm: contract lapsed last year. Replacement firm not retained yet.
- Spokesperson list never finalized. CEO assumes it's the CISO. CISO assumes it's the GC.
Outside counsel + PR + spokesperson all on the role card.
Outside counsel breach line on screen. Privileged channel pre-configured. PR firm primary + backup. Spokesperson confirmed at plan creation, not improvised at minute 51.
Still trying to reach the carrier.
First-notice clock now uncertain - was it 24 hours or 48? The team is repeating facts to the broker that were already shared in Slack.
- Broker forwards to carrier claims line. Voicemail.
- Claims adjuster requests a structured timeline. We do not have one.
- Adjuster asks for the IR plan we are following. We are not sure which version.
Carrier notified inside the platform.
FNOL submitted with structured timeline auto-attached. First-notice clock satisfied. Hash-chained confirmation captured.
Holding statement debate erupts.
Four parallel versions in flight: marketing's, legal's, CEO's, customer success's. Track-changes nightmare across email and Google Docs.
- Marketing pushes for "incident" language. Legal pushes for "potential incident."
- CEO wants to say nothing yet. CFO wants something to share with the lender on a call in 30 min.
- Three different "final" PDFs end up in three different inboxes.
- One version accidentally cc's a customer-facing distribution list. Recall sent. Recall fails.
Holding statement signed off, exported clean.
Started from "Ransomware - external holding statement" template. Legal + Comms Lead signoffs hash-chained at sha256. PDF + DOCX exported with provenance. Subscriber sends from their own domain.
Customer Success: "Which customers do we notify first?"
Nobody has a sorted list. Sorted by what - contract value, data sensitivity, regulatory exposure, prior incident history? All four matter; none are encoded anywhere.
- Top 10 customers list pulled from Salesforce - last refresh 6 weeks old.
- EU customers (GDPR exposure) flagged manually by reading account names.
- Healthcare customers (HIPAA BAA in force) - no flag, manually checked.
- Two customers under MSA breach-notification-within-24-hours clauses missed entirely.
Customer notification matrix pre-built.
Customers tagged at onboarding by GDPR exposure, HIPAA BAA, MSA notification clause, contract value. Notification order surfaces automatically. Top-tier customers and BAA holders flagged in red.
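One way such a notification matrix can be encoded - field names and the priority rule below are assumptions for illustration, not the product's actual schema. The sort key puts the tightest contractual deadline first, then regulatory exposure, then contract value:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Customer:
    name: str
    contract_value: int                     # annual contract value, USD
    gdpr: bool = False                      # EU personal data in scope
    hipaa_baa: bool = False                 # HIPAA BAA in force
    msa_notice_hours: Optional[int] = None  # contractual notification deadline

def notification_order(customers: List[Customer]) -> List[Customer]:
    """Tightest contractual deadline first, then regulatory exposure, then value."""
    def key(c: Customer):
        deadline = c.msa_notice_hours if c.msa_notice_hours is not None else 10**6
        return (deadline, not (c.gdpr or c.hipaa_baa), -c.contract_value)
    return sorted(customers, key=key)

customers = [
    Customer("Acme", 500_000),
    Customer("MedCo", 120_000, hipaa_baa=True),
    Customer("EuroRetail", 300_000, gdpr=True, msa_notice_hours=24),
]
ordered = notification_order(customers)
# EuroRetail (24h MSA clause) first, then MedCo (BAA), then Acme.
```

The point of encoding the tags at onboarding is exactly this: the ordering becomes a pure function of data already in the system, not a judgment call made at minute 51.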
"Wait, when does the SEC clock start?"
Nobody is sure if this hit materiality. Slack debate.
- Materiality determination is being made informally by people not authorized to make it.
- Counsel asks: "Did we already determine materiality? When?" Nobody can point to the decision.
- NY DFS 72-hour clock - did anyone start it? Started from when - detection or declaration?
- GDPR Art 33 - same question. Different "awareness" definition. More confusion.
- HIPAA notification - is the affected fileserver in the BAA scope? Nobody is sure.
Five regulatory clocks running side by side.
SEC Item 1.05 materiality as a structured decision (assigned to GC). GDPR Art 33, NY DFS 72hr, HIPAA, state breach laws all live with their own start moments captured. Insurer first-notice timer also visible.
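The "own start moments" point is the crux: each clock anchors to a different event. A minimal sketch with illustrative timestamps and the commonly cited notification windows (actual obligations depend on the policy and statute; this is not legal advice):

```python
from datetime import datetime, timedelta, timezone

detection = datetime(2024, 6, 4, 11, 0, tzinfo=timezone.utc)  # EDR fires
declaration = detection + timedelta(minutes=1)                # command starts

# Each clock anchors to its own start moment.
clocks = {
    "NY DFS 23 NYCRR 500 notice":    (declaration, timedelta(hours=72)),
    "GDPR Art. 33 authority notice": (declaration, timedelta(hours=72)),
    "Insurer first notice (policy)": (detection, timedelta(hours=24)),
}

def deadlines(clocks: dict) -> dict:
    """Absolute due times, soonest first."""
    due = {name: start + window for name, (start, window) in clocks.items()}
    return dict(sorted(due.items(), key=lambda kv: kv[1]))
```

Running the clocks side by side makes the soonest deadline - here the insurer's first-notice window, anchored to detection rather than declaration - impossible to miss.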
Re-litigating decisions.
Slack thread loops back: "Wait, who decided to take the fileserver offline?" Nobody can find the message.
- Decision attributed to nobody. Three people remember three different versions of the call.
- Production team angry - "you broke the SaaS environment for 4 hours, who authorized that?"
- Comms Lead asking if the holding statement is approved. Nobody has the latest version.
- HR asking if employees are getting an update. The third all-hands brief draft is on someone's laptop.
Customer notification draft + employee brief in parallel.
"US state breach notification - general" template cloned, privilege flag set to attorney-client. "Employee all-hands brief" template auto-populates from chain. Both routed to the right signoff queue.
Frantic reconstruction for the board call.
CEO needs a 1-page summary for the emergency board call at 4pm. Comms Lead and CISO scrambling.
- Scrolling through 14 Slack channels and 6 Zoom recordings to reconstruct yesterday.
- Three people produce three different timelines, off by 20 minutes on key events.
- Materiality status: unclear. Privilege status of board materials: unclear.
- Outside counsel disagrees with internal version of events on at least two decisions.
- 1-pager hits 4 pages. Then 7. Then "actually let's just talk through it."
Board emergency brief auto-populated.
Template pre-fills the timeline, signoff trail, regulatory clock status, and material impact assessment - all from the hash chain. Comms Lead reviews, GC signs off, ships in 20 minutes.
"Did we tell the carrier yet?"
CFO asks. Nobody is sure if the FNOL was officially submitted or just discussed.
- Email to broker found - but no formal claim number assigned yet.
- Carrier requests structured timeline. Now we are reconstructing it for them too.
- Out of compliance with the 24/48-hour first-notice clause? Unclear. Coverage could be at risk.
- Broker chasing IR plan version followed during the response. We do not have one to send.
FNOL claim number live on the incident card.
First-notice satisfied at minute 59. Claim number, adjuster name, and structured timeline already shared. Insurer claim handler can pull the chain on demand.
Customer breach letter v9 in track changes.
Five reviewers, three jurisdictions, two PDF mishaps. Nobody is sure which version is the latest.
- Outside counsel sends redlines via email attachment. Comms Lead applies them in Word. Marketing pushes back on tone.
- Privilege bleed: a draft is forwarded outside privileged channel. Privilege now in question.
- v9 has a typo in the carrier name. v10 fixes it but loses two earlier corrections.
- Final PDF differs from final DOCX. Subscriber unsure which one to send.
Customer letter signed off, exported clean.
Counsel approval hash-chained. PDF + DOCX exported with provenance and matching sha256. Privilege preserved by structural channel model. Subscriber sends from their own domain.
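The "exported with provenance and matching sha256" claim can be illustrated with plain hashing: record each exported artifact's digest next to the approval, and anyone holding a file can recompute and compare. The artifact bytes and field names below are placeholders, not real export contents:

```python
import hashlib

def artifact_digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Placeholder bytes standing in for the rendered exports.
pdf_bytes = b"%PDF-1.7 ... customer breach letter, final ..."
docx_bytes = b"PK ... customer breach letter, final ..."

# Provenance record stored alongside the hash-chained approval.
provenance = {
    "pdf_sha256": artifact_digest(pdf_bytes),
    "docx_sha256": artifact_digest(docx_bytes),
    "approved_by": "outside counsel",
}

def verify_export(data: bytes, recorded: str) -> bool:
    """A recipient recomputes the digest and compares it to the record."""
    return artifact_digest(data) == recorded
```

This is what ends the v9/v10 problem: "latest version" stops being a filename convention and becomes a digest check.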
After-action review put on the calendar for "next month."
Will mostly be reconstructed from memory and Slack scrolls. Will not update the IR plan.
- Calendar invite sent for 4 weeks out. Two key responders already on PTO that week.
- "Did we send the EU residents notification?" Nobody is sure. Searching email.
- Lessons learned theatre: a slide deck nobody re-reads.
- Plan does not update. Readiness score does not change. Same gaps will hit the next incident.
Improvement + Proof. Auto-AAR generated from the chain.
8-section structured AAR: executive summary, timeline, what worked, gaps with severity, SLA compliance, regulatory status, recommendations. Recommendations write back to the IR plan automatically by Day 7. Readiness score recalculates with a measurable delta. The next tabletop scenario is auto-derived from the gaps you actually hit, so the next drill rehearses the next failure mode, not a generic one.
Three regulators want the timeline.
SEC, the state AG, and the carrier all open inquiries. Three different production formats requested.
- Forensic team reconstructs from Slack export, Confluence page history, email threads, Zoom recordings.
- Three timeline versions produced. Off by 12-40 minutes on key events.
- Privilege classification done by hand on every email - lawyers bill by the hour.
- Some Slack messages auto-deleted (90-day retention). Gaps in the timeline acknowledged in cover letter.
- Cover letter language carefully managed - any mismatch with carrier or counsel version creates new exposure.
Hand each requester one /verify URL.
Public chain verifier. No account needed. Same URL satisfies SEC, state AG, and carrier. Privileged drafts stay structurally privileged; disclosable record is hash-verified and signed.
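A public verifier needs no trust in the producer: it simply recomputes every digest and checks the links. A minimal sketch of that check, assuming an illustrative entry shape for a simple sha256 hash chain:

```python
import hashlib
import json

def entry_hash(event: dict, prev: str, ts: float) -> str:
    body = {"event": event, "prev": prev, "ts": ts}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def verify_chain(entries: list) -> bool:
    """Recompute every digest and confirm each entry links to its predecessor."""
    prev = "0" * 64
    for e in entries:
        if e["prev"] != prev or entry_hash(e["event"], e["prev"], e["ts"]) != e["hash"]:
            return False
        prev = e["hash"]
    return True

# Build a tiny two-entry chain; any later tampering breaks verification.
entries, prev = [], "0" * 64
for i, event in enumerate([{"type": "detection"}, {"type": "declaration"}]):
    h = entry_hash(event, prev, float(i))
    entries.append({"event": event, "prev": prev, "ts": float(i), "hash": h})
    prev = h
```

Because the check is deterministic, the same URL can satisfy every requester: each one runs the identical verification against the identical record.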
Plaintiffs' counsel sends a discovery letter.
"Produce all communications regarding the decision to pay/not pay the ransom, who authorized notification language, and how the materiality determination was made."
- No defensible chain. No structured signoff record.
- Disputes about which Slack messages were deleted vs auto-archived. Plaintiffs argue spoliation.
- Privilege log produced manually - 600 line items, multiple privilege challenges from opposing counsel.
- Outside counsel bills 200+ hours reconstructing an authoritative timeline from primary sources.
- Settlement leverage erodes because we cannot prove what we did when.
Discovery answered with the chain.
Document production happens in hours, not weeks. Privilege log auto-generated from privilege flags on the chain. No spoliation argument - the chain is append-only and tamper-evident. Settlement posture is strong.
Cyber insurer disputes the claim.
"You did not follow your published IR plan during the response. Coverage may be reduced or denied."
- Hard to argue. Nobody can prove which version of the plan was followed.
- First-notice timing in dispute - was it 36 hours? 48? Carrier says we were late.
- Forensic firm engaged was not on the carrier's panel - non-panel costs not covered.
- Ransom payment authorization process not documented to OFAC standards. SDN screening evidence missing.
- Settlement coverage offered at 60 cents on the dollar. Take it or litigate.
Cyber insurer claim approved on the chain.
Plan-followed evidence is the chain itself. Each runbook step shows completion timestamp + role + sha256 of state at completion. First-notice clock satisfied with hash-chain proof. OFAC screening logged. Panel firm used. Claim paid.
Where the actual cost lives
Without IR-OS
- ~14 min trying to reach the IR team. Half are unavailable, no backup paged.
- ~20 min hunting for the cyber insurance policy. First-notice clock running.
- ~12 min on an IR plan last edited 14 months ago, naming people who left.
- ~25 min writing four custom executive updates by hand. Fact drift within 20 minutes.
- ~22 min hunting for outside counsel and PR firm contacts. Privileged channel never set up.
- ~45 min, 4 reviewers on a single holding statement. Recall fails. Privilege bleeds.
- ~6 senior-team hours Day 1 reconstructing the timeline for the board call.
- $50K-$500K coverage at risk if first-notice was missed.
- $200K-$500K outside counsel bill reconstructing for SEC, state AG, and carrier on Day 30.
- Six-figure plaintiffs' discovery cost on Day 60. Spoliation argument lands.
- $1M+ coverage shortfall on Day 90. Carrier disputes plan-followed evidence.
- Plan never updates. Same gaps hit the next incident.
With IR-OS
- 0 minutes hunting. Every resource keyed to incident type and on screen at minute zero.
- 1 signoff pass. Privilege chain captured at sha256 granularity. No re-litigation.
- One /verify URL. Same artifact answers SEC, state AG, carrier, and plaintiffs' counsel.
- Document production in hours, not weeks. Privileged drafts stay structurally privileged.
- Insurance claim approved on the chain. Plan-followed evidence is the chain itself.
- Auto-AAR writes back to the plan. The next incident does not hit the same gaps.
Workstream coverage during the live incident
Real incidents are not sequential. Five workstreams run in parallel, and most response failures are cross-functional breakdowns - not technical failures. How each lane is handled:
| Workstream | Without IR-OS | With IR-OS |
|---|---|---|
| Technical containment | EDR + SOC ad hoc. No defined commander for go/no-go on production-impacting actions. | Technical Lead role assigned. Containment authority pre-defined. Actions logged to chain. |
| Business continuity | COO and CFO improvising. Customer Success guesses at notification order from a stale Salesforce list. | Customer notification matrix pre-built (GDPR, HIPAA BAA, MSA clauses tagged at onboarding). Operational impact card live. |
| Communications | Four versions of the holding statement in flight across email and Google Docs. Privilege bleeds. | 23 attorney-shaped templates. Privileged channel pre-configured. Hash-chained signoffs. Watermarked SAMPLE exports. |
| Legal & regulatory | Outside counsel breach line lost. Five regulatory clocks debated in Slack with no defined start moment. | Outside counsel + carrier on the role card. SEC, GDPR, NY DFS, HIPAA, state breach clocks running side-by-side from declaration. |
| Executive decisioning | CEO, CFO, COO, board chair each get their own custom one-liner. Fact drift within 20 minutes. | Single executive update card with role-specific views. All facts derive from the chain. No drift. |
Run this same timeline against your team in 30 minutes.
The 7-day trial is the working product, not a guided tour. Pick a scenario, run a pressure drill, walk through every step on the right column. If your team is in the left column, you will know in 30 minutes.
Start 7-day trial · Card required, cancel anytime before day 7 · 30-day money-back guarantee