Deep Layer Security Advisory
Awareness
2026-03-09

What Happens in the First 24 Hours of a Cyber Breach?

Part of the Incident Response Deep-Dive Guide

The first 24 hours after discovering a cyber breach are defined by chaos, incomplete information, and high-stakes decisions made under extreme pressure. Every minute counts, yet rushing without a structured approach leads to evidence destruction, expanded blast radius, and communication failures that compound the damage. Organizations that survive breaches with their reputation and operations intact almost always share one trait: they knew what to do before the clock started ticking.

This article walks through the critical first 24 hours of a cyber breach in four phases, outlining what should happen at each stage, the key decisions responders face, and the mistakes that most commonly derail an effective response. Whether you are a CISO building readiness or an IT leader who wants to understand what a real incident looks like, this timeline provides a realistic picture of breach response in practice.

Hours 0-1: Detection, Validation, and Initial Triage

The first hour is about confirming that you actually have an incident and mobilizing the right people. Detection may come from an EDR alert, a SIEM correlation, a user report, or an external notification from law enforcement or a threat intelligence provider. The immediate task is validation: is this a true positive? What is the initial scope? A single compromised endpoint behaves very differently from a domain controller showing signs of lateral movement, and the response posture must match the threat.

During this window, the incident commander should be identified and given authority to make decisions. This is not the time for committee-based governance. The commander activates the incident response team, establishes an out-of-band communication channel (since the adversary may be monitoring corporate email and chat), and begins documenting every action in an incident log. One of the most consequential decisions in this hour is whether to immediately contain or to observe. Premature containment can tip off the attacker, causing them to accelerate destruction or deploy additional persistence mechanisms. Insufficient containment lets the adversary continue moving laterally. The right answer depends on the threat type, the apparent stage of the attack, and the organization's risk tolerance.
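One lightweight way to keep that incident log is sketched below as an append-only JSON-lines file, assuming a simple local record is acceptable in the first hour; the field names, file location, and example entry are illustrative rather than prescriptive.

```python
# Minimal sketch of an append-only incident log; fields and path are illustrative.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("incident_log.jsonl")  # placeholder location

def log_action(actor: str, action: str, detail: str) -> None:
    """Record who did what, and when, during the response."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "detail": detail,
    }
    with LOG_PATH.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")

log_action("j.doe", "containment_decision",
           "Observed C2 beacon; deferred isolation pending scope check")
```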

Critically, the response team must also begin preserving evidence from the outset. Volatile data such as running processes, network connections, and memory contents disappears the moment a system is rebooted or reimaged. Forensic images of affected systems, log snapshots, and network traffic captures should be initiated immediately, even before the full scope is understood. Organizations that skip this step in the rush to contain often find themselves unable to answer basic questions later: how did the attacker get in, what did they access, and are they still present?
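For the volatile side of that collection, a rough sketch is shown below; it assumes the third-party psutil library is available on the collection host and snapshots only running processes and network connections, since memory acquisition requires dedicated forensic tooling.

```python
# Volatile-data snapshot sketch; assumes psutil is installed on the collection host.
import json
from datetime import datetime, timezone

import psutil

snapshot = {
    "collected_at": datetime.now(timezone.utc).isoformat(),
    # Running processes with basic attributes
    "processes": [
        p.info for p in psutil.process_iter(["pid", "name", "username", "cmdline"])
    ],
    # Active network connections (may require elevated privileges on some systems)
    "connections": [
        {
            "laddr": f"{c.laddr.ip}:{c.laddr.port}" if c.laddr else None,
            "raddr": f"{c.raddr.ip}:{c.raddr.port}" if c.raddr else None,
            "status": c.status,
            "pid": c.pid,
        }
        for c in psutil.net_connections(kind="inet")
    ],
}

with open("volatile_snapshot.json", "w", encoding="utf-8") as fh:
    json.dump(snapshot, fh, indent=2)
```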

Hours 1-4: Containment and Evidence Preservation

Once the incident is validated and initial triage is complete, the focus shifts to containment. The goal is to limit the adversary's ability to move laterally, exfiltrate data, or cause further damage without destroying evidence or disrupting business operations more than necessary. Containment strategies vary by incident type. For ransomware, isolating affected network segments from the broader environment is critical. For a business email compromise, disabling compromised accounts and revoking OAuth tokens takes priority. For data exfiltration, blocking egress to known command-and-control infrastructure may be the first move.
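The mapping below is an illustrative way to encode those first moves as a lookup keyed by incident type; the categories and actions simply mirror the examples above and are not an exhaustive playbook.

```python
# Illustrative mapping of incident type to first containment moves.
FIRST_CONTAINMENT_MOVES = {
    "ransomware": [
        "Isolate affected network segments from the broader environment",
        "Restrict SMB/RDP between segments where feasible",
    ],
    "business_email_compromise": [
        "Disable compromised accounts",
        "Revoke active sessions and OAuth tokens",
    ],
    "data_exfiltration": [
        "Block egress to known command-and-control infrastructure",
        "Increase logging at egress points",
    ],
}

def first_moves(incident_type: str) -> list[str]:
    # Unknown categories fall back to manual triage by the incident commander.
    return FIRST_CONTAINMENT_MOVES.get(
        incident_type, ["Escalate to incident commander for manual triage"]
    )
```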

This phase exposes whether an organization's security tooling is actually operational. Can you isolate endpoints remotely through your EDR platform? Do you have network segmentation that can be activated under pressure? Can you revoke credentials and force re-authentication across the environment without bringing business operations to a complete halt? Many organizations discover during a real incident that the capabilities they assumed they had are either misconfigured, partially deployed, or require manual steps that take hours. The containment phase also forces difficult business decisions: shutting down a production system to stop lateral movement has real revenue and operational impact, and someone with appropriate authority must make that call quickly.
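As a purely hypothetical illustration of what remote endpoint isolation looks like in practice, the sketch below calls a placeholder EDR REST endpoint; the URL, path, payload, and token handling are invented for the example, since every EDR platform exposes its own API.

```python
# Hypothetical endpoint-isolation call against a placeholder EDR REST API.
import json
import os
import urllib.request

EDR_BASE_URL = "https://edr.example.com/api/v1"   # placeholder base URL
API_TOKEN = os.environ["EDR_API_TOKEN"]           # placeholder credential source

def isolate_endpoint(device_id: str) -> int:
    """Request network isolation for a device; returns the HTTP status code."""
    req = urllib.request.Request(
        f"{EDR_BASE_URL}/devices/{device_id}/isolate",   # invented path
        data=json.dumps({"reason": "IR containment"}).encode(),
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```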

In parallel with containment, dedicated team members should conduct detailed evidence preservation. Full disk images of compromised systems, memory dumps, firewall and proxy logs, authentication logs, and cloud audit trails all need to be collected and stored with chain-of-custody documentation. If the breach involves regulated data or may result in litigation, the evidence handling procedures used now will be scrutinized later. Engaging a forensic firm at this stage, if one is not already on retainer, is advisable. Many cyber insurance policies require the use of a panel-approved forensics firm, and starting with an unapproved firm can create coverage complications.
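A minimal chain-of-custody record might look like the sketch below, which hashes a collected disk image and captures who collected it and when; the field names and output path are illustrative, and a forensic firm will typically impose its own format.

```python
# Chain-of-custody record sketch: hash the artifact and record collection details.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1024 * 1024) -> str:
    """Hash a (potentially large) evidence file in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def custody_record(image_path: Path, collector: str, source_host: str) -> dict:
    return {
        "artifact": str(image_path),
        "sha256": sha256_of(image_path),
        "collected_by": collector,
        "source_host": source_host,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

record = custody_record(Path("evidence/host01_disk.img"), "a.analyst", "host01")
Path("evidence/host01_disk.custody.json").write_text(json.dumps(record, indent=2))
```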

Hours 4-12: Investigation and Scope Assessment

With containment measures in place, the investigation phase begins in earnest. The central questions are deceptively simple: how did the attacker gain initial access, what systems and data were affected, and is the adversary still present in the environment? Answering these questions requires correlating evidence across endpoints, network logs, identity systems, and cloud platforms. Investigators trace the attack chain from the initial point of compromise through lateral movement, privilege escalation, and any data access or exfiltration. This is painstaking work that cannot be rushed, but preliminary findings are essential for informing the decisions that come next.
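At its simplest, that correlation amounts to merging already-parsed events from different sources into one timeline, as in the sketch below; it assumes each event carries an ISO-8601 timestamp and a source label, and the sample events are invented.

```python
# Merge parsed events from multiple log sources into a single time-ordered view.
from datetime import datetime

def unified_timeline(*event_streams: list[dict]) -> list[dict]:
    merged = [e for stream in event_streams for e in stream]
    return sorted(merged, key=lambda e: datetime.fromisoformat(e["timestamp"]))

edr_events = [
    {"timestamp": "2026-03-09T02:14:00+00:00", "source": "edr",
     "detail": "Beacon-like outbound traffic from host01"},
]
auth_events = [
    {"timestamp": "2026-03-09T01:58:00+00:00", "source": "idp",
     "detail": "Impossible-travel sign-in for svc-backup"},
]

for event in unified_timeline(edr_events, auth_events):
    print(event["timestamp"], event["source"], event["detail"])
```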

Scope assessment during this phase directly drives downstream obligations. If the investigation reveals that personally identifiable information, protected health information, or payment card data was accessed or exfiltrated, regulatory notification timelines begin. Under GDPR, the 72-hour clock starts from the moment the organization becomes aware of a breach involving personal data. Several U.S. state laws impose similarly aggressive timelines. The scope assessment also informs whether the breach triggers contractual notification obligations to customers, partners, or vendors. Getting scope wrong in either direction is costly: underestimating scope delays necessary notifications and increases regulatory risk, while overestimating scope triggers unnecessary panic and expense.
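Even a simple deadline tracker helps here. The sketch below computes the GDPR 72-hour deadline from the awareness time, with other regulatory or contractual clocks added the same way; the awareness time shown is an example value.

```python
# Track the GDPR 72-hour notification deadline from the moment of awareness.
from datetime import datetime, timedelta, timezone

aware_at = datetime(2026, 3, 9, 6, 30, tzinfo=timezone.utc)  # example awareness time
gdpr_deadline = aware_at + timedelta(hours=72)

remaining = gdpr_deadline - datetime.now(timezone.utc)
print(f"GDPR notification deadline: {gdpr_deadline.isoformat()}")
print(f"Time remaining: {remaining}")
```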

At this stage, the response team should also be assessing whether the attacker has established persistent access mechanisms such as backdoors, scheduled tasks, rogue accounts, or compromised service principals. Containment without eradication is temporary. If the adversary has implanted persistence that the team has not identified, they will simply re-enter the environment once containment measures are relaxed. This is why thorough investigation must precede recovery, no matter how urgently business stakeholders want systems restored.
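One narrow example of that hunting work is sketched below: flagging accounts created after the suspected initial-compromise time in an already-exported account list. Real persistence hunts also cover scheduled tasks, services, startup items, and service principals; the data here is illustrative.

```python
# Flag accounts created after the suspected compromise time from exported data.
from datetime import datetime, timezone

suspected_compromise = datetime(2026, 3, 8, 22, 0, tzinfo=timezone.utc)  # example

accounts = [
    {"name": "svc-backup", "created": "2025-11-02T09:00:00+00:00"},
    {"name": "helpdesk2",  "created": "2026-03-09T01:12:00+00:00"},
]

suspicious = [
    a for a in accounts
    if datetime.fromisoformat(a["created"]) >= suspected_compromise
]
print("Accounts created after suspected compromise:",
      [a["name"] for a in suspicious])
```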

Hours 12-24: Notification, Communication, and Recovery Planning

The second half of the first day shifts focus toward communication and recovery planning. Internal stakeholders, including executive leadership, the board, legal counsel, and business unit leaders, need clear, accurate briefings on what is known, what is still being investigated, and what decisions need to be made. This is where pre-built communication templates and escalation matrices prove their value. Organizations that have not prepared these materials spend precious hours wordsmithing emails and debating who should be told what, while the response team waits for authorization to act.
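A pre-built briefing template can be as simple as the sketch below, filled in with current findings so no one is wordsmithing from scratch mid-incident; the fields and example content are illustrative.

```python
# Fill a pre-built internal briefing template with current incident findings.
from string import Template

BRIEFING = Template(
    "INCIDENT BRIEFING ($time)\n"
    "Status: $status\n"
    "What we know: $known\n"
    "Still under investigation: $unknown\n"
    "Decisions needed: $decisions\n"
)

print(BRIEFING.substitute(
    time="2026-03-09 14:00 UTC",
    status="Contained; investigation ongoing",
    known="Initial access via phished credentials; two servers isolated",
    unknown="Whether regulated data was exfiltrated",
    decisions="Approve extended downtime for the affected ERP segment",
))
```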

External communications are even more consequential. Depending on scope findings, the organization may need to notify regulators, affected individuals, law enforcement, and cyber insurance carriers within specific timeframes. Each audience requires different messaging: regulators want facts and evidence of a structured response, affected individuals need clear language about what happened and what they should do, law enforcement needs technical indicators of compromise, and insurance carriers need formal notice that triggers coverage. Missteps in external communication, whether through premature public statements, incomplete regulatory notifications, or delayed insurance notice, create legal and financial exposure that can exceed the direct cost of the breach itself.

Recovery planning should begin in parallel, even though actual recovery may not start for days. The recovery plan must account for the investigation findings: you cannot simply restore from backups if those backups may also be compromised. The plan should define a clean restoration sequence, identify which systems are business-critical and should be restored first, and specify validation steps to confirm that restored systems are free of adversary persistence. The first 24 hours do not end the incident; they set the foundation for everything that follows. Organizations that handle this window well emerge from breaches faster, with lower costs, less regulatory exposure, and their stakeholder trust largely intact.
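A restoration sequence can be captured in something as lightweight as the sketch below, pairing each business-critical system with a validation step before it returns to service; the system names and checks are illustrative.

```python
# Prioritized restoration sequence with a validation step per system.
RESTORATION_SEQUENCE = [
    {"system": "identity-provider", "priority": 1,
     "validate": "Review admin accounts and MFA enrollment"},
    {"system": "erp-database", "priority": 2,
     "validate": "Restore from pre-compromise backup and verify checksums"},
    {"system": "file-servers", "priority": 3,
     "validate": "Scan for known persistence artifacts before reconnecting"},
]

for step in sorted(RESTORATION_SEQUENCE, key=lambda s: s["priority"]):
    print(f"{step['priority']}. Restore {step['system']} -> then: {step['validate']}")
```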

Key Takeaways

Designate an incident commander with clear decision-making authority in the first hour; committee-based governance slows response when speed matters most.
Begin evidence preservation immediately, before containment actions alter or destroy volatile forensic data like memory contents and active network connections.
Scope assessment drives regulatory notification timelines, so preliminary investigation findings must be accurate enough to inform legal obligations by hours 12-24.
Recovery cannot begin until the investigation confirms that attacker persistence mechanisms have been identified; restoring systems prematurely risks reinfection.