I've Sat in Post-Incident Rooms With Irish Businesses. Here's What They All Wished They'd Done.

The coffee is cold. The air in the room is thick with a mixture of stale pizza, stress, and regret. I’ve been in this room, or rooms very much like it, more times than I can count. Across the table, the leadership team of a solid Irish business is facing the worst week of their professional lives. They’ve been hit by a cyber attack, and the reality of the situation is starting to sink in. The conversation is always some variation of the same theme: a series of “if onlys.”

As a cybersecurity consultant, my job is often to guide companies through these crises. But a huge part of what I do is post-incident review, where we dissect what happened. It’s in these sessions, when the adrenaline has faded and the cold, hard reality of the consequences is clear, that the most painful truths emerge. The things they wished they’d done differently are always the same. They aren’t complex, they aren’t expensive, and they would have changed everything. Here are the four preventable failures that come up every single time.

Failure 1: “We Didn’t Even Know What We Had”

The Problem: In the immediate aftermath of an attack, the first question we ask is always: “What systems are affected?” The answer I often get is a shrug. The IT manager, already exhausted, will point to a server rack and say, “That’s the main stuff, I think.” The marketing director will mention a cloud service they signed up for, but they’re not sure who has the password. It quickly becomes clear that nobody has a complete picture of the company’s digital footprint.

The Consequence: You cannot protect what you do not know you have. Without a comprehensive asset inventory—a detailed list of all your hardware, software, and data—you are fighting blind. We can’t determine the scope of the breach. We can’t be sure the attackers are truly gone. We can’t even begin to calculate the potential damage because we don’t know what information was stored on the compromised systems. The recovery process is immediately stalled, costs spiral, and the board is left explaining to regulators and customers that they don’t know the extent of the data loss.

The Solution: The fix is foundational: create and maintain a technology asset inventory. This isn’t just a spreadsheet of laptops. It’s a living document that includes every server, every cloud application, every database, and every device that connects to your network. It should detail what each asset is, where it is, who is responsible for it, and what data it holds.

The Action: Start simple. Task your IT team or provider with identifying all devices connected to your network. Use a network scanning tool to find devices you didn’t know about. For a more structured approach, our Security Maturity Assessment can help you identify gaps in your asset management and other foundational controls.
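Even a plain-text register beats nothing. The sketch below (Python, with made-up asset names, and fields chosen to mirror the "what, where, who, what data" questions above) shows how little structure a first-pass inventory actually needs:

```python
import csv
import io
from dataclasses import dataclass, asdict, fields

@dataclass
class Asset:
    """One row in the inventory: what it is, where it is, who owns it, what it holds."""
    name: str
    asset_type: str   # server, laptop, cloud app, database, ...
    location: str
    owner: str
    data_held: str

# Illustrative entries only -- replace with your own discovery results.
assets = [
    Asset("FS01", "file server", "Dublin office rack", "IT Manager", "customer files"),
    Asset("CRM", "cloud app", "SaaS (EU region)", "Sales Director", "customer PII"),
]

def to_csv(items: list[Asset]) -> str:
    """Render the inventory as CSV so it can live in version control or a shared drive."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=[f.name for f in fields(Asset)])
    writer.writeheader()
    for asset in items:
        writer.writerow(asdict(asset))
    return buf.getvalue()

print(to_csv(assets))
```

The point is not the tooling: a dataclass and a CSV file are enough to start answering "what systems are affected?" on day one of an incident.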

Failure 2: “We Can’t See What They Did”

The Problem: Once we have a rough idea of the compromised systems, the next question is: “What did the attackers do?” The answer, far too often, is another blank stare. We go to check the system logs—the digital breadcrumbs that record all activity—and find they were either never enabled, overwritten after a few hours, or so full of noise they were useless.

The Consequence: This was a core issue in the 2021 HSE ransomware attack. The attackers were inside the network for weeks, but a lack of detailed logging made it incredibly difficult to trace their every move. For the business I’m sitting with, the consequence is devastating. They can’t tell which files were accessed, which emails were read, or what data was exfiltrated. They are now in a position where they must assume the worst: that all their sensitive data—customer lists, financial records, employee PII—has been stolen. This triggers a cascade of legal obligations under GDPR, requiring them to notify the Data Protection Commission (DPC) and every single affected individual, a process that is both costly and reputationally catastrophic.

The Solution: Implement meaningful logging and monitoring. This means configuring your systems to record important events (like logins, file access, and administrative changes) and storing those logs securely where an attacker can’t easily delete them. This visibility is your digital CCTV. It’s the only way to get a reliable account of what happened during an incident.

The Action: Ensure logging is enabled on all critical systems, including servers, firewalls, and core business applications. Even basic Windows or cloud platform logs are a starting point. For a more robust solution, consider a Security Information and Event Management (SIEM) system, which centralises and analyses logs from across your entire network. This is a key component of a Zero Trust security model.
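To make the “digital CCTV” idea concrete, here is a minimal Python sketch that pulls failed-login events out of raw log text and counts them per account. The log format and field names here are invented for illustration (a real firewall or Windows event log will look different), but the principle of parsing and counting is the same whether you do it by hand or a SIEM does it for you:

```python
import re
from collections import Counter

# Invented sample log lines, standing in for a real auth log.
sample_logs = """\
2024-05-01T09:12:03 sshd FAILED LOGIN user=admin src=203.0.113.7
2024-05-01T09:12:05 sshd FAILED LOGIN user=admin src=203.0.113.7
2024-05-01T09:15:44 sshd ACCEPTED LOGIN user=joan src=10.0.0.5
"""

# Match only the failure events and capture the account name.
pattern = re.compile(r"FAILED LOGIN user=(\S+) src=(\S+)")

failures = Counter(match.group(1) for match in pattern.finditer(sample_logs))
print(failures)  # Counter({'admin': 2})
```

Two failed logins against one account in two seconds is exactly the kind of signal that is invisible if the logs were never kept, and obvious if they were.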

Free Resource: Download The Irish SME Cyber Survival Guide — 10 controls based on NCSC Ireland and ENISA guidance.

Failure 3: “We Just Panicked”

The Problem: The scene is chaos. The finance director is demanding to know if payroll can be processed. The sales team is asking if the CRM is safe to use. The IT helpdesk is unplugging machines at random, hoping to contain the spread. There is no chain of command, no clear plan, just a series of ad-hoc, panicked decisions.

The Consequence: In a crisis, panic is the enemy of effective response. Decisions made under pressure are often the wrong ones. Wiping a machine might destroy crucial evidence. Shutting down the wrong server could cripple a part of the business that was actually unaffected. As we saw in the aftermath of the 2023 Munster Technological University (MTU) attack, a lack of coordination can significantly prolong the disruption. Without a plan, the incident response becomes a disorganised scramble, wasting precious time, increasing the damage, and making recovery a much longer and more painful process.

The Solution: Develop a formal Cybersecurity Incident Response Plan. This document doesn’t have to be 100 pages long. It just needs to clearly define roles, responsibilities, and the specific steps to take when an incident is detected. Who makes the decisions? Who communicates with staff, customers, and regulators? What are the technical steps for containment and eradication? Having this documented before an incident is the single biggest factor in ensuring a calm, measured, and effective response.

The Action: Start by defining your incident response team. It’s not just IT; it should include representatives from management, legal, and communications. Then, walk through a few likely scenarios. What would you do in case of a ransomware attack? What about a major data breach? Documenting these basic steps is the core of your plan. For more detailed guidance, read our post on Why Every Irish SME Needs a Cybersecurity Incident Response Plan.
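The roles and first steps above can be captured in something this small. Every name, role, and playbook step below is an illustrative placeholder, not a prescribed template:

```python
# Sketch of a minimal incident response plan as data.
# All roles and steps are illustrative assumptions, not a standard template.

RESPONSE_TEAM = {
    "incident_lead": "Operations Director",   # makes the decisions
    "technical_lead": "IT Manager / MSP",     # containment and eradication
    "communications": "Marketing Director",   # staff, customers, press
    "legal": "External counsel / DPO",        # DPC notification under GDPR
}

PLAYBOOKS = {
    "ransomware": [
        "isolate affected machines from the network (do not wipe them)",
        "preserve logs and disk images as evidence",
        "escalate to incident_lead and technical_lead",
        "verify backups before attempting any restore",
    ],
    "data_breach": [
        "identify which systems and data are involved",
        "escalate to incident_lead and legal",
        "assess the 72-hour DPC notification requirement",
    ],
}

def first_actions(incident_type: str) -> list[str]:
    """Return the documented first steps for a known scenario, or a safe default."""
    return PLAYBOOKS.get(incident_type, ["escalate to incident_lead"])

print(first_actions("ransomware")[0])
```

Whether this lives in Python, a Word document, or a laminated sheet by the server rack matters far less than it existing before the incident does.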

Failure 4: “We Didn’t Know What to Recover First”

The Problem: The immediate threat is contained, and now the focus shifts to recovery. The leadership team wants everything back online, now. But the IT team explains that restoring everything at once is impossible. They have limited resources and backups that need to be carefully validated. The question hangs in the air: “Where do we even start?”

The Consequence: Without a clear set of priorities, recovery efforts are inefficient. The team might spend a day restoring a marketing website while the core invoicing system—the one that actually generates revenue—remains offline. This inability to prioritise recovery based on business impact means the financial bleeding continues for much longer than necessary. The most critical functions that keep the business alive are not brought back first, extending the downtime and deepening the crisis.

The Solution: Conduct a Business Impact Analysis (BIA). A BIA is a formal process for identifying your most critical business functions and the technology that underpins them. It answers the question: “What parts of our business absolutely must be running for us to survive?” This analysis allows you to define your Recovery Time Objectives (RTOs)—how quickly you need a system back—and Recovery Point Objectives (RPOs)—how much data you can afford to lose.

The Action: Gather your department heads and ask them a simple question: “If we had a total outage, what processes would you need back within an hour? A day? A week?” Their answers are the foundation of your BIA. This will allow you to create a tiered recovery strategy that focuses on bringing the most critical systems online first. This is a service a vCISO can provide, giving you executive-level expertise without the full-time cost.
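Once each department head has answered, the tiering itself is trivial. A sketch with made-up systems and hour figures:

```python
# Tiered recovery order driven by RTO. System names and hours are
# illustrative assumptions, not recommendations.

systems = [
    {"name": "marketing site", "rto_hours": 72, "rpo_hours": 24},
    {"name": "invoicing",      "rto_hours": 4,  "rpo_hours": 1},
    {"name": "email",          "rto_hours": 8,  "rpo_hours": 4},
]

def recovery_order(entries: list[dict]) -> list[dict]:
    """Sort systems so the tightest Recovery Time Objective is restored first."""
    return sorted(entries, key=lambda s: s["rto_hours"])

print([s["name"] for s in recovery_order(systems)])
# -> ['invoicing', 'email', 'marketing site']
```

The hard part of a BIA is agreeing the numbers, not sorting them: the sort just makes the agreed priorities impossible to ignore on the day.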

Your ‘Start Monday’ Checklist

Sitting in that post-incident room, the feeling of regret is palpable. But it’s also preventable. You don’t have to be the next business telling this story. Here are five things you can start on Monday to change the outcome.

  1. Schedule a 1-hour meeting to start identifying your critical assets. What software, hardware, and data does your business depend on?
  2. Ask your IT provider to confirm that logging is enabled on your main file server and firewall.
  3. Designate an incident response lead. Who is in charge when something goes wrong?
  4. Identify your top 3 most critical business functions. What absolutely has to work?
  5. Implement Multi-Factor Authentication on your email and core systems. It remains the single most effective security control.
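On point 5: in practice you should simply switch on the MFA your email or identity provider already offers rather than build anything yourself. But to demystify what an authenticator app is doing, here is a minimal TOTP generator (the RFC 6238 algorithm, built on RFC 4226 HOTP) using only Python's standard library:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    counter = struct.pack(">Q", for_time // step)          # 8-byte big-endian time step
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC test secret at T=59 seconds (time step 1) -> "287082"
print(totp(b"12345678901234567890", 59))
```

The takeaway for a business owner: the code on your phone is cheap, standardised mathematics, not magic, and it defeats the single most common attack path: a stolen or reused password.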

These conversations are tough. But the conversation you have with your team before an incident is infinitely better than the one you’ll have with me after. Don’t wait for the cold coffee and the room full of regret.

Book a free 20-minute strategy call with our vCISO team. Or call us on +353 (0)87 0515 776 to discuss how we can help you build a resilient and defensible business.
