When a Dublin-based HR technology firm began piloting an AI-powered recruitment screening tool in late 2024, its founders assumed the EU AI Act was a concern for large technology companies, not a ten-person startup selling into the Irish market. Six months later, their legal team confirmed the tool almost certainly qualified as a high-risk AI system under the Act — meaning the firm faced significant compliance obligations it had not budgeted for, planned around, or even known existed. The story is becoming common across Ireland.
The EU AI Act entered into force in August 2024. It is the world's first comprehensive legal framework for artificial intelligence, and it applies directly to Irish businesses that develop, deploy, or use AI systems in their operations. Over 70 percent of Irish SMEs are already exploring or implementing AI solutions. The question is no longer whether the Act affects your business — for many, it clearly does — but whether you understand which obligations apply and what you need to do about them.
WHAT: The Risk-Based Framework
The EU AI Act adopts a risk-based approach, dividing AI systems into four categories. The category your AI falls into determines your compliance obligations.
The highest category — unacceptable risk — consists of AI systems that are simply prohibited within the EU. These include systems that manipulate human behaviour in ways that cause harm, social scoring, and most forms of real-time remote biometric identification in publicly accessible spaces by law enforcement. Penalties for deploying prohibited systems can reach €35 million or 7 percent of global annual turnover, whichever is higher.
High-risk AI systems carry the most extensive compliance obligations. This category includes AI used in recruitment and HR decisions, credit scoring, education and vocational training, access to essential services, and critical infrastructure. If you are using AI to screen job applications, assess creditworthiness, or manage patient triage, you are almost certainly in this category. Providers of high-risk systems must maintain comprehensive technical documentation, implement robust data governance, provide for human oversight, and undergo conformity assessment before deployment; deployers carry lighter but real obligations of their own, including using the system in line with its instructions and ensuring effective human oversight.
Limited-risk systems — primarily chatbots and AI-generated content — face lighter obligations centred on transparency. You must tell users they are interacting with an AI. Most customer-facing chatbots fall here.
Minimal-risk systems, such as spam filters and basic recommendation engines, face no specific obligations under the Act, though voluntary codes of conduct are encouraged.
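As a rough illustration — not legal advice or a substitute for proper classification — the four-tier triage described above can be sketched as a simple lookup. The example use cases and one-line obligations below are simplified paraphrases of the Act's categories, not its actual annexes:

```python
# Simplified sketch of the EU AI Act's four risk tiers.
# Tier assignments and obligations are illustrative paraphrases only.

RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring", "harmful behavioural manipulation"],
        "obligation": "Prohibited -- may not be placed on the EU market.",
    },
    "high": {
        "examples": ["recruitment screening", "credit scoring", "patient triage"],
        "obligation": "Documentation, data governance, human oversight, conformity assessment.",
    },
    "limited": {
        "examples": ["customer-facing chatbot", "AI-generated content"],
        "obligation": "Transparency: tell users they are interacting with AI.",
    },
    "minimal": {
        "examples": ["spam filter", "basic recommendations"],
        "obligation": "No specific obligations; voluntary codes encouraged.",
    },
}

def obligation_for(use_case: str) -> str:
    """Return the tier and headline obligation for a known example use case."""
    for tier, info in RISK_TIERS.items():
        if use_case in info["examples"]:
            return f"{tier}: {info['obligation']}"
    return "unclassified: seek specialist advice"

print(obligation_for("recruitment screening"))
# -> high: Documentation, data governance, human oversight, conformity assessment.
```

The point of the sketch is the shape of the decision, not the content: classification comes first, and everything downstream — budget, documentation, oversight — follows from the tier.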
Does your business use AI tools in customer interactions, HR, or financial decisions? Book a free 20-minute strategy call — we help Irish SMEs assess their EU AI Act exposure and plan a proportionate response.
WHAT NOW: What Compliance Actually Requires
For Irish businesses operating high-risk AI systems, the compliance journey begins with a thorough AI audit. This means identifying every AI system you use — not just systems you have built, but third-party tools you have deployed — and classifying each one against the Act's risk categories. The classification determines everything else.
High-risk systems require a risk management process that runs throughout the system's lifecycle. You must document it, test it, and update it as the system evolves. Data governance is central: training data must be relevant, representative, and free from errors that could lead to discriminatory outcomes. The Data Protection Commission in Ireland has signalled that AI-related data protection issues will be a focus area for the coming years, and the GDPR obligations around automated decision-making intersect directly with AI Act requirements.[^3]
Human oversight is not optional for high-risk systems. The Act requires that humans can intervene, override, and correct AI-driven decisions in high-stakes contexts. This has practical implications for how you build AI workflows into your operations and how you document those workflows for regulators.
NCSC Ireland has published specific guidance on cybersecurity risks associated with generative AI, particularly for public sector bodies and organisations handling sensitive data.[^1] The core message is that AI systems are attack surfaces. Model poisoning, prompt injection, and adversarial inputs are real threats, and the EU AI Act's high-risk provisions explicitly include cybersecurity requirements.
WHY IT MATTERS: The Irish Regulatory Picture
Ireland is actively building out its AI regulatory infrastructure. A National AI Office is being established as Ireland's central coordinating authority for EU AI Act implementation, expected to be operational by late 2026. The Competition and Consumer Protection Commission is also engaged with AI-related consumer protection issues. For businesses operating in regulated sectors, multiple Irish and EU bodies may have oversight interest in how you deploy AI.
For Irish SMEs, the practical risk is not primarily the headline penalties. It is the operational disruption of discovering non-compliance when a client due diligence process, a regulatory inquiry, or an insurance claim forces the issue. The reputational damage of being found to have deployed a high-risk AI system without the required documentation and oversight processes is significant, particularly in markets where trust is a competitive asset.
An Garda Síochána has also flagged AI-generated fraud — deepfakes, synthetic voice calls, AI-generated phishing — as a growing concern for Irish businesses.[^2] This is the other side of the AI risk picture: not just the systems you deploy but the AI-powered attacks you face.
The businesses that treat EU AI Act compliance as a burden will spend more and gain less than those that treat it as a prompt to understand and govern the AI they are already using.
WHAT NEXT: Three Practical Steps
Conduct an AI inventory this month. List every AI tool your business uses, including productivity tools, customer-facing chatbots, and any HR or financial decision-support systems. You cannot assess your obligations until you know what you have.
For each system on your list, ask one question: does this system make or support decisions that affect people's access to services, employment, credit, or healthcare? If yes, take legal or specialist advice on whether it qualifies as high-risk under the Act.
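The inventory-and-triage steps above could be captured in a lightweight register. The fields and example entries here are assumptions about what a small firm might record, not a prescribed or legally defined format:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in a simple AI inventory register (illustrative fields only)."""
    name: str
    vendor: str            # "internal" for systems you built yourself
    purpose: str
    affects_access: bool   # decisions on services, employment, credit, or healthcare?

    def needs_high_risk_review(self) -> bool:
        # The single triage question: if the system makes or supports decisions
        # affecting access to services, jobs, credit, or care, flag it for
        # legal or specialist review under the Act.
        return self.affects_access

inventory = [
    AISystemRecord("CV screener", "ExampleHR Ltd", "shortlist job applicants", True),
    AISystemRecord("Spam filter", "internal", "filter inbound email", False),
]

for record in inventory:
    flag = "REVIEW as potential high-risk" if record.needs_high_risk_review() else "no flag"
    print(f"{record.name}: {flag}")
```

Even a register this simple answers the first question a regulator, client, or insurer will ask: what AI do you use, and did you assess it?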
Begin building your documentation. Even if your compliance journey is in early stages, a documented AI governance framework — covering what systems you use, for what purpose, with what oversight — demonstrates good faith to regulators and clients. A virtual CISO can help structure this work efficiently without the cost of a full-time hire.
Related Reading
- AI Threat Landscape for Irish SMEs in 2026: What Has Changed
- 12-Month Cyber Governance Roadmap for Donegal SMEs
- Building a NIS2 Compliance Roadmap: A 12-Month Plan for Irish SMEs
[^1]: NCSC Ireland. Advice for Organisations. https://www.ncsc.gov.ie/advice-for-organisations/
[^2]: An Garda Síochána. Cyber Crime. https://www.garda.ie/en/crime/cyber-crime/
[^3]: Data Protection Commission. Guidance for Organisations. https://www.dataprotection.ie
Pragmatic Security — Cybersecurity advisory for Irish businesses. Based in Donegal, Ireland. CISA, CISSP, CISM certified advisors.