When a Donegal solicitor's firm discovered that three members of its conveyancing team had been using a free AI tool to draft client correspondence, the partners were not angry about the productivity shortcut. They were alarmed about what had been fed into it. Contract summaries, client names, property valuations, correspondence with the Land Registry — all of it had been entered into a public AI tool hosted outside the EU, by a company whose terms of service explicitly stated that user inputs could be used to improve the model. The firm had no idea it was happening. No policy covered it. No monitoring could have detected it. And until that moment, no one had thought to ask. That situation — shadow AI — is now one of the most common and least visible data risk issues in Irish professional services, healthcare, and financial businesses.
What Shadow AI Is
Shadow AI is the use of artificial intelligence tools in a business without the knowledge, approval, or oversight of the organisation's leadership or IT function. It mirrors the earlier concept of shadow IT — where staff used personal cloud storage, personal email, or unauthorised software to do their jobs more efficiently — but with an important difference in scale. AI tools can process, synthesise, and potentially retain large volumes of sensitive information in a single interaction. Shadow AI can therefore create exposure faster, and at greater scale, than equivalent shadow IT.
A recent survey of Irish businesses found that approximately 30% of employees report using AI tools without IT approval. The actual figure is almost certainly higher: a survey still relies on self-reporting, and an employee reluctant to admit unapproved tool use anonymously is even less likely to admit it to their manager. The tools in question range from large language models used for drafting documents and emails, to AI-powered data analysis tools used to process financial or operational data, to AI coding assistants used by developers working on client-facing applications.
The NCSC Ireland has included AI tool governance in its updated guidance for organisations, noting that the use of external AI services creates data residency, confidentiality, and supply chain security risks that need to be explicitly addressed in an organisation's security policies.[^1]
Does your business have a written policy on AI tool usage, and do your staff know which tools are approved and which are not? Book a free 20-minute strategy call — we will help you assess your current AI risk exposure and build a practical, proportionate AI governance policy.
The Three Risk Categories
The first category is data protection risk. When a staff member enters client data, patient records, legal correspondence, or financial information into a public AI tool, they are transferring that data to a third-party system — often hosted outside the EU, with data retention and processing terms that the employee almost certainly has not read. The Data Protection Commission in Ireland has made clear that organisations are responsible for ensuring that personal data processed on their behalf by third parties is subject to appropriate contractual and technical safeguards.[^2] An AI tool used without a data processing agreement in place, and without verification of its data residency and retention practices, will almost always fail that test.
The second category is regulatory compliance risk. For businesses in scope for NIS2 — including those in digital services, healthcare, and professional services — the use of unsanctioned external tools to process business data creates a supply chain security gap that the directive explicitly requires to be managed. NIS2 mandates that covered entities assess and control the cybersecurity posture of their technology suppliers, including cloud and AI service providers.[^1] A tool that no one in the organisation knows about cannot be assessed, controlled, or included in incident response planning.
The third category is intellectual property and confidentiality risk. Many AI services' terms of service include provisions that allow user inputs to be used to improve or train the provider's models. If a staff member enters a client's unpublished business plan, a proprietary product formula, or a confidential legal strategy into one of these tools, that information may no longer be confidential in any meaningful sense. In a professional services context — legal, financial, engineering, consulting — this exposure can have serious commercial and legal consequences, including breach of client confidentiality obligations.
Why Staff Use Unapproved Tools
Understanding why shadow AI happens is more useful than simply prohibiting it. The most common driver is the same one that has always driven shadow IT: staff are trying to do their jobs more effectively and the approved tools are slower, more cumbersome, or simply absent.
If your organisation has no approved AI tools and staff see colleagues at other firms becoming more productive with them, they will find a way to use them. The answer is not simply a prohibition. It is an approved pathway: a vetted set of AI tools, with appropriate data handling agreements, that staff can use without creating the risks described above. An Garda Síochána's National Cyber Crime Bureau has noted that over-restrictive technology policies, without approved alternatives, push employees toward informal workarounds that create security blind spots.[^3]
The practical approach for most Irish SMEs is to designate two or three approved AI tools that have been assessed for data handling practices, verified to meet EU data residency requirements where relevant, and covered by appropriate terms of service. Communicate those approvals clearly to staff. Explicitly prohibit the use of unapproved tools for business data. And treat an annual review of the approved list as the minimum, not the target, because the AI tool landscape moves faster than any annual cycle can fully capture.
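One lightweight way to make the approved list concrete is a small machine-readable register that IT maintains alongside the written policy. The sketch below is illustrative only: every tool name, field, and value is a placeholder, not a recommendation of any specific product.

```yaml
# approved-ai-tools.yml -- illustrative sketch, not a real register.
# Recording last_reviewed forces the review date to be explicit.
last_reviewed: 2025-01-15
tools:
  - name: ExampleDraft Enterprise        # hypothetical document-drafting tool
    approved_for: [document drafting, internal email]
    dpa_signed: true                     # data processing agreement on file
    data_residency: EU
  - name: ExampleCode Assistant          # hypothetical coding assistant
    approved_for: [internal code only, no client repositories]
    dpa_signed: true
    data_residency: EU
prohibited_data:                         # never entered into any AI tool
  - client personal data
  - patient information
  - legal correspondence
  - financial records
```

A register in this shape can be referenced from the one-page policy and checked at each review, so the list of approved tools and prohibited data categories lives in one place rather than in scattered emails.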
Building Your AI Governance Policy
An AI governance policy for a small Irish business does not need to be complex. It needs to cover three things clearly. First, which tools are approved for use with business data, and under what conditions. Second, what categories of data should never be entered into any AI tool without explicit approval — client personal data, legal correspondence, financial records, patient information. Third, how staff should report if they discover they have inadvertently used an unapproved tool with sensitive data.
That third point — the reporting pathway — is critical. A culture where staff are afraid to disclose accidental policy breaches means that data exposures remain hidden rather than being assessed and addressed. The goal is early disclosure and rapid assessment, not punishment.
Shadow AI is not primarily a technology problem. It is a governance problem — the absence of a clear, communicated policy on which tools are approved and what data can be used with them. Governance problems are faster and cheaper to fix than their consequences.
Three Actions to Take This Week
1. Ask your team what AI tools they are currently using. Do this informally and without consequence — the goal is to understand your current exposure, not to create a compliance incident. Most staff will be honest if they believe the question is about improving the situation rather than assigning blame.
2. Write a one-page AI tool policy. It needs to list the approved tools, state that business data may only be processed using approved tools, and identify the categories of data that require specific approval before use with any AI tool. Circulate it to all staff.
3. Identify at least one approved AI tool for the most common use case in your business. If staff are using AI for document drafting, find an enterprise tier of a suitable tool that includes contractual data handling commitments, sign the relevant data processing agreement, and make that the approved option. Closing the gap between "what staff want to use" and "what is approved" is what prevents shadow AI from recurring.
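Step 1 asks you to understand your current exposure. If your firm already runs a web proxy or DNS filter, a few lines of scripting can complement the informal conversation by flagging traffic to known public AI services that are not on your approved list. This is a minimal sketch: every domain, username, and the `user,url` log format shown are illustrative assumptions, not a real feed or a real register.

```python
# Sketch: flag proxy-log requests to public AI services that are not
# on the organisation's approved list. All lists here are illustrative --
# substitute your own approved register and a maintained domain feed.
from urllib.parse import urlparse

# Hypothetical approved register (would normally live in a reviewed config file).
APPROVED = {"copilot.example-enterprise.com"}

# Illustrative sample of domains associated with public AI tools.
KNOWN_AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def unapproved_ai_requests(log_lines):
    """Yield (user, domain) pairs for requests to known AI domains
    that are not approved. Expects simple 'user,url' log lines."""
    for line in log_lines:
        user, url = line.strip().split(",", 1)
        domain = urlparse(url).netloc.lower()
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED:
            yield user, domain

sample = [
    "asmith,https://chat.openai.com/c/abc123",
    "bjones,https://copilot.example-enterprise.com/session",
]
flagged = list(unapproved_ai_requests(sample))
print(flagged)  # only the request to the unapproved public tool is flagged
```

The value is the pattern — known AI domains minus approved ones — rather than the specific lists, which your IT provider would maintain and update alongside the regular policy review. Treat the output as a prompt for a conversation, not a disciplinary record, for the reasons discussed above.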
Related Reading
- Securing Remote Work: Best Practices for Irish Hybrid Teams
- NIS2 Supply Chain Requirements for Irish SMEs
- Building a Human Firewall: Security Awareness Training That Works
[^1]: NCSC Ireland, cybersecurity guidance for organisations on AI and supply chain risk: https://www.ncsc.gov.ie/advice-for-organisations/
[^2]: Data Protection Commission, guidance on third-party data processing and AI: https://www.dataprotection.ie
[^3]: An Garda Síochána, National Cyber Crime Bureau cybercrime resources: https://www.garda.ie/en/crime/cyber-crime/
Pragmatic Security — Cybersecurity advisory for Irish businesses. Based in Donegal, Ireland. CISA, CISSP, CISM certified advisors.