Someone Called Your Accountant Pretending to Be You. They Sounded Exactly Right.

Criminals now clone voices from LinkedIn videos and call your staff posing as you. How Irish SMEs can protect against AI voice cloning fraud.

The call came in on a Tuesday afternoon. The voice was unmistakable — same tone, same rhythm, same way of getting straight to the point. It was the owner, asking for an urgent wire transfer to a new supplier. The accountant processed it without question. Why would they question it?

The owner had been on a golf course in Donegal all day. They had not made any calls.

This scenario is no longer theoretical. AI voice cloning has matured to the point where a convincing replica of a specific person's voice requires only a few seconds of source audio — the kind that exists on LinkedIn videos, company website testimonials, podcast appearances, or even a voicemail greeting. The technology is accessible, inexpensive, and producing fraud losses that are escalating across Irish and European businesses in 2026.


What Is AI Voice Cloning Fraud?

AI voice cloning fraud is a form of impersonation attack in which criminals use artificial intelligence to replicate a specific person's voice, then use that replica to deceive employees into authorising payments, sharing credentials, or disclosing sensitive information.

It is an evolution of business email compromise — the well-established fraud category that has cost Irish businesses tens of millions of euros in recent years. The difference is that email filters, anti-spoofing controls, and staff awareness have made convincing email fraud harder to execute. Voice calls bypass all of those defences. Your brain is not wired to distrust a familiar voice in the way it has learned to distrust a suspicious email.


What Is Happening Right Now

  • Business email compromise losses exceeded $2.9 billion globally in 2025, according to the FBI's Internet Crime Report — and voice-augmented attacks now represent a growing share of that figure [^1]
  • Criminals require as little as three seconds of audio to produce a working voice clone using commercially available AI tools
  • Source audio is routinely harvested from LinkedIn posts, company websites, YouTube interviews, podcast appearances, and AGM recordings — material that most business owners publish voluntarily
  • Finance teams and accounts payable staff are the primary targets because they have direct authority to move money
  • An Garda Síochána's National Cyber Crime Bureau issued updated guidance in early 2026 warning Irish businesses about the escalation of AI-augmented social engineering attacks [^2]

The attack does not require technical sophistication. The tools are available as consumer subscriptions. A fraudster needs a recording of your voice, a script, and a phone number for your accounts team.

Does your finance team have a verbal verification protocol for payment requests that come by phone — even from a familiar voice? If not, that gap is your current exposure. Book a free 20-minute strategy call — we can help you close it quickly.


Why Voice Works Where Email Has Failed

Spend a moment thinking about how your accounts team handles email requests for payment. They check the sender address. They look for unexpected domain variations. They have probably been trained to pause before clicking links. Many businesses have two-person authorisation rules for transfers above a threshold. These controls exist precisely because business email compromise became such a well-understood threat.

Voice calls operate on entirely different psychology. When a call arrives from what sounds like the managing director — same voice, same speech patterns, same way of saying your name — the instinct to comply is immediate and powerful. The attacker typically adds urgency: the supplier needs payment today or the deal falls through, the invoice needs processing before the bank closes, this cannot wait. Urgency and authority combined are the most reliable tools in social engineering.

There is currently no widely deployed real-time audio deepfake detection technology. Human ears are unreliable judges. The audio artefacts that marked early voice cloning — a slight robotic quality, unnatural pauses — are increasingly absent from modern tools. A Sligo engineering firm, a Letterkenny accountancy practice, or a Donegal food producer is unlikely to have any technical defence against a convincing voice clone call arriving in the middle of a busy afternoon.


The Scale of the Risk

The attack targets the specific intersection of authority and access. Finance staff and directors are the two ends of this equation. The fraudster impersonates the authority figure to reach the person with access.

For a small or medium business, the consequences of a single successful attack can be severe. Wire transfers are difficult or impossible to reverse once processed. Supplier account details changed as part of the attack will redirect legitimate future payments. The reputational damage of disclosing a fraud of this kind to clients or partners compounds the direct financial loss.

Under Irish company law and GDPR, a business that loses client funds or exposes client data as a result of a successful fraud may face regulatory scrutiny regardless of whether the cause was a sophisticated AI attack. The obligation is on the business to have had reasonable controls in place.


Why This Matters to Your Business Right Now

The barrier to entry for this attack is lower than it has ever been, and it is falling further every month. A voice clone that would have required specialist equipment and weeks of processing in 2023 can now be produced in minutes on a consumer laptop.

The defining feature of this threat is that it specifically targets the human controls your business already has in place. Staff who would correctly distrust a suspicious email will not distrust their managing director's voice. Any business that has improved its email security without updating its voice verification procedures has, inadvertently, made itself a relatively more attractive target. This is not a technical problem. It is a procedural one — and procedural problems have straightforward solutions.


What Next — Three Controls You Can Put in Place This Week

  1. Establish a verbal codeword system for financial authorisations. Agree a rotating phrase or question with your finance team that only the genuine caller would know, and change it weekly. Any payment request by phone that does not include the correct response is not processed until independently verified by a second method.

  2. Never verify via the same channel as the request. If the request comes by phone, verification must happen through a different, pre-stored contact — not by calling back the number that just called you, not by replying to an email that arrived alongside the call. Call the managing director on their known mobile. Walk to their office. The verification must be genuinely independent.

  3. Brief your finance staff specifically on this threat. Most cybersecurity awareness training focuses on phishing emails and suspicious links. It almost never addresses AI voice cloning. A 20-minute conversation with your accounts team about this specific attack pattern — what to listen for, what to do when a payment request arrives by phone — costs nothing and closes the gap immediately.
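For teams who want the weekly codeword in step 1 to rotate automatically rather than rely on someone remembering to change it, the idea can be sketched in a few lines of code. This is an illustrative sketch only, not a product recommendation: the secret value, the word list, and the choice of ISO week as the rotation interval are all assumptions for the example. The point is that both the director and the finance team can independently compute the same phrase each week without ever transmitting it.

```python
# Hypothetical sketch: derive a weekly verification phrase from a shared
# secret. Both parties hold the secret offline and compute the phrase
# independently, so nothing sensitive travels over email or phone.
import datetime
import hashlib
import hmac

# In practice this secret would be agreed in person and stored offline,
# never emailed. The value here is illustrative only.
SECRET = b"agreed-in-person-never-emailed"

# A small word list keeps the phrase easy to say and hear over the phone.
WORDS = [
    "harbour", "granite", "falcon", "willow", "copper", "lantern",
    "meadow", "anchor", "thistle", "beacon", "orchard", "summit",
]

def weekly_codeword(today: datetime.date) -> str:
    """Return this ISO week's two-word phrase; it rotates automatically."""
    year, week, _ = today.isocalendar()
    digest = hmac.new(SECRET, f"{year}-W{week}".encode(), hashlib.sha256).digest()
    # Use the first two digest bytes to pick two words from the list.
    first = WORDS[digest[0] % len(WORDS)]
    second = WORDS[digest[1] % len(WORDS)]
    return f"{first} {second}"

# Each Monday both sides compute the new phrase independently.
print(weekly_codeword(datetime.date.today()))
```

The design choice worth noting is the keyed hash (HMAC): anyone who overhears one week's phrase learns nothing about next week's, because deriving it requires the shared secret.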


Ready to find out exactly where your business stands? Book a free 20-minute strategy call with our vCISO team at www.pragmaticsecurity.ie/book-a-call. No sales pitch. No jargon. Just clarity on your cyber risk — and a clear plan to address it.

Related Reading

[^1]: FBI Internet Crime Report 2025

[^2]: An Garda Síochána — National Cyber Crime Bureau

[^3]: National Cyber Security Centre Ireland — Social Engineering Guidance

Pragmatic Security — Cybersecurity advisory for Irish businesses. Based in Donegal, Ireland. CISA, CISSP, CISM certified advisors.
