The United States is at a compliance crossroads. Firms are wary after a recent history of substantial fines for failing to implement the right communications compliance frameworks, and regulators like FINRA are considering options to ensure that their oversight of communications compliance risk continues to be effective and efficient.
While the threat of off-channel communications fines and enforcement actions persists, compliance and surveillance professionals remain hesitant, more concerningly, to leverage effective tools to bolster their strategies. In Global Relay’s Industry Insights: Compliant Communications 2025 report, 56.3% of North American survey respondents said they have no plan to introduce AI into compliance workflows in the next year – a stark contrast to the 71.4% of EMEA respondents who intend to.
Hesitancy to adopt AI is driven by multiple factors, including poorly performing historical models, regulatory uncertainty and high costs. However, it’s essential for firms to understand the underlying risk factors that contribute to inaction – and strategically and effectively implement the right tools – to ensure communications compliance is future-proof.
Failure to implement AI solutions to streamline communications compliance workflows, identify threats and strengthen surveillance partially stems from firms’ history with AI and the baggage that accompanies it. Many early models performed poorly, producing skewed algorithms, faulty data sets, inaccurate recommendations and wasted investment. With 42% of businesses scrapping AI initiatives this year, it’s unsurprising that compliance teams are approaching AI with skepticism.
AI introduces risks if not properly maintained. For example, in April 2025, a citation generated by Anthropic’s AI assistant, Claude, and used in one of the company’s own legal filings turned out to be false – not only putting the company’s defense strategy at risk, but also highlighting how, without the proper guardrails in place, AI can cause unintended damage.
In a separate demonstration at the U.K.’s 2023 AI Safety Summit, a GPT-4 model used insider information to make an illegal financial trade and then lied about it, underscoring the risk of AI systems acting deceptively without being instructed to.
Beyond the historical challenges of AI, many firms remain cautious about its potential due to regulatory uncertainty. Organizations may hold off on adopting AI because they are unsure which regulations (if any) apply to it, a hesitancy largely driven by unclear AI governance guidelines. Both the Financial Conduct Authority (FCA) and Securities and Exchange Commission (SEC) have recently taken tech-neutral and pro-innovation approaches, prioritizing growth and innovation over prescriptive rule setting.
Finally, investing in AI is not a plug-and-play exercise, and it can weigh heavily on firms’ overall spend. Implementing AI requires a thoughtful, strategic approach. Even with clear business benefits, many firms still question the ROI of AI in risk reduction.
Despite this caution, AI has an essential role to play in compliance supervision – and knowing how to deploy it at scale is the missing step that many firms fail to execute.
AI’s demonstrated effectiveness in reviewing colossal amounts of information via large language models (LLMs) and natural language processing (NLP) makes it a powerful tool for compliance functions such as keeping up with changing regulatory frameworks, monitoring financial transactions for fraud, and surveilling corporate communications.
While many financial institutions and banks have historically relied on banning channels like WhatsApp to prevent off-channel communications – and over 40% still do – this provides a false sense of security and is a largely ineffective solution. A prohibitive approach to communications surveillance can give rise to shadow IT – the use of unauthorized devices and applications – and push communications off-channel, increasing operational and regulatory exposure.
Instead of relying on banning communication channels, firms should deploy AI to enhance surveillance, automate anomaly detection and reduce manual oversight burdens. AI acts as a force multiplier for risk functions; it improves precision, speed and coverage in supervisory practices, and has the potential to identify the cause of a threat instead of only identifying the threat itself.
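To make the idea of automated anomaly detection concrete, here is a minimal sketch of how a surveillance pipeline might triage messages that signal an attempt to move a conversation off monitored channels. The pattern list, function names and scoring logic are illustrative assumptions, not any vendor’s actual method; a production system would use an NLP model rather than regular expressions.

```python
import re

# Hypothetical phrases that often signal channel-hopping intent;
# illustrative only - a real system would use a trained NLP model.
OFF_CHANNEL_PATTERNS = [
    r"\bwhats\s*app\b",
    r"\bsignal\b",
    r"\btext me\b",
    r"\bpersonal (cell|phone|email)\b",
]

def flag_off_channel_intent(message: str) -> list[str]:
    """Return the patterns matched in a message; empty list if none."""
    lowered = message.lower()
    return [p for p in OFF_CHANNEL_PATTERNS if re.search(p, lowered)]

def triage(messages: list[str]) -> list[tuple[int, list[str]]]:
    """Surface only the messages a human reviewer needs to see."""
    hits = []
    for i, msg in enumerate(messages):
        matched = flag_off_channel_intent(msg)
        if matched:
            hits.append((i, matched))
    return hits
```

The point of the sketch is the shape of the workflow: the machine narrows thousands of messages down to a short review queue, and the human reviewer spends time only on the flagged items.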
While understanding the value of leveraging AI for communications surveillance is important, knowing that AI is only as good as the data that it draws from is paramount for success.
Before deploying AI at any level, surveillance teams must collaborate with IT and data teams to ensure that there is a reliable and consistent data collection and organization process in place. Doing so ensures that the data is standardized, accurate and context-rich, and is the initial stepping stone for effective AI execution.
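What “standardized, context-rich” data means in practice is mapping every source format onto one common schema before any model sees it. The sketch below assumes two hypothetical export formats (an email dump and a chat export); the field names and schema are invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# A minimal common schema; the field names are illustrative, not a standard.
@dataclass
class NormalizedMessage:
    source: str        # e.g. "email", "chat"
    sender: str        # lowercased, trimmed identity
    sent_at: datetime  # always stored as UTC
    body: str

def normalize_email(raw: dict) -> NormalizedMessage:
    """Map one hypothetical email export format onto the common schema."""
    return NormalizedMessage(
        source="email",
        sender=raw["from"].strip().lower(),
        sent_at=datetime.fromtimestamp(raw["epoch"], tz=timezone.utc),
        body=raw["subject"] + "\n" + raw["text"],
    )

def normalize_chat(raw: dict) -> NormalizedMessage:
    """Map one hypothetical chat export format onto the same schema."""
    return NormalizedMessage(
        source="chat",
        sender=raw["user"].strip().lower(),
        sent_at=datetime.fromisoformat(raw["ts"]).astimezone(timezone.utc),
        body=raw["msg"],
    )
```

However simple, this normalization step is what lets a downstream model compare an email and a chat message on equal terms, with consistent identities and timestamps.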
Data integrity goes hand-in-hand with data security at every touch point in the communications compliance journey. Protecting data through secure collection and storage – for example, hosting data within a private data center or private cloud – is essential to preventing breaches.
An often-forgotten component of data security is monitoring connector technology. When messaging data is captured from various sources and platforms, the numerous endpoints involved in transmitting that data can create vulnerabilities, increasing the risk of breaches and unauthorized access. Using AI to sift through all of this raw material after the fact is inefficient.
Instead, IT teams should leverage software and tools that link different systems in a secure way with proper encryption and authentication to monitor for metadata and provide intelligent structure into the message at the point of capture.
After properly collecting, organizing and securing the data at all touch points, teams can consider the next step: execution. But deploying AI is not a turnkey solution, and this is usually where teams go wrong and reintroduce risks. To ensure that AI deployment strategies are risk-informed, the following steps are key:
1. Start small and scale: It’s important to pilot AI in individual use cases to understand how the technology truly impacts your business, and where it presents the most value. A company-wide rollout often wastes time and resources and dilutes ROI, whereas a more individualized, targeted rollout can garner the strongest results.
2. Bake in AI governance: Surveillance teams should establish a governance structure to align AI projects with existing risk and compliance frameworks – defining accountability, explainability, and auditability upfront.
3. Collaborate across teams: The alignment of compliance, IT, legal, risk and leadership teams is pivotal in ensuring successful AI adoption and deployment. By aligning all key stakeholders, teams can safeguard against unintended consequences and surface the pain points most worth solving.
4. Prioritize data readiness and protection: AI’s value stems directly from the quality of data that is used to drive it, and this data must remain secure and well-structured to ensure consistent and accurate analysis and results.
5. Empower Chief Surveillance Officers to take on higher-value tasks: Educating Chief Surveillance Officers on the power of AI provides them with the necessary information and tools to use it effectively. They can leverage AI as a copilot, handing over menial tasks and refocusing on higher-level, more strategic projects. By understanding the risks and value, surveillance teams can reallocate time and resources to initiatives that drive the business’s end goals and strengthen its risk posture.
As President Harry S Truman once said, “There is some risk involved in action, there always is. But there is far more risk in failure to act.”
As surveillance teams evaluate the impact of AI in creating compliant workflows and monitoring off-channel communications, they must first shift their mindsets – and then shift their processes.
AI has the power to deliver stronger threat identification, a clearer contextual understanding of risk, and more strategic capacity for Chief Risk Officers. With the right governance strategies in place to ensure AI works effectively, surveillance teams can reap the benefits. The time to build risk-aware, governance-driven AI frameworks is now.
Donald McElligott is vice president, compliance supervision, at communication compliance solutions provider Global Relay. He previously worked for CA on Data Minder, formerly known as Orchestria, where he led the technical sales team.