
Agentic AI: On the Frontier of Autonomy

Written by David Weldon | June 13, 2025

Big questions continue to swirl around artificial intelligence: how the technology arms race will pan out; its effects on jobs and productivity; the tension between regulation and innovation; the returns on investment in business applications; the attainability of artificial general intelligence.

Such uncertainties have not gotten in the way of deployments and advances in large language models and generative AI – which rocketed into the mainstream after the 2022 launch of ChatGPT – and now the spread of agentic AI, which experts say is no mere buzzword.

“It has become increasingly important to establish a clear road map for use cases that not only deliver tangible business outcomes, but also enhance enterprise resilience, foster innovation and drive growth,” says a report by research firm IDC, “The Rise of Agentic AI: A Perspective into the State of the GenAI Technology Ecosystem.”

“Agentic AI workflows transform GenAI into consumable services capable of decision-making, executing complex tasks and integrating seamlessly with existing systems,” IDC explains. “Unlike traditional zero-shot querying, agents bring context, memory, exception handling and security into the equation, making them a critical bridge between AI capabilities and real-world applications.”
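IDC's contrast between zero-shot querying and agents that bring context, memory and exception handling can be illustrated with a minimal sketch. Everything below — the class, the method names and the escalation rule — is hypothetical, assumed purely for illustration and not any vendor's actual design:

```python
from dataclasses import dataclass, field


@dataclass
class ComplianceAgent:
    """Toy agent loop: observe -> decide -> act, keeping memory across
    steps and escalating to a human when it hits an exception."""
    goal: str
    memory: list = field(default_factory=list)

    def decide(self, observation: str) -> str:
        # Unlike a one-off zero-shot query, the agent applies a routing
        # rule and sends exceptions to a human reviewer.
        if "exception" in observation:
            return "escalate_to_human"
        return "proceed"

    def act(self, observation: str) -> str:
        action = self.decide(observation)
        self.memory.append((observation, action))  # retain context for later steps
        return action

    def run(self, observations: list) -> list:
        return [self.act(obs) for obs in observations]


agent = ComplianceAgent(goal="periodic client review")
print(agent.run(["document received", "exception: ID expired", "screening clear"]))
# prints ['proceed', 'escalate_to_human', 'proceed']
```

The escalation branch is the "human-in-the-loop" pattern that several vendors quoted in this article describe: the agent acts autonomously within its parameters and hands off only when a case falls outside them.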

The terminology and functions associated with AI assistants, copilots and even agents are not entirely new. Agentic AI implies a heightened autonomy that is viewed as especially well-suited for complex operational, compliance and control functions in the financial world, where GenAI had already seen considerable take-up.

Prominent providers of financial technology, from Broadridge and SAS to anti-fraud and transaction surveillance innovators such as Fenergo and NICE Actimize, are embracing and building out agentic AI.

David Wong of Thomson Reuters

“A future where AI systems render decisions and take action with little to no human intervention” is almost here, said the May announcement of SAS Intelligent Decisioning on the Viya AI and data platform.

“SAS Viya builds agents that don’t just act – they decide with purpose, guided by analytics, business rules and adaptability and grounded by decades of SAS’s trusted governance,” said Marinela Profi, the company’s global AI market strategy lead. Its framework “turns AI agents from a science experiment to a business differentiator.”

“A New Blueprint”

David Wong, chief product officer at Thomson Reuters, described “a new blueprint for how complex work gets done” when TR on June 2 unveiled CoCounsel for tax, audit and accounting professionals. “We’re delivering systems that don’t just assist, but operate inside the workflows professionals use every day. The AI understands the goal, breaks it into steps, takes action, and knows when to escalate for human input – all with human oversight built in to ensure accountability and trust.”

“We’re not just rebranding AI assistants,” Wong emphasized. “Full agentic systems [are] backed by trusted content, custom-trained models, and real domain expertise.”

Thomson Reuters’ offering, coming less than eight months after its acquisition of agentic tax and accounting systems developer Materia, “sets a new bar. This is what AI looks like when it’s built with real content, trained with real experts, and trusted by the professionals who do real work.”

Agentic AI is an evolutionary step beyond both traditional AI, which tends to be rules-based and relatively static in its decision-making, and generative AI’s content creation based on “training.” The ability to take actions and make strategic adjustments is a hallmark of agentic AI, according to Niall Twomey, chief technology officer of Fenergo.

The client lifecycle management vendor in May launched a “FinCrime Operating System with agentic AI layer to supercharge productivity.” Six agents, with labels like “data sourcing” and “screening,” were available upon the release, with Fenergo claiming up to 93% reductions in operating costs, up to 45% shorter periodic review timeframes, and 72% faster document handling.

Regulatory Pressures

“As regulations become stricter and compliance risks escalate,” Twomey says, financial firms “find themselves needing technological solutions that can rapidly adapt to regulatory shifts while maintaining accuracy and transparency.”

Amenallah Reghimi of RegASK

RegASK touted its regulatory alert-creation and workflow-orchestration as an agentic AI first. Specialized agents automate the most time-consuming and error-prone parts of regulatory compliance, from monitoring global regulations to assessing impact and triggering the right next steps for various teams within an organization, according to chief product and technology officer Amenallah Reghimi. Collaborating across workflows, the agents should reduce manual effort and enable faster, smarter decision-making.

“Most recently, we launched the world’s first vertical large language model purpose-built for regulatory intelligence,” Reghimi adds. “Unlike general-purpose AI models, our vertical LLM is trained on regulatory data from global agencies and industry-specific sources.

“This targeted approach enables the model to provide more accurate, relevant and actionable guidance for navigating today’s fast-changing regulatory landscape – empowering regulatory teams to make informed decisions, stay ahead of change, and manage requirements with greater ease.”

Platform Enhancements

Agentic AI in NICE Actimize’s X-Sight ActOne platform “reduces investigation time by 50% or more,” according to an April press release.

“NICE Actimize’s agentic AI significantly enriches our X-Sight platform and our portfolio of solutions delivering exceptional value throughout the financial crime ecosystem,” said CEO Craig Costigan. “By harnessing advanced machine learning, NLP [natural language processing] and GenAI, X-Sight ActOne automates processes, engaging human oversight only when essential. This empowers financial institutions to scale operations and realize transformative cost and labor efficiencies.”

Agentic-Based Compliance, launched in May by Solidus Labs, “is the only way compliance teams can stay ahead of emerging complexities like 24/7 off-platform trading, enhanced retail participation and the heightened risks they carry for cyber-enhanced financial crimes and cross-product and cross-market manipulation,” said founder and CEO Asaf Meir. “Our vision is simple: Compliance operations should be as scalable, intelligent and efficient as the markets they’re designed to protect.”

Broadridge Financial Solutions’ agentically enhanced OpsGPT “delivers real-time operational intelligence and execution, enabling firms to better manage risk, capital, and drive greater operational efficiency,” Broadridge said in May. Quentin Limouzi, global head of post-trade, stated: “In response to shortened settlement cycles, escalating operational risks and increased cost of capital, firms need to invest in simplifying complex technology ecosystems and harmonize data to enable AI-powered automation.”

High-performance database company KX on June 11 announced “general availability of its first production-grade Agentic AI Blueprint: the AI Banker Agent. Built with NVIDIA AI and specifically tailored for sell-side global markets banks, this collaboration delivers the first-of-its-kind agentic AI designed to transform how banks operate in fast-moving trading environments.”

Need for Definition

The ideal vision for autonomous AI agents is the ability to execute assigned tasks consistently and reliably: acquiring and processing multimodal data, coordinating with other agents, and building on and learning from their experience.

“But right now, there is no clear, unified definition of what an AI agent actually is – and that’s both a strength and a weakness,” says Jim Rowan, Deloitte’s U.S. head of AI. “The flexibility gives organizations room to experiment and tailor agents to their specific needs. But it also creates misaligned expectations and makes it harder to measure impact or success.

Jim Rowan of Deloitte

“Without some internal alignment on what we mean by agentic AI, it’s tough to set benchmarks or ensure consistent outcomes. While the ambiguity fuels innovation, a shared understanding would go a long way in helping enterprises navigate this space more effectively.”

Deloitte’s Zora AI – “specialized AI agents for greater enterprise productivity and effectiveness” in a cloud subscription model – has been applied in such functions as financial statement analysis, scenario modeling, and competitive and market analysis.

Zora AI for Finance, Deloitte said in March, was to be used internally to streamline and automate expense management processes, “with targets to reduce costs by 25% and increase productivity by 40%.”

"We are entering the autonomous enterprise era where agents can transform work and business models, ushering in entirely new ways of working,” said Jason Girzadas, CEO of Deloitte US. “Our vision with Zora AI is to assist our clients in their transition into this new era, where agents and employees interact to reinvent business processes and unlock new sources of business value, growth and innovation for their organizations.”

Managing Risks and Security

In addition to automating processes and improving efficiency in a general sense, agentic AI can be a boon to risk management through continuous monitoring of data from multiple sources in real time. “Overall, agentic AI helps professionals manage risks with greater speed, accuracy and efficiency – strengthening an organization’s resilience and agility,” Deloitte’s Rowan comments.

The autonomous nature of agentic AI makes cyber risk awareness essential. “Organizations must implement robust AI security frameworks, including continuous monitoring, adversarial testing, and human-in-the-loop oversight,” asserts Reghimi of RegASK.

“AI drives 80% of ransomware attacks” is the headline finding of a 2025 MIT Sloan working paper with co-authors including Vidit Baxi and Sharavanan Raajah of cyber risk quantification and management leader SAFE Security.

As imparted in a SAFE blog on April 17, “Ready or not, AI has arrived in cybersecurity for both attackers and defenders in the form of AI agents (aka agentic AI or autonomous AI) capable of self-direction in seeking out and exploiting vulnerabilities, maneuvering around controls and even negotiating for ransom on the attack side, or identifying and containing attacks as they develop on the defense side.”

CFTC Commissioner Kristin Johnson

Also in April, SAFE launched what it called the first fully autonomous third-party risk management platform, “built on a system of specialized AI agents that automate the entire vendor risk lifecycle, from risk assessments and onboarding to continuous monitoring. This ‘agentic workflow’ delivers true zero-effort TPRM, enabling organizations to move faster, scale with confidence, and make smarter risk decisions – instantly and autonomously.”

“Distinct from GenAI”

Agentic AI caught the attention of the Commodity Futures Trading Commission’s Kristin N. Johnson. In a May 29 speech at the Federal Reserve Bank of Dallas, she said “agentic AI builds upon GenAI in every discernable way . . . by being distinct from GenAI in four ways: a focus on action and decision-making rather than creating synthetic data and content; removal of the necessity to continuously input prompts; an ability to act independently to carry out activities and tasks within its parameters; and, compared to GenAI whose programs are static once trained, the ability to continuously change and remain dynamic by adjusting to data and learning from its own mistakes.

“But with every great opportunity comes risk,” continued Johnson, a CFTC commissioner since 2022 and sponsor of its Market Risk Advisory Committee, who has announced that she will step down this year.

(Brian Quintenz, nominated this year to be CFTC chairman, was a commissioner from 2017 to 2021 and became head of policy for venture capital firm Andreessen Horowitz’s a16z Crypto. It led a seed investment in Catena Labs, which is building “agentic commerce” into its “regulated AI-native financial institution” model.)

“Outputs are only as good as inputs,” Johnson said, “meaning, if the training model data is biased, incomplete, or otherwise compromised, agentic AI outputs may be similarly inadequate.

“Perhaps more immediately concerning for regulators who are cops on the financial markets beat, as the potential for positive, efficient, market-enhancing use cases grow, so too does the potential for misuse of the same technology by bad actors,” the commissioner added.

While “agentic AI suffers some of the same vulnerabilities and risks to that of GenAI” – privacy, bias and fairness concerns, model inference attacks – “other risks that should be carefully considered as agentic AI models are integrated into our markets include the limitations of synthetic data, data leakages, data integrity, data security, data privacy, ethical concerns, the absence of a human in the loop, security vulnerabilities (hijacking or exploitation) and accountability, among others.”


Jeffrey Kutler of GARP contributed reporting for this article.