Wherever data is aggregated and analyzed for risk management and other decision support, models are omnipresent. Artificial intelligence only underscores their strategic and operational importance, as well as the controls and governance these tools require.
This begins to explain why the Snowflake AI data cloud, the Securitize asset tokenization platform, and London Stock Exchange Group’s LSEG Everywhere AI offering – among many others – have the Model Context Protocol in common.
The MCP is open-source and being implemented across the technology landscape as a bridge between AI capabilities and compliance frameworks, datasets and application programming interfaces (APIs). One explainer likens it to a USB-C port: “Just as USB-C provides a standardized way to connect electronic devices, MCP provides a standardized way to connect AI applications to external systems.” For example, “Enterprise chatbots can connect to multiple databases across an organization, empowering users to analyze data using chat.”
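Under the hood, that “USB-C port” is a standard message format: MCP runs over JSON-RPC 2.0, and a client discovers a server’s capabilities with a `tools/list` request before invoking one by name with `tools/call`. The sketch below shows the shape of those two messages; the `query_positions` tool and its arguments are hypothetical, invented for illustration.

```python
import json

# MCP messages are JSON-RPC 2.0. A client first asks the server which
# tools it exposes, then calls one by name with structured arguments.
# The "query_positions" tool and its arguments are hypothetical.

list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "query_positions",
        "arguments": {"desk": "rates", "as_of": "2025-01-31"},
    },
}

# Serialized, these are what travel (over stdio or HTTP) between the
# AI application acting as client and the MCP server.
print(json.dumps(call_request, indent=2))
```

Because every server speaks this same request shape, an AI application written against one MCP server can connect to any other without bespoke integration code.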
As LSEG stated in a joint announcement with its technology partner Microsoft, “Using a new MCP server, LSEG customers will be able to connect with data, licensed through LSEG products like Workspace and Financial Analytics, to build AI agents in Copilot Studio . . . that can be integrated directly into workflows through Microsoft 365 Copilot and other applications.
“Copilot Studio enables makers to build sophisticated agents with ease, using frontier AI models in a fully managed SaaS experience that supports a wide range of enterprise connectors. Copilot Studio empowers organizations with robust governance controls and is natively integrated into Microsoft 365 Copilot, enabling secure and compliant customization at scale.”
Source: modelcontextprotocol.io
Also in recent months, Securitize introduced its MCP Server as a “gateway to tokenized asset data”; and Snowflake announced managed MCP servers enabling connections with agentic applications from providers such as Anthropic, CrewAI and Cursor.
MCP security startup Runlayer – a founding member with Anthropic, OpenAI and others of the Agentic Artificial Intelligence Foundation – obtained $11 million in seed funding. And in Australia, Carrington Labs launched what it called the “first MCP server bringing compliant credit models into AI lending workflows.”
Asset management systems provider Finbourne Technology leveraged MCP and Anthropic’s Claude chatbot to enable “secure, permission-aware AI agents to access live investment data, automate workflows, and take real-time action across complex financial operations – all within the boundaries of enterprise-grade control, compliance, and auditability.”
“MCP is driving a groundbreaking shift in how large language models interact with enterprise tools,” commented Finbourne CEO and co-founder Tom McHugh. “This transformation unlocks enormous potential for secure human-AI collaboration, where agentic AI respects and operates within your existing controls, just like a trusted member of your team.”
Kaushik Shanadi of Helmet Security
MCP has appeal in finance, healthcare and other regulated sectors to support model auditability, explainability and security controls, according to Kaushik Shanadi, co-founder and chief technology officer of agentic AI security firm Helmet Security. The protocol is designed to aid discovery of available tools, understanding their functions, and sending or receiving structured requests and outputs.
“This enables ‘tool use’ capabilities,” he explains, “allowing AI to trigger workflows, retrieve information, or perform tasks through a unified interface.” Existing real-world examples include “a GitHub MCP server that lets AI interact with repositories, or a Blender MCP server that allows AI models like Claude to work inside 3D environments.”
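The discovery-then-invocation pattern Shanadi describes can be mimicked in a few lines. Below is a toy registry, not the official SDK: real MCP servers (such as those built with the Python SDK’s FastMCP) generate JSON Schemas from type hints, while this stub simply records each function’s signature. The `get_repo_issues` tool and its stub data are illustrative.

```python
import inspect

# Toy registry mimicking how an MCP server exposes tools for
# discovery ("what can I do?") and invocation ("do it").
TOOLS = {}

def tool(fn):
    """Register a function as a discoverable tool."""
    TOOLS[fn.__name__] = {
        "description": fn.__doc__,
        "signature": str(inspect.signature(fn)),
        "fn": fn,
    }
    return fn

@tool
def get_repo_issues(repo: str, state: str = "open") -> list:
    """List issues for a repository (stub data for illustration)."""
    return [{"repo": repo, "id": 1, "state": state}]

def list_tools():
    # What an AI client sees when it asks the server what it offers.
    return [
        {"name": n, "description": t["description"], "signature": t["signature"]}
        for n, t in TOOLS.items()
    ]

def call_tool(name, **kwargs):
    # Structured invocation by name, as in MCP's tools/call.
    return TOOLS[name]["fn"](**kwargs)
```

A model never imports the function directly; it only sees the advertised name, description and signature, which is what makes the interface unified across tools.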
For Wei Chen, director of global risk consulting at SAS, MCP is best understood through the lens of agentic AI and how autonomous, learning, decision-making agents are transforming risk functions.
MCP is not just a technical adapter – it is a governance boundary, as Steve Mansfield, founder and chief architect of Exocortical Concepts, sees it: it lets organizations authoritatively define a model’s context.
“MCP provides structure and transparency in an area that would certainly seem to need both,” Mansfield says. “It is designed to make model capabilities explicit, tools discoverable, and interactions auditable.
Wei Chen of SAS
“Risk teams should gain clearer insight into how AI systems behave, what data they rely on, and what actions they are permitted to take,” Mansfield continues. “MCP should reduce uncertainty, which is one of the biggest barriers to responsible AI deployment in regulated industries, but really anywhere.”
Andrew Gamino-Cheong, co-founder and CTO of governance software company Trustible, suggests looking at MCP as an “API for AI,” in that it allows a system to call in different tools. A Google Calendar MCP server, for example, would give an AI agent a secure entry point for adding an event and for connecting to other AI systems.
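The “API for AI” framing becomes concrete in an MCP tool definition, which pairs a name with a JSON Schema (`inputSchema` in the protocol) describing the inputs any model must supply. The calendar tool below is hypothetical, and the `validate` helper is a minimal required-field check standing in for full JSON Schema validation.

```python
# An MCP tool definition: a name, a description, and a JSON Schema
# for its inputs, so any model can discover and call it correctly.
# The calendar tool and its fields are hypothetical.
add_event_tool = {
    "name": "add_calendar_event",
    "description": "Create an event on the user's calendar.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "start": {"type": "string", "description": "ISO 8601 datetime"},
            "duration_minutes": {"type": "integer", "minimum": 1},
        },
        "required": ["title", "start"],
    },
}

def validate(args, schema):
    """Minimal required-field check (real servers use full JSON Schema)."""
    missing = [k for k in schema["required"] if k not in args]
    if missing:
        raise ValueError(f"missing required arguments: {missing}")
    return True

# A well-formed call passes; one missing "start" would raise ValueError.
validate({"title": "Risk review", "start": "2025-06-01T10:00:00"},
         add_event_tool["inputSchema"])
```

The schema is what keeps the interface auditable: every argument a model can pass is declared up front, rather than inferred from free text.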
“MCP provides flexibility by using the organization’s large language model (LLM) of choice instead of a specific LLM used by a tool vendor,” SAS’s Chen says. “Of course, the tools that can be used by MCP must provide effective APIs that can be configured to work with the LLM independently of any vendor. Given its tool-agnostic flexibility, MCP enables different analytical, data and technology tools to work together and interact with risk management experts.”
Implementation challenges, according to Nik Kale, a Cisco Systems principal engineer, include context overload (exposing too many tools can cause unpredictable model behavior), overly permissive servers (exposing capabilities beyond what is necessary), and audit gaps because MCP does not automatically feed the logs tracked by auditing systems.
A “false sense of safety,” Kale cautions, can lead to lax security reviews; MCP servers should be treated with the same rigor as API gateways.
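Treating an MCP server like an API gateway means denying by default, scoping each caller to an explicit allowlist of tools, and logging every decision so the audit gap Kale describes is closed. The sketch below illustrates that pattern; the caller roles and tool names are invented for illustration.

```python
import logging

# Gateway-style controls for an MCP server: per-caller allowlists,
# denial by default, and an audit log of every authorization decision.
# Caller roles and tool names are illustrative assumptions.
logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("mcp.audit")

ALLOWED_TOOLS = {
    "risk-analyst-agent": {"query_positions", "run_var_report"},
    "support-chatbot": {"lookup_ticket"},
}

def authorize(caller: str, tool: str) -> bool:
    """Deny by default; record every decision for the audit trail."""
    allowed = tool in ALLOWED_TOOLS.get(caller, set())
    audit.info("caller=%s tool=%s allowed=%s", caller, tool, allowed)
    return allowed

# A scoped agent can use its own tools but nothing beyond them.
assert authorize("risk-analyst-agent", "query_positions")
assert not authorize("support-chatbot", "run_var_report")
```

Keeping the allowlist narrow also addresses the context-overload problem: a model that sees only the tools it needs has fewer ways to pick the wrong one.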
Dominik Tomicevic of Memgraph
In view of the need for AI to interact with customer relationship management (CRM), ordering systems and other, often siloed data stores, “anything beyond simple retrieval gets harder,” observes Dominik Tomicevic, founder and CEO of Memgraph. “MCP offers a standard way for an LLM to query different data sources, but the LLM doesn’t understand your enterprise data, how you operate, your schema, how things are linked, or the implicit knowledge that isn’t documented.”
When an LLM accesses multiple external tools via the protocol, Tomicevic goes on, “there is a significant risk that it may choose the wrong tool, misuse the right tool, or become confused and produce nonsensical or irrelevant outputs – commonly referred to as hallucinations. Like the people in your organization, LLMs are vulnerable to choice overload and context ambiguity; more access to tools for both humans and machines can lead to more mistakes, not necessarily more good answers.”
Adoption doesn’t need to be large-scale from the start, says Helmet Security’s Shanadi. Many organizations begin by wrapping a single tool in MCP, testing the workflows, and gradually expanding as confidence and governance mature.