Enterprise adoption of MCP outpaces security controls

AI agents now bring more access and more connections to enterprise systems than any other software in the environment. That makes them a larger attack surface than anything security teams have managed before, and the industry doesn't yet have a framework for it. "If this attack vector is exploited, it can lead to data corruption or even worse," said Spiros Xanthos, founder and CEO of Resolve AI, speaking at a recent VentureBeat AI Impact Series event.

Traditional security frameworks are built around human interactions; there is still no agreed-upon construct for AI agents that have personas and can operate autonomously, John Aniano, senior vice president of product and CRM applications at Zendesk, noted at the same event. Agentic AI is moving faster than enterprises can build guardrails around it, according to Aniano and other enterprise executives, and the Model Context Protocol (MCP), while reducing integration complexity, doesn't help. "It's an unsolved problem right now because it's the wild, wild West," Aniano said. "We don't even have a defined technical protocol between agents that all companies agree on. How do you balance user expectations against what keeps your platform safe?"

MCP still "exceptionally permissive"

Enterprises are increasingly connecting to MCP servers because they simplify integration between agents, tools, and data. But MCP servers tend to be "extremely permissive," Xanthos said; they're "actually probably worse than APIs," he argued, because APIs at least offer more controls that can be imposed on agents. Today's agents act on behalf of humans based on explicit permissions, which keeps responsibility with a human. "But in the future you may have dozens, hundreds of agents with their own identity, their own access," said Xanthos. "It becomes a very complex matrix."
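The "complex matrix" Xanthos describes can be pictured as a deny-by-default mapping between agent identities and the scopes each tool requires. Below is a minimal, hypothetical sketch in Python; the agent names, scope strings, and gateway class are illustrative assumptions, not part of MCP or any vendor's product.

```python
from dataclasses import dataclass

# Hypothetical agent identities and scopes, for illustration only.
@dataclass(frozen=True)
class AgentIdentity:
    name: str
    scopes: frozenset  # e.g. {"crm.read", "tickets.write"}

class ToolGateway:
    """Deny-by-default check between an agent identity and a tool it wants to call."""

    def __init__(self):
        self._required: dict[str, set[str]] = {}  # tool name -> scopes the tool requires

    def register_tool(self, tool: str, required_scopes: set[str]) -> None:
        self._required[tool] = required_scopes

    def authorize(self, agent: AgentIdentity, tool: str) -> bool:
        required = self._required.get(tool)
        if required is None:
            return False  # unknown tools are never callable
        return required.issubset(agent.scopes)

gateway = ToolGateway()
gateway.register_tool("crm.lookup_customer", {"crm.read"})
gateway.register_tool("crm.delete_customer", {"crm.read", "crm.admin"})

support_agent = AgentIdentity("support-agent", frozenset({"crm.read"}))
print(gateway.authorize(support_agent, "crm.lookup_customer"))  # True
print(gateway.authorize(support_agent, "crm.delete_customer"))  # False: missing crm.admin
```

The deny-by-default stance (unknown tools are never callable) is the key design choice: it mirrors the controls APIs already give operators, but applies them per agent identity rather than per human user.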

Although his startup is developing autonomous AI agents for site reliability engineering (SRE) and system management, he admitted that the industry "completely lacks the framework" for autonomous agents. "It's entirely up to us and whoever creates the agents to figure out what limits to give them," he said. And customers need to be able to trust these solutions. Some existing security tools do offer granular access — Splunk, for example, has developed a way to grant access to specific indexes in major data stores, he noted — but most are broader and oriented toward people. "We're trying to figure this out with existing tools," he said. "But I don't think they are enough for the age of agents."

Who is responsible when AI incorrectly authenticates a user?

At Zendesk and other customer relationship management (CRM) platform providers, AI is involved in a large number of user interactions, Aniano noted — in fact, it's now at a "volume and scale that we haven't considered as a business and as a society."

It can get tricky when AI assists human agents; the audit trail can become a maze. "So now you have a human talking to a human talking to an AI," Aniano noted. "The human tells the AI to take action. Who is to blame if the action is wrong?" This gets even more complicated when there are "multiple pieces of AI and multiple people" in the mix.

To keep agents from going off the rails, Zendesk tends to be "very strict" about access and scope; customers, however, can define their own security fences based on their needs. In most cases, AIs have access to knowledge sources, but they don't write code or execute commands on servers, Aniano said. If the AI calls an API, the call is "declaratively designed" and sanctioned, and the actions are called out specifically. However, customer demand is pushing into those scenarios, and "we're kind of holding the doors down right now," he said.

The industry needs to develop specific standards for interacting with agents. "We're entering a world where with things like MCPs that can automatically detect tools, we're going to have to create new safety methods to decide what tools these bots can interact with," Aniano said.

When it comes to security, enterprises are legitimately concerned when AI takes over authentication tasks, such as sending and processing one-time passwords (OTPs), SMS codes or other two-step verification methods, he said. What happens if AI incorrectly authenticates or identifies someone? That can lead to the leakage of sensitive data or open a door to attackers. "Now there's a spectrum, and the end of that spectrum today is human," Aniano said. However, "the end of that spectrum tomorrow could be a specialized agent designed to do the same kind of human-level sensing or interaction."

Customers themselves sit on a spectrum of acceptance and comfort. In some companies — especially financial services or other highly regulated environments — people still need to participate in authentication, Aniano noted; in other cases, legacy companies and the old guard only trust people to authenticate other people. Zendesk, he said, is experimenting with new AI agents that are "a little more connected to the systems," working with a select group of customers on the guardrails.
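One way to read Aniano's description of "declaratively designed" calls and sanctioned actions is as an explicit allowlist that filters whatever tools are auto-discovered, with authentication-style actions held behind a human reviewer. The sketch below is an assumption: the tool names, the discover_tools stub, and the needs_human flag are hypothetical and do not describe Zendesk's implementation.

```python
# Illustrative sketch of gating auto-discovered tools behind an explicit allowlist.
APPROVED_TOOLS = {
    "kb.search":     {"writes": False, "needs_human": False},
    "ticket.update": {"writes": True,  "needs_human": False},
    "auth.send_otp": {"writes": True,  "needs_human": True},  # authentication stays human-reviewed
}

def discover_tools() -> list[str]:
    # Stand-in for dynamic tool discovery; a real server would return its own list.
    return ["kb.search", "ticket.update", "auth.send_otp", "shell.exec"]

def expose_tools(human_available: bool) -> list[str]:
    exposed = []
    for name in discover_tools():
        policy = APPROVED_TOOLS.get(name)
        if policy is None:
            continue  # never expose tools that were not explicitly sanctioned
        if policy["needs_human"] and not human_available:
            continue  # sensitive actions only run when a reviewer is in the loop
        exposed.append(name)
    return exposed

print(expose_tools(human_available=False))  # ['kb.search', 'ticket.update']
print(expose_tools(human_available=True))   # adds 'auth.send_otp'; 'shell.exec' is never exposed
```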

Permanent permissions are coming

At some point, agents may actually be trusted more than humans to perform certain tasks, and be given permissions "far beyond" what humans have today, Xanthos said. But we are far from that, and for the most part, the fear of something going wrong is what holds businesses back. "Which is a good fear, right? I'm not saying it's a bad thing," he said. Many businesses are simply not yet comfortable with an agent performing every step of a workflow or closing the loop entirely on its own; they still want a human review.

Resolve AI is on the verge of giving agents permanent permission in a few "generally safe" cases, such as coding; from there, it will move on to more open-ended scenarios that aren't as risky, Xanthos explained. But he acknowledged that there will always be many risky situations where AI mistakes can "mutate the state of the production system," as he put it. Ultimately, though: "There's no going back, obviously; this is moving faster than maybe even mobile devices. So the question is, what do we do about it?"

What security teams can do now

Both speakers pointed to interim measures available with existing tools. Xanthos noted that some tools — Splunk among them — already offer fine-grained, index-level access controls that can be applied to agents. Aniano described Zendesk's approach as a practical starting point: declaratively designed API calls with explicitly sanctioned actions, strict access and scope restrictions, and human review before extending agent permissions.

The basic principle, as Aniano said: "We’re always checking those gates and seeing how we can widen the opening" — meaning don’t grant permanent permission until you’ve verified each extension.
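That "check the gates, then widen the opening" stance can be expressed as a simple ratchet: each action stays behind human review until it has accumulated enough verified runs, and any rejected outcome closes the gate again. The threshold, action names, and class below are assumptions for illustration, not a documented Zendesk or Resolve AI mechanism.

```python
from collections import defaultdict

REVIEWS_BEFORE_AUTONOMY = 20  # assumed threshold; tune per action and risk level

class PermissionRatchet:
    """An action runs under human review until it has enough verified successes."""

    def __init__(self):
        self._verified_runs = defaultdict(int)

    def requires_review(self, action: str) -> bool:
        return self._verified_runs[action] < REVIEWS_BEFORE_AUTONOMY

    def record_review(self, action: str, approved: bool) -> None:
        if approved:
            self._verified_runs[action] += 1
        else:
            self._verified_runs[action] = 0  # a bad outcome closes the gate again

ratchet = PermissionRatchet()
print(ratchet.requires_review("ticket.close"))  # True until 20 approved runs
for _ in range(REVIEWS_BEFORE_AUTONOMY):
    ratchet.record_review("ticket.close", approved=True)
print(ratchet.requires_review("ticket.close"))  # False: the gate has been widened
```

A real deployment would key the counter per customer and per risk tier, but the shape is the same: autonomy is earned incrementally and revoked quickly.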
