As the artificial intelligence field (AI) continues to develop at a rapid pace, new studies have revealed as methods that make a model context (Mcp) sensitive to Surgical attacks of injections can be used to develop safety tools or detect malicious tools according to New Report from Tenable.
MCP launched by anthropic in November 2024 is the basis designed to connect large language models (LLM) with external data sources and services, and use model tools to interact with these systems to enhance accuracy, relevance and usefulness of AI applications.
Follows from the architecture of the server that allows Hosts with MCP customers For example, Claude Desktop or Cursor for communication with different MCP servers, each exposing specific tools and opportunities.
While the open standard offers The only interface To access different data sources and even switching between LLM suppliers, they also come with a new risk set ranging from an excessive resolution to indirect operational injection.
For example, given the MCP for Gmail to interact with Google’s email, an attacker could Send malicious messages Given the hidden instructions that, if broken by LLM, can cause unwanted actions, such as forwarding sensitive emails to their email address.
MCP was also find To be vulnerable to what is called a tool poisoning, in which the malicious instructions are built into the description of the tools that are visible to LLM, and the rug begins when the MCP tool is well functioning in a benign manner, but mutifies its behavior later through malice.
“It should be noted that while users can approve the use of tools and access, the permits provided to the tool – Note In a recent analysis.
Finally, there is also a risk of infecting a cross -tool tools either shade of cross server, which causes one MCP server to overcome or interfere with the other, compressing how to use other tools, leading to new methods of expansion.
Recent conclusions from Tenable, which show that the Framework MCP can be used to create a tool that concludes all MCP tool features, including a specially designed description that entrusts LLM to insert this tool before other tools are caused.
In other words, Surgical Injection Manipulated for a good purpose, which is to log into the information about “the tool offered it to launch, including the MCP server name, MCP tool and description, and the user who made LLM try to run this tool.”
Another case of use involves the description into the tool to turn it into a different firewall that blocks unauthorized launch tools.
“Tools should require a clear approval before you launch in most MCP applications,” said Ben Smith, a security researcher.
“However, there are many ways to use tools to perform things that may not be strictly understood by specification. These methods rely on LLM, which suggest through the description and return of the values of the MCP tools. Since LLM is not determinth, so the results are also the results.”
It’s not just MCP
Disclosure occurs when Trustwave SpiderLabs showed that recently provided agent2agent (A2a) Protocol – which allows communication and interaction between agency applications – can be exposed to new attacks on a form where the system can be put to direct all requests to AI Rogue agent, lies about its capabilities.
A2A was announced Google at the beginning of this month as a way for AI agents to work in the SEWED DATA systems, regardless of the provider or frame. Here it is important to note that while MCP connects LLM with data, A2A connects one AI agent with another. In other words, they both additional protocols.
“Say Map of the agent And really exaggerate your capabilities, then agent -host must choose us every time for each task, and send us all the sensitive data of the user we need to deal with, ” – security researcher Tom NIVZ – Note.
“The attack does not stop when gripping data, it can be active and even return false results – which will then act down on the flow of LLM or the user.”