AI agents are increasingly integrated into crypto, powering wallets, trading bots, and onchain assistants that automate decisions in real time. These agents often rely on the emerging Model Context Protocol (MCP), which acts like a control layer managing how they behave—deciding which tools to use, what code to run, and how to respond to users.
While MCP adds flexibility and power, it also opens up critical security risks. Malicious plugins can override commands, poison data inputs, or trick AI agents into harmful actions. Security researchers have identified four main attack types involving MCP-based plugins: data poisoning, JSON injection, competitive function override, and cross-MCP call attacks. These vulnerabilities can lead to data leaks, system manipulation, or unauthorized control.
Unlike attacks on the AI models themselves (such as corrupting training data), these threats target the AI agents that operate in real time and interact with external plugins. This makes the risk higher because agents have active roles in handling funds and executing trades.
The crypto industry has seen rapid adoption of MCP-powered AI agents, with over 10,000 agents by the end of 2024 and projections exceeding one million in 2025. Yet, many developers are new to AI security, increasing the risk of overlooked vulnerabilities. Security experts warn that third-party plugins can extend attack surfaces beyond developers’ control, often running code without proper sandboxing, leading to privilege escalations or silent data leaks.
One audit revealed a vulnerability that could have leaked private keys—potentially catastrophic for users and projects alike. Experts emphasize that crypto developers must prioritize security upfront instead of postponing it to later updates. Implementing strict plugin verification, sanitizing inputs, enforcing least privilege access, and routinely reviewing AI agent behavior are critical steps.
As AI agents grow more embedded in crypto infrastructure, proactive security is vital. Without strong guardrails around MCP frameworks and plugin management, these AI assistants could turn into major attack vectors—jeopardizing wallets, funds, and sensitive data. Securing the AI layer is no longer optional; it is essential to safeguarding the future of crypto.