On March 9, 2026, Consensys filed a comment letter with the National Institute of Standards and Technology (NIST) in response to its Request for Information on "Securing AI Agent Systems." NIST asked foundational questions: What makes agent security different from model security? How are the risks evolving? What controls actually work? And how should systems be evaluated?

Our answer, shaped by MetaMask AI Lead Marco De Rossi's deep expertise in this field, is simple: agents are software with delegated authority. The key risk is abuse of that authority.

Why AI agent delegation needs to be addressed

Conversations around AI security have largely focused on model-layer risks like prompt injection, hallucination, and data poisoning. But once agents operate on open financial networks, the stakes change.

An agent that can sign transactions, move funds, or rebalance a portfolio can exercise legitimate permissions in ways its principal never intended. It doesn't need to be jailbroken. It just needs to be wrong, or subtly manipulated, while holding real authority.

That raises a set of harder design questions: How do we constrain what an agent can do? How do we verify what it did? And when something goes wrong, who is accountable?

On open networks, these risks compound quickly: fabricated identities, manipulated reputation signals, chained exploits across composable protocols, financial abuse at machine speed. A single compromised agent with broad delegation authority could trigger cascading failures in seconds.

The case for building AI agent risk infrastructure

Our letter highlights that agent security on open networks requires purpose-built infrastructure: architectural decisions made up front, not post-hoc patches. Drawing on our experience building AI infrastructure within MetaMask, we organized our response around four core problems: agent identity, bounded permissions, execution-layer verification, and payment integrity.

Critically, we urged NIST to distinguish between agents with unrestricted key custody and agents operating through revocable, policy-bounded delegations. These carry fundamentally different risk profiles, and guidance should reflect that.
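To make that distinction concrete, here is a minimal illustrative sketch (all names and types are hypothetical, not an actual MetaMask or Delegation Toolkit API): an agent with unrestricted key custody can sign anything, while a policy-bounded delegation is checked against scope, a spend cap, and a revocation flag before any transaction executes.

```typescript
// Hypothetical sketch contrasting unrestricted key custody with a
// revocable, policy-bounded delegation. Names are illustrative only.

interface Tx {
  to: string;       // target contract or account
  valueWei: bigint; // amount transferred
}

interface Delegation {
  allowedTargets: Set<string>; // scope: targets the agent may touch
  spendCapWei: bigint;         // cumulative spend limit
  spentWei: bigint;            // running total of spend so far
  revoked: boolean;            // principal can revoke at any time
}

// With raw key custody there is nothing to check: every tx is allowed.
function unrestrictedCanExecute(_tx: Tx): boolean {
  return true;
}

// A policy-bounded delegation validates every tx before it executes.
function delegatedCanExecute(d: Delegation, tx: Tx): boolean {
  if (d.revoked) return false;                            // revocation
  if (!d.allowedTargets.has(tx.to)) return false;         // scope
  if (d.spentWei + tx.valueWei > d.spendCapWei) return false; // spend cap
  return true;
}

// Example: an agent limited to one router contract and a 1 ETH cap.
const delegation: Delegation = {
  allowedTargets: new Set(["0xDexRouter"]),
  spendCapWei: 10n ** 18n, // 1 ETH in wei
  spentWei: 0n,
  revoked: false,
};

const inScope: Tx = { to: "0xDexRouter", valueWei: 10n ** 17n }; // 0.1 ETH
const outOfScope: Tx = { to: "0xUnknownBridge", valueWei: 10n ** 17n };

console.log(delegatedCanExecute(delegation, inScope));    // true
console.log(delegatedCanExecute(delegation, outOfScope)); // false
```

The point of the sketch is the asymmetry: with raw custody, a wrong or manipulated agent fails open; with a bounded delegation, it fails closed, and the principal can revoke the grant at any time.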

Standards such as shared trust infrastructure (ERC-8004), scoped and revocable wallet delegations (MetaMask Smart Accounts and the Delegation Toolkit), execution-layer safeguards (transaction simulation and policy validation), and cryptographically explicit commerce protocols (x402, AP2, ACP) provide the building blocks. And they are open and composable by default.

That openness is central to our argument. If federal standards for agent security are written without input from open-source and decentralized technology communities, the practical result will be compliance regimes that only large, vertically integrated platforms can satisfy. That would create a monoculture of safety implementations that is inherently less resilient than a diversity of interoperable ones.

Trust infrastructure for the agent economy should be shared, not captured.

Our comment to NIST

We are asking NIST to recognize that open, interoperable standards should be the foundation of any agent security framework it develops. The building blocks exist. NIST's role should be to elevate and integrate them rather than defaulting to models that only work inside closed ecosystems.

The AI and crypto communities have a shared interest in this outcome. We encourage others to engage while NIST is still listening.

Read our letter to NIST in full.