AI Bullshit Gets A Wallet
We’ve automated trust before we automated security, and the bill is coming due.
TL;DR: AI agents now have hands (MCP), a social network (A2A), and a wallet (AP2). This autonomous stack is being built without fundamental security. A single poisoned “business card” could hijack your agent and drain your account in milliseconds. We’re building an economy on unverified trust, and no one’s talking about the chained vulnerabilities that make this a ticking bomb.
We just gave AI the keys to the mint, and the locks are made of tissue paper.
Why Does Your New AI Assistant Suddenly Feel So Dangerous?
For the past year, AI was passive. Helpful, sure. It could draft an email or summarize a report. But it couldn’t do anything. It was a really smart intern who needed you to click “send.”
That era ended.
We’re in the middle of a fundamental shift from passive to active AI, and it’s happening fast enough to give you whiplash. You’re not asking for summaries anymore. You’re saying, “Book me a flight when the price drops,” or “Monitor this supplier and execute a new purchase order if they miss a deadline.” You’re delegating actions with financial consequences to a system that, six months ago, couldn’t reliably count the number of Rs in “strawberry.”
This isn’t some distant sci-fi future. Google, American Express, Mastercard, and PayPal are among 60+ companies backing the Agent Payments Protocol (AP2). This protocol exists to do one thing: let AI agents spend your money on your behalf. It officially shatters the old security assumption that a human is always clicking the “buy” button.
And it’s being built directly on top of a stack of other protocols that are dangerously insecure.
How Did We Get From Chatbots to AI Agents With Credit Cards?
This new economy didn’t appear overnight. It was assembled in three distinct layers, each one adding incredible capability and each one adding a terrifying new attack surface.
First, agents got “hands” with the Model Context Protocol (MCP). This standard, pioneered by companies like Anthropic, lets an agent connect to and use your external tools and APIs. Your calendar. Your email. Your internal databases.
Second, agents got a “social network” with the Agent-to-Agent Protocol (A2A). This is a common language that lets agents discover, communicate, and hire each other to collaborate on complex tasks. Your agent can now outsource work to stranger-agents you’ve never heard of.
Third, and this is where it gets scary, agents just got a “wallet” with the Agent Payments Protocol (AP2). This is the transactional layer that allows your agent, or a web of agents it hired via A2A, to finalize a task by spending real money.
This stack enables incredible automation. It also creates what security researchers call “chained vulnerabilities.” One weak link, exploited by an automated attacker, can cascade into financial disaster. And right now, the weak links are everywhere.
If you’re in security or fintech, share this with your team. This protocol stack is the most important - and most overlooked - vulnerability story of 2025.
What Is the “Chained Vulnerability” Nightmare We’re Ignoring?
Everyone’s worried about a hypothetical, god-like AGI. They should be terrified of a simple, confused agent that’s been tricked.
The real risk is the classic “confused deputy” problem, but now at machine speed and scale. Security researchers are already finding massive holes in these protocols. The A2A protocol allows agents to share “AgentCards,” think digital business cards. Researchers have identified “Agent Card Context Poisoning,” a vulnerability where an attacker can embed a malicious prompt injection payload inside their card.
Here’s the nightmare scenario, step by step:
Your agent (Agent A) searches for a service - let’s say invoice processing. It finds a malicious agent (Agent B) advertising exactly that service. Agent A reads Agent B’s “business card” to understand what it offers. The malicious prompt hidden in that card hijacks Agent A, turning it into a “confused deputy.” The attacker now instructs your agent to access your private tools via MCP, and then, using your authority, make an unauthorized payment via AP2.
You never saw it happen. It was all automated, agent-to-agent, in milliseconds. By the time you check your account, the money’s gone and the trail is cold.
The protocols are being built so fast for function that security is an afterthought. The official A2A protocol specification doesn’t even define how authorization must be performed. This is a massive, critical gap - the equivalent of building a bank vault and leaving “install a lock” as a to-do for later.
Subscribe before your agent gets social-engineered. I’m tracking these vulnerabilities as they emerge, and the attack surface is growing faster than the defenses.
How Do We Stop This Before It Starts?
We can’t treat this new economy with the same naive trust we applied to chatbots. We can’t “trust the stack.” The only path forward is to return to the most fundamental, and frankly boring, principles of security.
First, we need Zero Trust and the Principle of Least Privilege. The fact that the A2A spec leaves authorization undefined is unacceptable. An agent should never have standing permissions. It should only be granted the absolute minimum access required for a single task, for the briefest possible moment. Trust nothing. Verify everything.
Second, we need aggressive sandboxing and input sanitization. The biggest risks come from agents ingesting malicious data from external tools or other agents. We must treat all A2A “AgentCards” and all MCP tool descriptions as inherently hostile and untrusted. Every piece of data must be sanitized and validated before it’s allowed anywhere near our internal systems.
The Agent Economy is coming. But without a foundation of classic, robust security, it won’t be an economy of efficiency. It’ll be an automated playground for attackers - a slot machine where every pull costs you money and the house always wins.
Drop your best agent security rule in the comments. What’s the one principle you think will be most critical for surviving this economy?
This weeks shutout!
Thanks for being so engaged!
Frequently Asked Questions
Q: What is the AI Agent Economy? A: It’s a new digital economy where autonomous AI agents perform complex tasks, collaborate with other agents (A2A), use external tools (MCP), and transact money (AP2) on behalf of users - often with minimal human oversight. It’s automation that doesn’t wait for permission.
Q: What’s the difference between A2A, MCP, and AP2 protocols? A: They’re three layers of the agent stack. MCP (Model Context Protocol) connects an agent to tools and APIs - its “hands.” A2A (Agent-to-Agent Protocol) allows agents to communicate and collaborate - its “social network.” AP2 (Agent Payments Protocol) is the financial layer that lets agents spend money - its “wallet.”
Q: What’s the biggest security risk of AI agents? A: Chained vulnerabilities. A low-level attack - like a poisoned A2A AgentCard - can cascade through a series of trusted, autonomous agents to perform a high-impact attack like financial theft via AP2, all without human detection. The automation that makes this economy efficient is the same automation that makes it catastrophically vulnerable.








Well this is terrifying...
and you know, you already know, we're going to go the opposite direction of this warning, as fast as possible, and we'll even look for weird new ways to do it