How prompt injection, data poisoning, and insecure output handling turn your AI deployment into an attacker’s playground, with code samples and exploitation techniques for each vulnerability class
This is exactly the kind of clarity we need. So much of the conversation around LLM security gets stuck in theoretical risks, but breaking down exactly how each OWASP Top 10 vulnerability manifests in production code makes the threat model tangible. The point about prompt injection being unpatchable because it's architectural rather than a bug needs to be shouted from the rooftops. Too many teams keep looking for a one-line fix that doesn't exist. There is no real answer, just layered mitigations like input validation, output sanitization, and least-privilege design. The section on excessive agency is what keeps me up at night. We're rushing to give AI systems root access to our infrastructure before we've solved the fundamental problem. I don't think people are aware of just how fundamentally stupid LLMs are. They'll obediently execute whatever instructions land in their context window, regardless of source. This is a serious issue, and I appreciate you bringing it to the forefront where it belongs.
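To make the "layered mitigations" point concrete, here's a minimal sketch in Python. All of the names (`validate_input`, `sanitize_output`, `authorize_tool_call`, the pattern list, the tool allowlist) are hypothetical illustrations, not a real defense: each layer is trivially bypassable on its own, which is exactly why they're stacked.

```python
import html
import re

# Layer 1 (assumption: a best-effort denylist): flag obvious injection phrasing.
# This is easily bypassed, which is why it is never the only control.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (the )?system prompt",
]

# Layer 3 (least privilege): the model may only invoke allowlisted tools.
ALLOWED_TOOLS = {"search_docs", "get_weather"}

def validate_input(user_text: str) -> bool:
    """Return False if the input matches known injection phrasing."""
    lowered = user_text.lower()
    return not any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

def sanitize_output(model_text: str) -> str:
    """Layer 2: treat model output as untrusted before rendering as HTML."""
    return html.escape(model_text)

def authorize_tool_call(tool_name: str) -> bool:
    """Deny any tool call that is not explicitly allowlisted."""
    return tool_name in ALLOWED_TOOLS

print(validate_input("Please ignore previous instructions"))  # False: flagged
print(sanitize_output("<script>alert(1)</script>"))           # markup escaped
print(authorize_tool_call("delete_database"))                 # False: not allowlisted
```

None of these layers "patches" prompt injection; they just shrink the blast radius when an injection inevitably gets through.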
Really appreciate this. The OWASP Top 10 is really, really important. And yes, fundamentally these models have very narrow intelligence. We use the word 'reasoning' for these models, and it's absolutely not that.
Thanks, I see a new technical security standard about to be born😎
I think so. If you look at MCP, it has essentially zero security. It’s great, but they rushed it out the door with security as an afterthought. We will need to play catch-up.
What are your thoughts on TEEs (trusted execution environments) in terms of security? A de facto sandbox, or nah?
TEEs solve a real problem: running sensitive code on untrusted hardware. But they're oversold, IMO. Side-channel attacks plague them, you're trusting chip vendors blindly, and they're terrible as 'general' sandboxes. Useful for specific cases, but overhyped. I'd love to be wrong tho.
Agreed. The side-channel risk is real, and the use-case window is very narrow. Agentic AI projects on the blockchain are using TEEs.
It's better than nothing I suppose hah.