Darknet Chatbots in Action: Jailbroken AI Demo over Tor
Frontier model with every safety rail stripped serves synthesis recipes, phishing kits and zero-day chains through a clean Whonix-Tor stack — corporate…
15 hrs ago • ToxSec
One Magic String from Anthropic Silences Claude (RAG DoS Exposed)
A documented QA test string becomes a sticky DoS primitive through prompt injection, RAG poisoning, and context persistence
Feb 24 • ToxSec
Dark LLMs, Voice Clones, and Agentic Browsers
Darknet jailbroken chatbots are serving uncensored frontier models over Tor, while voice clone scams have just crossed the indistinguishable threshold.
Feb 21 • ToxSec
Watch Me Poison Your MCP
How MCP tool poisoning hijacks agent inference through description metadata, and how conversation-formatted JSON spoofs safety training.
Feb 18 • ToxSec
When Your Notepad App Gets a CVE: AI Security Is Everybody’s Problem Now
Episode 2 recap — ToxSec x Exploring ChatGPT live stream
Feb 15 • ToxSec
AI & Cybersecurity
A recording of the ToxSec x Exploring ChatGPT live video
Feb 11 • ToxSec and Exploring ChatGPT
F*ck Your Guardrails: Live Fire Prompt Injection
Four attack chains covering system prompt theft, remote code execution, SSRF through agent tools, and weapons-content bypass. Step by step with the exact…
Feb 9 • ToxSec
Molt Road and the Rise of AI Agent Black Markets
How autonomous agents got their own darknet marketplace before the social network’s database was even secured--plus weaponized skills, stolen…
Feb 5 • ToxSec
OpenClaw and Moltbook: The Viral AI Agent and Security Nightmare 🦀
How self-hosted AI assistants with shell access, plaintext credentials, and persistent memory created the lethal trifecta--plus the bots built their own…
Feb 2 • ToxSec
PSA: OpenClaw Is Wildly Insecure
How open-source AI agents expose API keys, enable RCE via prompt injection, and why your “local” butler is probably internet-facing right now
Jan 29 • ToxSec
How DAN and Roleplay Prompts Bypass LLM Guardrails
How DAN prompts, roleplay exploits, and multi-turn manipulation bypass AI guardrails through instruction-data conflation, and why patching this is…
Jan 26 • ToxSec
Shadow AI Is the New Shadow IT - Only Much Worse [Special Guest Post]
For years, security teams fought Shadow IT: employees installing tools without approval, data flowing outside visibility.
Jan 20 • ToxSec and Erich Winkler
ToxSec - AI and Cybersecurity
Security for a world run by machines that lie.