24 Comments
ToxSec's avatar

Feel free to ask Erich or me any questions!

Erich Winkler's avatar

Feel free to ask me or ToxSec any questions! We’ll be happy to answer them!

Spherent's avatar

Really cool, scary to think how much data could be leaving my company like this.

ToxSec's avatar

i’ve seen it happen at many companies too. up to 60% of companies don’t even have a policy established on it yet!

Erich Winkler's avatar

And another 30% of companies have an existing policy that is only a piece of paper that nobody understands…

Erich Winkler's avatar

Plenty of data, I can tell you that! This really isn’t easy to handle.

Mahdi Assan's avatar

Really great content here!

Erich Winkler's avatar

Thank you! We appreciate it!

Sam Illingworth's avatar

Brilliant collaboration, team! Do you think that training against shadow AI will go the way of phishing attacks, i.e., employees get box-ticking exercises on how to spot it so the institution can deflect blame?

ToxSec's avatar

would be a great idea. the number of people i’ve seen getting tickets for using competitor ai is silly… even with policy!

Erich Winkler's avatar

Thank you, Sam! We appreciate it!

Erich Winkler's avatar

I doubt it will work. As long as the shadow tool is more convenient than the "safe" tools the company offers to employees, people will always find a way…

Mohib Ur Rehman's avatar

2 of my favorite creators collabing!

Amazing work, guys.

ToxSec's avatar

thanks a ton Mohib!! 🔥

Erich Winkler's avatar

Thanks! We appreciate it!

The AI Architect's avatar

Strong framing on the context leak problem. Most Shadow IT discussions focus on data exfiltration, but the relational dimension gets overlooked. When someone pastes a customer email thread into ChatGPT to draft a reply, they're not just exposing names and addresses; they're exposing negotiation history, pain points, and internal escalation patterns. That cumulative context is way harder to audit or claw back than a static file download.
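
A quick illustration of why that's so hard to catch: here's a minimal sketch of a pattern-based pre-send scan (the regexes and the sample thread are hypothetical, not any real DLP product). The classic identifiers get flagged; the relational context sails straight through.

```python
import re

# Minimal sketch of a pattern-based pre-send scan (hypothetical rules,
# not a real DLP product). Classic identifiers are easy to match:
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scan_before_paste(text: str) -> list[str]:
    """Flag obvious PII before the text leaves the company."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

thread = """From: dana@acmecorp.example
We can't go above $40k this quarter; legal already escalated twice
and the renewal is at risk if support SLAs slip again."""

print(scan_before_paste(thread))  # ['email']
# The email address is caught, but the budget ceiling, the escalation
# history, and the churn risk (the relational context) match no pattern
# and leave completely unflagged.
```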

ToxSec's avatar

Erich really nailed it here!

Erich Winkler's avatar

Absolutely! That was the point I was trying to make here! :)

ToxSec's avatar

🔥🔥

Michael Janzen's avatar

Shadow AI may be one of the strongest arguments for augmenting employees with AI. Employees are discovering these tools naturally and the momentum is increasing, so instead of worrying about risk, take action and implement approved tools for people to use. The slight initial dip (implementation cost and training) of the J-curve is quickly offset by the boost in productivity and the reduced risk of shadow AI.

ToxSec's avatar

fully agree. imo, you give them the tools they need, with an ironclad ToS with your LLM SaaS provider, or host a solution on prem. you need to make the friction low, or you may as well hand your data to OpenAI.
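
and the on-prem route can keep friction close to zero. a minimal sketch, assuming a self-hosted server that speaks the OpenAI-compatible API (vLLM, Ollama, etc.); the internal URL and model name are placeholders for whatever you actually run:

```python
from openai import OpenAI

# Point the standard OpenAI client at a self-hosted, OpenAI-compatible
# endpoint. The URL and model name below are placeholders.
client = OpenAI(
    base_url="http://llm.internal.example:8000/v1",  # your on-prem gateway
    api_key="not-needed-on-prem",  # many local servers ignore the key
)

resp = client.chat.completions.create(
    model="your-approved-model",
    messages=[{"role": "user", "content": "Draft a reply to this customer thread..."}],
)
print(resp.choices[0].message.content)
```

because the wire format matches, employees keep the workflow they already like, and the data never leaves your network.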

Erich Winkler's avatar

That was exactly the point I was trying to make here! :)

Erich Winkler's avatar

I agree with the core point here. Shadow AI is one of the clearest signals that employees want AI augmentation and are already discovering value on their own.

That’s exactly why focusing only on risk misses the opportunity. When organizations provide approved, usable AI tools, they don’t just reduce Shadow AI; they align productivity and security instead of forcing a trade-off!