Feel free to ask Erich or myself any questions!
Feel free to ask me or Toxsex any questions! We’ll be happy to answer them!
Really cool, scary to think how much data could be leaving my company like this.
i’ve seen it happen at many companies too. up to 60% of companies don’t even have a policy established for it yet!
And another 30% of companies have an existing policy that is only a piece of paper that nobody understands.
Plenty of data, I can tell you that! This really isn’t easy to handle.
Really great content here!
Thank you! We appreciate it!
Brilliant collaboration, team! Do you think that training against shadow AI will go the way of phishing awareness, i.e., employees get box-ticking exercises on how to spot it so the institution can relinquish blame?
would be a great idea. the number of people i’ve seen getting tickets for using competitor ai is silly… even with policy!
Thank you, Sam! We appreciate it!
I doubt it will work. As long as the tool is more convenient than the "safe" tools that the company offers its employees, people will always find a way.
2 of my favorite creators collabing!
Amazing work, guys.
thanks a ton Mohib!! 🔥
Thanks! We appreciate it!
Strong framing on the context leak problem. Most Shadow IT discussions focus on data exfiltration, but the relational dimension gets overlooked. When someone pastes a customer email thread into ChatGPT to draft a reply, they're not just exposing names and addresses; they're exposing negotiation history, pain points, internal escalation patterns. That cumulative context is far harder to audit or claw back than a static file download.
Erich really nailed it here!
Absolutely! That was the point I was trying to make here! :)
🔥🔥
Shadow AI may be one of the strongest arguments for augmenting employees with AI. People are discovering these tools naturally, and the momentum is increasing, so instead of worrying about risk, take action and implement approved tools for them to use. The slight initial dip of the J-curve (implementation cost and training) is quickly offset by the boost in productivity and the reduced risk of shadow AI.
fully agree. imo, you give them the tools they need, with an ironclad ToS with your LLM SaaS provider, or host a solution on prem. you need to keep the friction low, or people will just hand your data to OpenAI.
That was exactly the point I was trying to make here! :)
I agree with the core point here. Shadow AI is one of the clearest signals that employees want AI augmentation and are already discovering value on their own.
That’s exactly why focusing only on risk misses the opportunity. When organizations provide approved, usable AI tools, they don’t just reduce Shadow AI; they align productivity and security instead of forcing a trade-off!
💯