This is absolute insanity on the part of these organisations. A lot of this could be easily addressed with clear governance, proper training, and SLMs with proper guardrails. But I guess none of that is particularly revolutionary, so it is unlikely to be taken up. Thanks again for highlighting such a serious issue with such great clarity. 🙏
agreed! what caught me off guard was how many of the cyber incidents in the major reports were just employees using chatbots!
they are basically a new attack surface that no one is taking seriously. appreciate it Sam!
This is a sign that organizations are slow to keep up with employee needs. In the same way infosec needs to be a partner to be effective, AI governance needs to empower people, not slow them down.
That's the hard part, especially when companies are not set up to evolve as quickly as the environments where they operate.
Does anyone else see parallels between security control implementation and AI governance?
I know tier 1 tech companies can be expected to move fast, and they have.
i think the first step is companies need to send out training and establish an AUP asap. then they need to find a low friction solution. expecting zero ai will just result in people using their phones for it.
hopefully companies can land on governance that actually provides that solution. i expect people will use it regardless.
“Shadow AI” is definitely a real problem, although I think it’s often self-inflicted.
Employees typically BYOAI when companies insist on flaky homegrown tools or outdated models while increasing workloads “because AI.”
Giving people actually useful AI tools alongside stricter enforcement could help a lot.🤔
that sounds very reasonable. i would fully support it. and to think ~60% of companies don’t even have a policy on usage!
That’s just bonkers! 🤪
it really is haha!
💯. i’ve even seen stubborn employees who “really like” their own ai refuse to use the provided one. an interesting BYOAI use case..
like, i understand, because i’ve used great IDEs at home and had to use a really bad one for work, but the company was giving him enterprise access to a top tier model. fun times!
Yikes! When I worked for an agency, the rule was very clear: putting client information of any kind into an unapproved AI tool = fired! We were covered by some very strict NDAs.
I mean, this is what happens when you roll out locked-down Copilot and think you’re “doing AI” while your workers can pay for the latest models. It reminds me of the BYOD trend maybe a decade ago? We ended up with MDM, and a lot of firms ended BYOD so they got back more control 🤷♂️
it’s exactly like byod, great call out. byocb? (bring your own chatbot) 🤖
i think one area teams are probably missing is voice - it’s even worse.
i know many who chat back and forth with their personal paid AI of choice to get help with presentations and tough people management issues
multimodal is for sure another vector. also, people are wearing llm tools in necklace form now. the audio is still sent to remote servers, and it captures every conversation, all day long!
What a fascinating time of disruption we live in! I'm willing to bet that more than half of the employees involved here didn't bother reading the NDA. They just wanted a job and by the time they got to that point, they'd sign whatever you put in front of them to gain a check. Option one is a far better solution going forward. What employees are doing will destroy monopolies, and I don't view that as a bad thing. Can't wait to see where this all leads a few years from now.
100%! it’s a symptom of companies putting friction around corporate use of a tool that is freely available.
if it’s friday and you need a report done in an hour, are most people going to spend time late in the office or hop on chatgpt lol.
This is crazy! In most cases it’s the leaders who are at fault for not giving proper tools, governance, and guidelines to their teams. 🦩 Thank you for sharing!🩷
that’s it! the simple view is that powerful tools are here and leaders are putting too much friction in the workplace. so employees hop on their personal chatbot and boom - data gone.
This is sharp, uncomfortable, and directionally correct. What really lands is the reframing: this isn’t a tooling failure, it’s a human-behavior + zero-friction problem. Ctrl+V didn’t “bypass” security. It was the security model, whether leaders admitted it or not.
The part that should scare executives most isn’t even the breach cost. It’s the quiet subsidy. Organizations are unknowingly funding competitors’ R&D because they failed to give employees a sanctioned, governed way to work with AI at the speed they’re already moving. That’s not malice. That’s predictable system design failure.
If there’s a next chapter, it’s this: governance has to move from policy to practice. Private models, clear guardrails, and incentives aligned with how people actually work. Otherwise this “voluntary exfiltration program” keeps running, fully staffed, no approval required.
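To make that concrete, here is a minimal sketch of what a guardrail could look like in practice: a pre-send screen that flags obviously sensitive strings before a prompt ever leaves the network. The patterns and the call_sanctioned_model helper are placeholders, not anyone's actual stack; a real deployment would lean on a proper DLP or classification service.

```python
import re

# Placeholder patterns for obviously sensitive data; a real deployment would use
# a proper DLP/classification service rather than a handful of regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(sk|key)-[A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def submit(prompt: str) -> str:
    hits = screen_prompt(prompt)
    if hits:
        # Block (or reroute to a private model) instead of silently sending it out.
        raise ValueError(f"Prompt blocked, flagged as: {', '.join(hits)}")
    return call_sanctioned_model(prompt)  # hypothetical helper for the approved endpoint
```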
Appreciate the deep dive, Mark! I wonder if companies will go for either ironclad SaaS terms of service or local on-prem llms. Either way, the friction will determine the decision.
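a rough sketch of the on-prem path, assuming an internal, OpenAI-compatible server (Ollama or vLLM style) - the base_url and model name here are placeholders, but the point is the client code barely changes, so the friction stays low.

```python
from openai import OpenAI

# Assumes an on-prem, OpenAI-compatible server (e.g. Ollama or vLLM) on the
# internal network; the base_url and model name are placeholders.
client = OpenAI(base_url="http://llm.internal:11434/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="llama3.1",
    messages=[{"role": "user", "content": "Summarize this quarter's churn drivers."}],
)
print(response.choices[0].message.content)
```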
"Sixty-three percent of organizations don’t even have an AI governance policy."
This is a mind-blowing number. For how long are they going to ignore it? Of course, the policy itself won’t change anything, but it is the first and crucial step to solving this issue.
yeah, the stats are from a few end-of-year papers. i love reading them, and this is really the first time they have included the ai numbers. 63%!!
Those dollar amounts are nasty!
But not to worry, it will be fine, the costs will just get passed on to the end consumer.
that’s true for sure lol. and yeah, the 1-in-5 figure and the dollar cost are pretty staggering. and we are still early lol.
appreciate it :)
yep! there were a few end of year reports i pulled the numbers from.
yeah, it’s always interesting when we get the end of year reports. more than i thought but also not really surprising. people really do lower their guard on these tools.