Shadow AI Is the New Shadow IT - Only Much Worse [Special Guest Post]
For years, security teams fought Shadow IT. Employees installed tools without approval. Data flowed outside visibility.
Hey everyone. This week I’m handing the keys over to Erich Winkler. He’s a CISSP-certified security manager who’s been knee-deep in the actual mess of getting organizations to use AI without lighting their data on fire.
His piece covers shadow AI: the quiet catastrophe where your employees paste sensitive docs into ChatGPT because “it’s faster.” If you’ve ever wondered why your carefully architected data loss prevention strategy is now just a suggestion, Erich’s got answers. -Tox
It was a nightmare for many IT managers and cybersecurity teams.
Years spent designing corporate systems and locked-down networks, all to stop employees from installing unauthorized apps.
It was risky.
But over time, it became manageable.
Shadow IT refers to technology, systems, or software used inside an organization without the knowledge or approval of IT or security teams.
Then AI entered our lives.
And inevitably, employees started using it.
The result?
Many IT managers now look back at the days of Shadow IT with nostalgia.
Why am I writing this?
For weeks, I was trying to come up with a good topic for ToxSec’s newsletter.
I am a cybersecurity manager with CISSP certification and extensive experience in both AI and cybersecurity.
Then it hit me.
For the past year, I’ve been dealing with AI usage inside our organization—specifically, how to use it securely and effectively.
I realized this was the perfect topic.
The combination of organizational security and AI.
This is my opportunity to explain how organizations actually struggle with AI adoption and the risks that come with it.
What Shadow AI Really Means
While shadow IT refers to the unmanaged use of various apps and tools, shadow AI refers to the unmanaged use of AI tools.
It’s about:
Employees pasting confidential data into AI tools
Teams automating decisions with models no one audited
Business logic outsourced to black boxes you don’t control
Organizations spent years creating systems that sufficiently protect their data and prevent it from leaving the organization. And then, suddenly, convenient AI tools start popping up like mushrooms after rain, and employees start using them to make their lives easier.
No procurement.
No data classification.
No logging.
No oversight.
Just “I needed to get work done faster.”
Why Shadow AI Is Worse Than Shadow IT
You could argue that it’s just the same thing.
Instead of using unmanaged applications, employees simply use unmanaged AI applications.
In my experience, there are two main reasons why it isn’t the same:
1. The amount of data leaving the company has increased significantly.
2. It isn’t only data that’s leaving the company; it’s the context too.
When someone uploads:
customer contracts
internal emails
source code
security reports
…they’re not just storing data externally.
Through training, prompting, and explaining, they’re handing an external entity the relationships between individual pieces of data.
And good luck getting this genie back into the bottle.
Do you want to know more about AI threats in the corporate environment? I have an article you’ll find interesting!
AI didn’t take our privacy. We gave it away one prompt at a time. Can we take it back?
The Illusion of “It’s Just a Tool”
So, if we all know it is bad, why do we still have this problem?
The story is always the same.
When you, as their employer, give employees a task, what do you think they care about more:
getting it done easily, or the security of corporate data?
You can answer that question yourself.
Don’t Blame Employees - It’s Up to You to Find a Way
I know what you’re thinking right now: that I keep blaming employees for preferring convenience over security.
I want to make one thing clear.
Most people using Shadow AI aren’t malicious.
They aren’t careless.
They’re rational.
You gave them:
- a deadline
- a performance metric
- a problem to solve
And then you gave them access to a tool that solves it faster.
From their perspective, the choice is obvious.
Security teams often ask:
“Why didn’t they think about data protection?”
But that’s the wrong question.
The right one is:
“Why did our systems make the insecure choice the easiest one?”
Shadow AI thrives where:
- security adds friction
- processes are slow
- approved tools lag behind reality
If security only exists as a blocker, users will route around it.
They always have.
It’s your job as a security professional to create systems that balance security and convenience.
What Actually Works (Even If It’s Unpopular)
You won’t eliminate Shadow AI.
The goal is to contain and channel it.
Here are specific steps organizations should take to limit the risks related to Shadow AI:
1. Explicitly allow AI usage (with boundaries)
The fastest way to create Shadow AI is to pretend AI doesn’t exist.
Instead of blanket bans, define:
What AI tools are approved
What types of data can be used
What use cases are explicitly allowed
If people don’t know what’s allowed, they’ll decide for themselves.
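To make this concrete, here’s a minimal sketch of what such a policy could look like as machine-readable config. The tool names, data categories, and helper function are all hypothetical; the point is simply that the rules become explicit and checkable instead of tribal knowledge.

```python
# A minimal sketch of an AI usage policy as config.
# Tool names and categories are illustrative, not recommendations.
AI_USAGE_POLICY = {
    "approved_tools": {
        "internal-llm-gateway": {"data_allowed": ["public", "internal"]},
        "vendor-chat-enterprise": {"data_allowed": ["public"]},
    },
    "allowed_use_cases": [
        "summarizing public documentation",
        "drafting internal emails (no customer data)",
        "explaining approved source-code snippets",
    ],
    "never_in_prompts": [
        "customer contracts",
        "credentials and keys",
        "security reports",
    ],
}

def is_allowed(tool: str, data_class: str) -> bool:
    """Return True if the tool is approved for this data classification."""
    tool_policy = AI_USAGE_POLICY["approved_tools"].get(tool)
    return bool(tool_policy) and data_class in tool_policy["data_allowed"]

print(is_allowed("vendor-chat-enterprise", "internal"))  # False
print(is_allowed("internal-llm-gateway", "internal"))    # True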
2. Classify data for AI use
Traditional data classification isn’t enough.
You need an AI-specific lens, for example:
Safe to use with external AI
Allowed only with internal/private models
Never allowed in AI tools (prompts included)
This helps employees make decisions before they paste anything.
P.S.: Ensure this information is easily accessible at all times.
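As one possible shape for that lens, here’s a small sketch. The categories and the default-deny fallback are my assumptions, not any standard:

```python
from enum import Enum

class AIDataClass(Enum):
    """An AI-specific classification lens (labels are illustrative)."""
    EXTERNAL_OK = "safe to use with external AI"
    INTERNAL_ONLY = "allowed only with internal/private models"
    NO_AI = "never allowed in AI tools, prompts included"

# Hypothetical mapping from existing data categories to the AI lens.
AI_CLASSIFICATION = {
    "public marketing copy": AIDataClass.EXTERNAL_OK,
    "internal emails": AIDataClass.INTERNAL_ONLY,
    "source code": AIDataClass.INTERNAL_ONLY,
    "customer contracts": AIDataClass.NO_AI,
    "security reports": AIDataClass.NO_AI,
}

def check(category: str) -> str:
    """Answer the question an employee should ask before pasting anything."""
    rule = AI_CLASSIFICATION.get(category, AIDataClass.NO_AI)  # default deny
    return f"{category}: {rule.value}"

print(check("internal emails"))
print(check("unknown spreadsheet"))  # unclassified falls into the safest bucket
```

Note the default: anything unclassified lands in the strictest bucket. That single design choice does more for safety than any awareness poster.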
3. Provide a safe, approved alternative 🚨🚨
This is probably the most important step.
You need to provide safe alternatives: use internal models, and make those convenient tools accessible to your employees.
That is the most effective protection.
Approved tools must be:
easy to access
good enough to be useful
Security that slows people down will be bypassed.
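For illustration: if you stand up an internal, OpenAI-compatible gateway (the URL, token variable, and model name below are hypothetical placeholders), the approved path can feel exactly like the tool people already reach for:

```python
# A sketch of routing requests through an internal, OpenAI-compatible
# gateway instead of a public endpoint. URL, token, and model name are
# placeholders for whatever your organization actually runs.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://ai-gateway.internal.example.com/v1",  # internal gateway
    api_key=os.environ["INTERNAL_AI_TOKEN"],  # issued per employee or team
)

response = client.chat.completions.create(
    model="approved-internal-model",  # whatever your gateway exposes
    messages=[{"role": "user", "content": "Summarize this meeting note."}],
)
print(response.choices[0].message.content)
```

The gateway is also a natural place for logging, classification checks, and rate limits to live, without the user ever noticing.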
4. Train people on why, not just rules
Do you expect people to act responsibly?
Then treat them with respect and explain why.
Annual awareness slides won’t help.
People need to understand:
How context leaks through prompts
Why summaries are still sensitive
How small disclosures accumulate
Once they start to understand the basic concepts, their behavior will change.
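One way to make “how context leaks through prompts” tangible in training is a toy pre-send check. The patterns below are deliberately crude and purely illustrative; real prompts leak context in far subtler ways than a few regexes can catch:

```python
import re

# Toy patterns for a training demo; real DLP is much harder than this.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "possible API key": re.compile(r"\b[A-Za-z0-9_-]{32,}\b"),
    "contract reference": re.compile(r"\bcontract\s+#?\d+\b", re.IGNORECASE),
}

def flag_prompt(prompt: str) -> list[str]:
    """Return warnings for obviously sensitive content in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Summarize contract #4821 and email the result to jane@corp.example"
for warning in flag_prompt(prompt):
    print(f"Heads up: prompt contains a {warning}")
```

Even a demo this crude lands the point: each field looks harmless on its own, but together they tell an outsider who your customer is and what you agreed with them.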
Conclusion: Shadow AI Is a Signal, Not a Failure
Shadow AI isn’t a sign that employees are irresponsible.
It’s a signal that the organization hasn’t caught up with reality yet.
Just like Shadow IT before it, Shadow AI emerges where:
work needs to move faster,
tools evolve faster than policies,
and security is experienced only as friction.
Trying to ban it outright won’t work.
Pretending it doesn’t exist is worse.
And blaming employees misses the point entirely.
Shadow AI tells you something important:
People want to do better work, faster, and they will use whatever tools help them get there.
Your job as a security professional isn’t to stop that impulse.
It’s to shape it.
That means:
making the secure path the easiest one,
giving clear guidance instead of vague rules,
and accepting that control today is about influence, not prohibition.
Organizations that treat Shadow AI as a governance problem will struggle.
Organizations that treat it as a design problem will adapt.
Shadow AI isn’t the enemy.
It’s feedback.
And how you respond to it will define what security looks like in the age of AI.
Big thanks to Erich Winkler for this one. Shadow AI is the kind of slow-motion breach that doesn’t make headlines until it’s way too late, and he nailed why banning it outright is a losing move.
If you dug this, go hit up Erich’s Substack and subscribe. Dude knows his stuff.
Feel free to ask Erich or me any questions! We’ll be happy to answer them!