Nobody Knows What to Call This Job Yet. But Everyone Is Hiring for It. [Special Guest Post]
The new discipline forming in real time: why regulators are mandating jobs the industry still can’t name. ToxSec guest post.
In a world where machines lie convincingly, someone has to call their bluff. The problem? Nobody agrees on what to call the job that stops them.
“AI Security Engineer.” “AI Governance Lead.” “AI Red Team Lead.”
These titles are everywhere, but the definitions are still being written in real time, even as regulators write headcount mandates into law and the talent gap hits 4.8 million.
In this guest post, the author breaks down the real story: exploding demand, regulatory pressure, massive pay premiums, and why early 2026 is a rare career window.
If you want to be featured in your own guest post, send me a message here.
If you enjoy this, Fernando Lucktemberg is hosting part two on his Substack. Check it out right here.
Here’s the piece. -Tox
Disclaimer
This article is intended for informational purposes and reflects the state of published research and industry practice as of early 2026. It is not professional security advice. Your specific environment, threat model, and regulatory obligations will shape how these principles apply to your situation.
TL;DR
You have probably noticed the strange job titles popping up in your feed lately. Roles like AI Security Engineer and AI Governance Lead are everywhere, yet nobody seems to agree on what they actually mean. I have spent the last few weeks digging into this, and the truth is fascinating. The cybersecurity world is currently missing nearly five million professionals, and AI is suddenly the number one skills gap everyone is terrified of.
We are watching a brand new professional discipline form in real time. Organizations are scrambling because regulators are mandating specific AI security roles, but the talent pipeline simply does not exist yet. There is no universally agreed upon credential stack, and the threat model is expanding faster than we can train people. If you are a traditional security practitioner, a data scientist, or just someone looking for a massive career opportunity, this is your moment. The gap between demand and supply is creating unprecedented wage premiums. You do not need a perfect resume to jump in. You just need to understand the attack surface before the rest of the market catches up.
The Itch: Why This Matters Right Now
You have probably felt it before you had words for it.
A job posting lands in your feed. The title says something like “AI Security Engineer” or “AI Governance Lead.” You read the requirements. Half of them look like application security. A quarter look like data science. The rest reference frameworks you have heard of but have not worked with directly. You close the tab. You open it again.
Here is what is actually happening beneath the surface of that feeling.
You are not imagining the confusion. That posting exists in a market where the people writing the requirements are themselves still negotiating what the role should be. The title is real. The need is real. The definition is not finished yet.
And you are far from alone. The cybersecurity workforce gap currently sits at 4.8 million unfilled positions globally, and the workforce itself has stopped growing. It is flat at 5.5 million active professionals. At the same time, 95% of cybersecurity teams report at least one significant skills gap, and the skill sitting at the top of that list, for the first time in the history of this survey, is artificial intelligence. Not cloud. Not zero trust. AI, with 41% of respondents naming it as their primary deficit.
The organizations posting those confusing job ads are not confused about the threat. They are confused about the role. There is a difference, and that difference is what this article is about.
The field of AI security is crystallizing right now, in February 2026, with regulatory deadlines approaching, compensation premiums widening, and credential bodies scrambling to define what competency even means. You are not late. You are early enough that the definitions are still being written, and late enough that the demand is undeniably real.
The Deep Dive: The Struggle for a Solution
Let me take you back to where this started, because the origin story matters for understanding the shape of the current mess.
Around 2020, a group of researchers at the Berryville Institute of Machine Learning published something quietly important: a structured catalogue of 78 risks specific to machine learning systems. If you have ever tried to apply a traditional security framework to an ML pipeline and felt the concepts stop fitting, this is why. Not risks to systems that happen to use ML. Risks that only exist because ML is in the stack. Risks like training data poisoning, model inversion, adversarial evasion, and supply chain compromise at the model layer. Before this work, there was no shared language for what an attacker could do to an AI system specifically. Security teams were applying conventional frameworks to a fundamentally different attack surface and discovering, often too late, that the frameworks did not fit.
Four years later, that original catalogue was extended to cover 81 additional risks specific to large language models. The problem had not shrunk; it had compounded.
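To make one of those risk classes concrete, here is a toy sketch of training data poisoning by label flipping. This is purely illustrative; the helper function and the 5% figure are mine, not from the BIML catalogue.

```python
# Toy illustration of training data poisoning via label flipping,
# one of the BIML-catalogued risk classes. Illustrative only.
import numpy as np

rng = np.random.default_rng(seed=0)

def flip_labels(y, fraction=0.05):
    """Flip a small fraction of binary 0/1 labels, simulating a poisoned pipeline."""
    y = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y[idx] = 1 - y[idx]
    return y
```

A model trained on the flipped labels learns a subtly corrupted decision boundary, and at a 5% poison rate the accuracy drop can be small enough to survive casual evaluation, which is exactly why conventional QA tends to miss it.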
MITRE, the organization behind the ATT&CK framework that every SOC analyst knows, built the equivalent for AI systems. If your current work involves threat modeling, adversary emulation, or detection engineering, this is the framework you will need to become fluent in. It is called ATLAS, and as of late 2025 it documents 15 tactics, 66 techniques, 46 sub-techniques, and 33 real-world case studies drawn from actual incidents involving AI systems. In October 2025, 14 new techniques were added specifically for autonomous AI agents, because agentic architectures started hitting production environments fast enough to generate their own incident record.
Think of ATLAS as the field’s evidence log. Every entry in it represents something that actually happened to a real AI system operated by a real organization. The log is growing. The people who need to interpret it and respond to it are in short supply.
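One practical note: ATLAS is not just a website. It ships as machine-readable data, which means you can fold it into tooling today. Here is a minimal sketch, assuming the ATLAS.yaml distribution from the mitre-atlas/atlas-data GitHub repository; the field names below are my assumption about the schema, so verify against the current release.

```python
# Minimal sketch: load the ATLAS knowledge base and tally its contents.
# Assumes the ATLAS.yaml file from the mitre-atlas/atlas-data repo; the
# schema (matrices/tactics/techniques, "subtechnique-of") is an assumption.
import yaml  # pip install pyyaml

with open("ATLAS.yaml") as f:
    atlas = yaml.safe_load(f)

for matrix in atlas.get("matrices", []):
    techniques = matrix.get("techniques", [])
    # Sub-techniques carry a reference to their parent technique.
    subs = [t for t in techniques if "subtechnique-of" in t]
    print(f"{matrix.get('name')}: {len(matrix.get('tactics', []))} tactics, "
          f"{len(techniques) - len(subs)} techniques, {len(subs)} sub-techniques")
```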
This is the first pressure point: the threat model is specific, documented, and expanding. The workforce capable of working with it is not.
The second pressure point is organizational.
Ask ten security leaders where AI security lives in their org chart and you will get ten different answers. Some have embedded a specialist inside the CISO organization. Some have distributed the responsibility between the AI engineering team and the security team, with no one clearly owning the intersection. A handful of AI-native organizations have built standalone AI security functions that report directly to a Chief AI Officer.
The hiring data sharpens the picture further. ISACA’s 2025 data shows that 47% of cybersecurity teams are now involved in AI governance (up from 35%), and 40% are involved in AI implementation (up from 29%), with both figures concentrated heavily at enterprise organizations. These numbers are rising fast, but they describe involvement, not ownership. Being involved in AI governance is not the same as having a defined seat at the table for it. The startup and mid-market world is mostly improvising, typically handing these responsibilities to a senior AppSec engineer who has been upskilling on weekends.
The third pressure point is regulatory, and this one has teeth.
The EU AI Act, which entered into force in August 2024, does something unusual for technology regulation. It writes job descriptions. Article 9 requires that providers of high-risk AI systems maintain dedicated personnel within their risk management infrastructure. Article 17 requires trained staff in quality management for those same systems. Article 4 mandates AI literacy for all staff working with AI, not just the specialists. Article 31 requires the permanent availability of sufficient scientific personnel for conformity assessment.
Read that again. Permanent availability. Sufficient scientific personnel. This is not a vague directive to “take AI risk seriously.” This is a headcount mandate with a compliance deadline extending through August 2027.
The irony, and it is a sharp one, is that regulators have become more specific about what AI security roles should do than the industry itself has managed to be. The EU AI Act has effectively published an org chart requirement for high-risk AI providers. NIST’s AI Risk Management Framework requires documented roles and lines of communication for AI risk under its GOVERN function, alongside mandatory training for identified personnel. These frameworks are not describing jobs that the market spontaneously created. They are compelling organizations to create jobs that the market had not yet fully defined.
The fourth pressure point is compensation, and it is where the stakes become personal.
PwC’s AI Jobs Barometer found that roles requiring AI proficiency command a 56% wage premium over comparable non-AI roles. That premium grew substantially from the prior year’s measurement. Applied to security, a senior AI Security Engineer should theoretically command somewhere between 50% and 80% more than a comparably experienced traditional Security Engineer.
Levels.fyi data for security software engineers as a broad category shows an average total compensation of $204,000, with Google’s security engineers ranging from $198,000 to $608,000 depending on seniority. The dedicated “AI Security Engineer” title has too few salary submissions on major platforms to generate reliable benchmarks; the Glassdoor figure for that specific title is based on two submissions and should be treated accordingly. The gap between the compensation signal and the data quality is itself a signal: the role is real enough to command a premium, but new enough that the market has not yet generated the volume of data needed to price it precisely.
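To make the premium math concrete, here is the back-of-the-envelope calculation implied above, using the Levels.fyi average as the baseline. The 50% and 80% bounds are this article’s inference from the PwC premium, not published benchmarks.

```python
# Back-of-the-envelope: apply the PwC-style premium band to the
# Levels.fyi security engineer baseline. Illustrative, not a benchmark.
baseline = 204_000  # avg total comp, security software engineer (Levels.fyi)

for premium in (0.50, 0.56, 0.80):
    print(f"{premium:.0%} premium -> ${baseline * (1 + premium):,.0f}")

# 50% premium -> $306,000
# 56% premium -> $318,240
# 80% premium -> $367,200
```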
Now consider what this means for the role taxonomy.
Three maturity tiers are currently visible in the job market. At the established tier sit AI Security Engineer, AI Governance Lead, and AI Red Team Lead. These appear in hundreds of active postings, carry relatively consistent role definitions, and have identifiable career paths leading into them. At the emerging tier sit AI Security Specialist, AI Ethics and Compliance Officer, and LLM Security Engineer. Demand is growing but role definitions vary significantly between employers. At the speculative tier sit titles like AI SOC Orchestrator and Quantum-AI Security Specialist. These are largely vendor product categories that have been dressed in human-role clothing. They describe software products more than they describe people.
Gartner’s 2025 Hype Cycle placed AI Trust, Risk, and Security Management at the “Peak of Inflated Expectations,” projecting mainstream adoption within five years but also signaling that some current role demand in this category reflects speculative market enthusiasm rather than confirmed organizational need.
The confusion between these tiers is not academic. A practitioner applying to the wrong tier makes career decisions based on false signals. A hiring manager building a job description from speculative titles creates a posting that attracts the wrong candidates or no candidates at all.
Here is the specific delineation that matters most for anyone currently working in application security or penetration testing. The AI/ML Security Engineer builds security into ML pipelines from the start. This role requires ML engineering fluency: understanding how training environments can be compromised, how inference endpoints can be attacked, how data pipelines can be manipulated. The AI Security Specialist evaluates deployed AI systems against risk frameworks and governance standards. This role requires assessment and audit skills applied through an AI-specific lens. Organizations with both roles tend to embed the Engineer in the AI product team and the Specialist in the security organization. These are not the same job wearing different hats. They require meaningfully different skill compositions.
The AI Red Team function sits at the highest premium and the thinnest supply. Genuine AI red teamers require the intersection of offensive security tradecraft and ML engineering capability: model probing, adversarial example generation, data poisoning simulation. The career path runs through conventional AppSec or penetration testing, then requires deliberate ML upskilling, then MITRE ATLAS fluency. Microsoft, Google, Anthropic, and Meta all operate dedicated AI red teams. The practitioner pool feeding those teams is small and growing slowly.
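If “adversarial example generation” sounds abstract, the core move fits in a few lines. Here is a minimal sketch of the Fast Gradient Sign Method (Goodfellow et al., 2014) in PyTorch; production red-team tooling layers far more on top, so treat this as the idea, not the practice.

```python
# Minimal FGSM sketch: perturb an input to increase the model's loss.
# `model` is any differentiable classifier; epsilon bounds the perturbation.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that maximally increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep inputs in a valid [0, 1] range
```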
The Resolution: Your New Superpower
Here is what this all resolves to, practically.
The talent pipeline feeding AI security roles is thin in specific, predictable ways. The largest cohort of practitioners entering the field comes from application security and software security engineering. The second largest comes from data science practitioners who developed a security interest. The third comes from traditional GRC and risk professionals moving into governance-track roles. Almost nobody is entering from ML engineering with pre-existing security depth. That reverse pipeline is the thinnest and the most valuable.
ISACA’s data makes a quiet but significant point: 46% of current cybersecurity professionals transitioned from non-security fields. The field has always absorbed career changers. AI security is absorbing them again, and the career changers who arrive with ML engineering backgrounds will find the least competition and the highest premiums.
If you are coming from outside security entirely, the entry vectors are more accessible than the job descriptions suggest. Data scientists, cloud engineers, and software developers who can demonstrate they understand how AI systems fail under adversarial conditions are already legible to hiring managers. You do not need a security title in your past. You need evidence that you understand the attack surface. That is the signal hiring managers are reading for right now.
The credentialing infrastructure is just beginning to form. CompTIA launched SecAI+, the first vendor-neutral AI security certification, on February 17, 2026. ISACA’s Advanced in AI Security Management credential targets the governance track. Neither of these is sufficient alone. No certification yet addresses AI red teaming or offensive AI security specifically, which is why MITRE ATLAS fluency has become the de facto standard for practitioners in that space, unvalidated by any formal examination.
This matters for how you position yourself right now: 84% of hiring managers are using skill-based assessments rather than credential screening. The absence of a definitive AI security certification is not a barrier; it is an opening. Demonstrated capability is being valued over credential accumulation in a field where the credential stack is still being assembled.
Early investment in cross-training programs is already separating the organizations that will have functional AI security teams in 2028 from those still searching. Only 29% of enterprises currently train non-security staff for security roles, down from 41% in the prior year. The pipeline is narrowing at the entry point while demand at the senior level accelerates. That asymmetry will not self-correct quickly.
For the practitioner evaluating a pivot: the roles are real, the premiums are real, and the field is early enough that foundational competency in MITRE ATLAS, NIST AI RMF, and EU AI Act requirements constitutes meaningful differentiation today. The window where that combination is rare will not stay open indefinitely.
For the hiring manager building a team: the org model question matters before the job description question. Embedded, federated, or standalone: your choice of structure determines which roles you need and in what order you need them.
For the newcomer: the credential map is being drawn in real time. The practitioners who understand both the threat model and the governance framework before a unified certification captures that combination will have defined the field before it codifies itself.
That is exactly why I am pairing with Fernando at Next Kick Labs for the follow-up. He has mapped the roles taking shape inside this field at a level of detail this piece could only gesture at. Each one of the mapped roles carries a different entry point, a different skill requirement, and a different hiring signal. His piece drops tomorrow and maps all of it: what these roles actually expect from you, background by background, and which experience signals hiring managers are treating as meaningful substitutes for credentials that do not yet exist.
Fact-Check Appendix
Statement: The global cybersecurity workforce gap sits at 4.8 million unfilled positions, with the active workforce flat at 5.5 million. Source: ISC2 2024 Cybersecurity Workforce Study | https://www.isc2.org/Insights/2024/10/ISC2-2024-Cybersecurity-Workforce-Study
Statement: 95% of cybersecurity teams report at least one significant skills gap; AI is the number one skills need at 41% of respondents. Source: ISC2 2025 Cybersecurity Workforce Study | https://www.isc2.org/Insights/2025/12/2025-ISC2-Cybersecurity-Workforce-Study
Statement: The BIML LLM Architectural Risk Analysis identified 81 LLM-specific risks. Source: BIML LLM Architectural Risk Analysis, McGraw et al., January 2024 | https://berryvilleiml.com/results/BIML-LLM24.pdf
Statement: MITRE ATLAS catalogs 15 tactics, 66 techniques, 46 sub-techniques, and 33 case studies; 14 new agent techniques were added in October 2025. Source: MITRE ATLAS | https://atlas.mitre.org/
Statement: 47% of cybersecurity teams now report involvement in AI governance, up from 35%; 40% are involved in AI implementation, up from 29%; AI governance demand is concentrated at enterprise scale. Source: ISACA State of Cybersecurity 2025 | https://www.isaca.org/resources/state-of-cybersecurity
Statement: The EU AI Act (Regulation 2024/1689) requires dedicated personnel under Article 9, trained staff under Article 17, AI literacy under Article 4, and permanent availability of sufficient scientific personnel under Article 31; compliance phasing extends through August 2027. Source: EU AI Act, Regulation 2024/1689 | https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng
Statement: PwC’s AI Jobs Barometer found that roles requiring AI proficiency command a 56% wage premium over comparable non-AI roles. Source: PwC Global AI Jobs Barometer, via World Economic Forum, November 2025 | https://www.weforum.org/stories/2025/11/cybersecurity-ai-professionals-workers/
Statement: Levels.fyi reports an average total compensation of $204,000 for Security Software Engineers; Google Security Software Engineers range from $198,000 to $608,000 by seniority. Source: Levels.fyi | https://www.levels.fyi/t/software-engineer/focus/security
Statement: Gartner’s 2025 Hype Cycle placed AI Trust, Risk, and Security Management at the “Peak of Inflated Expectations,” projecting mainstream adoption within five years. Source: Gartner Hype Cycle for AI 2025 (press release) | https://www.gartner.com/en/newsroom/press-releases/2025-08-05-gartner-hype-cycle-identifies-top-ai-innovations-in-2025
Statement: 46% of current cybersecurity professionals transitioned from non-security fields; only 29% of enterprises train non-security staff for security roles, down from 41%. Source: ISACA State of Cybersecurity 2025 | https://www.isaca.org/resources/state-of-cybersecurity
Statement: 84% of hiring managers use skill-based assessments rather than credential screening. Source: ISC2 2025 Cybersecurity Hiring Trends | https://www.isc2.org/Insights/2025/06/cybersecurity-hiring-trends-study
Statement: CompTIA SecAI+ (CY0-001) launched on February 17, 2026 as the first vendor-neutral AI security certification. Source: CompTIA SecAI+ launch, PR Newswire, February 2026 | https://www.prnewswire.com/news-releases/where-ai-and-cybersecurity-converge-introducing-comptia-secai-302689399.html
Top 5 Prestigious Sources
ISC2 Cybersecurity Workforce Study (2023, 2024, 2025) | https://www.isc2.org/Insights/2025/12/2025-ISC2-Cybersecurity-Workforce-Study The largest annual survey of cybersecurity professionals globally, covering 16,000+ respondents. The definitive longitudinal dataset on workforce gaps and skills deficits.
ISACA State of Cybersecurity 2025 | https://www.isaca.org/resources/state-of-cybersecurity 3,800+ respondent global survey from one of the most established credentialing and governance bodies in information security.
EU AI Act (Regulation 2024/1689) | https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng Legally binding regulation from the European Parliament with the most specific workforce accountability mandates of any current AI governance framework.
MITRE ATLAS | https://atlas.mitre.org/ The authoritative adversarial threat taxonomy for AI systems, maintained by MITRE with contributions from 16 member organizations including Microsoft, CrowdStrike, and JPMorgan Chase.
PwC Global AI Jobs Barometer (via WEF) | https://www.weforum.org/stories/2025/11/cybersecurity-ai-professionals-workers/ Cross-industry compensation analysis from PwC documenting the 56% wage premium for AI-proficient roles, representing one of the most rigorous quantitative signals on AI labor market dynamics.
Thanks to our guest for the sharp breakdown. If you liked his content, give him a subscription.
Have comments or questions for the author? Leave him a message here.