One layer that may end up defining this discipline is how systems handle execution boundaries.
In highly automated environments, the real governance challenge is not just identifying risk or assigning responsibility. It is preserving the ability to interrupt system momentum immediately before an irreversible action occurs: when infrastructure changes, models deploy, or decisions propagate through a workflow.
I have been developing a framework called Mirror Field OS (MFOS) built around that exact point. It treats the commit boundary as a protected moment where a brief pause allows assumptions, constraints, and potential downside to be reviewed before the system proceeds.
As these roles continue to emerge, the practitioners in them will likely be responsible not only for analyzing AI risk but also for ensuring systems retain the capacity for deliberate interruption at the moment of commitment.
this is great. and Mirror Field OS sounds really interesting. do you post more about it?
I have a few posts about it yes.
Makes it two of us. :D Interested to understand more about what you're building Trenton.
I published a light version here: https://poe.com/MirrorFieldReflector
Generally speaking MFOS is a governance control that preserves a human pause right before an automated system commits to an irreversible action.
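To make that concrete, here is a minimal sketch of such a pause, assuming the commit boundary behaves like an approval gate. All the names below (CommitRequest, commit_boundary, the field layout) are illustrative stand-ins, not the actual MFOS interface:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class CommitRequest:
    action: str                # what the system is about to do
    assumptions: List[str]     # what must hold for the action to be safe
    constraints: List[str]     # limits the action must respect
    downside: str              # worst case if the assumptions are wrong

def commit_boundary(request: CommitRequest,
                    approve: Callable[[CommitRequest], bool],
                    execute: Callable[[], None]) -> bool:
    """Pause at the commit boundary: surface the review material to a
    human reviewer, and proceed only on explicit approval."""
    if approve(request):       # the deliberate interruption point
        execute()              # the irreversible action runs only after approval
        return True
    return False               # momentum stopped; nothing was committed
```

An automated pipeline would call `commit_boundary` with its pending action; as long as the `approve` callback withholds consent, the irreversible step never executes.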
i’ll go take a look :)
I appreciate it. After quitting my work-from-home gig, I get home too exhausted from packing boxes to comment or, for lack of a better term, spread the word. I've been working on it for about two years.
What an eye-opening post! I agree with this quote: the people writing the requirements are themselves still negotiating what the role should be.
I've seen this happen in content and AI strategy too: the job exists before the job description does. Ask me what I do now; I have no idea how to articulate it into a job title.
The practitioners who define themselves early get to shape the category. Such a rare window!
i’ve seen this as a problem. in fact, i’m in the position where i participate in the loop interviews, and have seen first hand the disconnect on ai job titles and expectations.
Must be mind blowing!
It definitely makes it hard to find the correct candidate lol.
Great post!
This is now the updated version of "AI will create new jobs but nobody knows what those jobs will be"
And tomorrow it continues in more detail with the follow-up piece to this article! Glad you liked it!
Oh, there is a follow-up also?
Yup, and here's the link. https://nextkicklabs.substack.com/p/ai-security-roles-expectations
Yes there is! The follow-up on his substack drops tomorrow!
🔥🔥🔥🔥
Great post team and loving the guest post format! In terms of the clearly poorly constructed job listings, do you think this is because there is a huge disconnect between HR and technical requirements? Like, even more so than over the last decade or so? I also think that hiring is going to be way more focussed on human skills than technical competence, even in these highly specialised fields, and would not be surprised if cybersecurity started hiring a bunch of poets and humanities graduates as the moat of technical excellence gets reduced...
i personally think yes, there is a disconnect. but also so many companies want “ai” in everything and don’t understand how to hire the talent for it.
i also think in the next 5 or so years soft skills will be more important than ever. the engineering can be automated, but a person needs to brief and debrief, explain to various audiences, etc.
would love fernando’s thoughts here
Sam, thanks for reading! Tox nailed it on the hype. Here’s what our research adds:
- The Disconnect is Deeper Than HR: Technical leaders themselves are still negotiating where AI security actually lives in the org chart. Mismatched job descriptions happen because the market hasn't standardized the roles yet.
- Broadening the Talent Pool: You and tox are right that communication and policy translation are becoming critical. As roles like the AI Governance Lead mature, there is absolutely a growing space for the critical thinking and analytical skills fostered in the humanities, even if it's not a wholesale pivot.
- The Technical Moat Remains: At the same time, the technical barrier isn't vanishing. The highest compensation premiums currently demand deep offensive security and ML engineering for complex tasks like model probing and data poisoning.
Ultimately, we're seeing a bifurcation: widening governance paths where diverse backgrounds can shine, running parallel to an intense technical arms race at the model layer.
excellent break down here Fernando!
Highly revealing piece! Thanks for working on it. Appreciated the footnotes. Great job!
Fernando did an amazing job with the footnotes. you can really tell how well he researched this deep dive.
https://www.darksignal.co/p/anatomy-of-a-clone-fake-red-alert
Interesting adventure for the good 😊
thank you!