27 Comments
ToxSec's avatar

Feel free to follow up with any questions!

Trenton Ian Cook's avatar

One layer that may end up defining this discipline is how systems handle execution boundaries.

In highly automated environments the real governance challenge is not only identifying risk or assigning responsibility. It is preserving the ability to interrupt system momentum immediately before an irreversible action occurs, when infrastructure changes, models deploy, or decisions propagate through a workflow.

I have been developing a framework called Mirror Field OS (MFOS) built around that exact point. It treats the commit boundary as a protected moment where a brief pause allows assumptions, constraints, and potential downside to be reviewed before the system proceeds.

As these roles continue to emerge, the practitioners in them will likely be responsible not only for analyzing AI risk but also for ensuring systems retain the capacity for deliberate interruption at the moment of commitment.

ToxSec's avatar

this is great. and Mirror Field OS sounds really interesting. do you post more about it?

Trenton Ian Cook's avatar

I have a few posts about it, yes.

Fernando Lucktemberg's avatar

That makes two of us. :D Interested to understand more about what you're building, Trenton.

Trenton Ian Cook's avatar

I published a light version here: https://poe.com/MirrorFieldReflector

Generally speaking MFOS is a governance control that preserves a human pause right before an automated system commits to an irreversible action.
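To make that description concrete, the commit-boundary idea could be sketched roughly like this. This is a hypothetical illustration of the general pattern, not MFOS itself; the `CommitGate` class and all parameter names are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class CommitGate:
    """Hypothetical sketch of a commit-boundary pause: before an
    irreversible action runs, its assumptions and potential downside
    are surfaced to a reviewer, who must explicitly approve."""
    reviewed: list = field(default_factory=list)

    def commit(self, action, description, assumptions, downside, approve):
        # Assemble the context a reviewer needs at the boundary.
        summary = {
            "action": description,
            "assumptions": assumptions,
            "potential_downside": downside,
        }
        self.reviewed.append(summary)
        # The deliberate interruption: nothing proceeds without approval.
        if not approve(summary):
            return ("aborted", None)
        return ("committed", action())
```

In use, the `approve` callback is where the human pause lives, so an automated pipeline cannot reach the irreversible step without a deliberate yes:

```python
gate = CommitGate()
status, result = gate.commit(
    action=lambda: "model v2 deployed",
    description="deploy model v2 to production",
    assumptions=["eval suite passed"],
    downside="bad model served to all users",
    approve=lambda s: False,  # reviewer declines at the boundary
)
# status == "aborted"; the deploy never ran
```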

ToxSec's avatar

i’ll go take a look :)

Trenton Ian Cook's avatar

I appreciate it. I get home too exhausted from packing boxes, after quitting my work-from-home gig, to even comment or spread the word, for lack of a better term. I've been working on it for about 2 years.

Mia Kiraki 🎭's avatar

What an eye-opening post! I agree with this quote: "the people writing the requirements are themselves still negotiating what the role should be."

I've seen this happen in content and AI strategy too: the job exists before the job description does. Ask me what I do now, and I have no idea how to articulate it into a job title.

The practitioners who define themselves early get to shape the category. Such a rare window!

ToxSec's avatar

i’ve seen this as a problem. in fact, i’m in a position where i participate in the loop interviews, and have seen first-hand the disconnect between ai job titles and expectations.

Mia Kiraki 🎭's avatar

Must be mind-blowing!

ToxSec's avatar

It definitely makes it hard to find the correct candidate lol.

Matija Vidmar's avatar

Great post!

This is now the updated version of "AI will create new jobs, but nobody knows what those jobs will be."

Fernando Lucktemberg's avatar

And tomorrow it continues in more detail with the follow-up piece to this article! Glad you liked it!

Matija Vidmar's avatar

Oh, there is a follow-up also?

ToxSec's avatar

Yes there is! The follow-up drops tomorrow on his Substack!

ToxSec's avatar

🔥🔥🔥🔥

Dr Sam Illingworth's avatar

Great post, team, and loving the guest post format! In terms of the clearly poorly constructed job listings, do you think this is because there is a huge disconnect between HR and technical requirements? Even more so than over the last decade or so? I also think that hiring is going to be far more focussed on human skills than technical competence, even in these highly specialised fields, and I would not be surprised if cybersecurity started hiring a bunch of poets and humanities graduates as the moat of technical excellence gets reduced...

ToxSec's avatar

i personally think yes, there is a disconnect. but also so many companies want “ai” in everything and don’t understand how to hire the talent for it.

i also think in the next 5 or so years soft skills will be more important than ever. the engineering can be automated, but a person needs to brief and debrief, explain to various audiences, etc.

would love fernando’s thoughts here

Fernando Lucktemberg's avatar

Sam, thanks for reading! Tox nailed it on the hype. Here’s what our research adds:

- The Disconnect is Deeper Than HR: Technical leaders themselves are still negotiating where AI security actually lives in the org chart. Mismatched job descriptions happen because the market hasn't standardized the roles yet.

- Broadening the Talent Pool: You and Tox are right that communication and policy translation are becoming critical. As roles like the AI Governance Lead mature, there is absolutely a growing space for the critical thinking and analytical skills fostered in the humanities, even if it's not a wholesale pivot.

- The Technical Moat Remains: At the same time, the technical barrier isn't vanishing. The highest compensation premiums currently demand deep offensive security and ML engineering for complex tasks like model probing and data poisoning.

Ultimately, we're seeing a bifurcation: widening governance paths where diverse backgrounds can shine, running parallel to an intense technical arms race at the model layer.

ToxSec's avatar

excellent breakdown here, Fernando!

Arturo F. Munoz's avatar

Highly revealing piece! Thanks for working on it. Appreciated the footnotes. Great job!

ToxSec's avatar

Fernando did an amazing job with the footnotes. you can really tell how well he researched this deep dive.

Meenakshi NavamaniAvadaiappan's avatar

Interesting adventure for the good 😊

ToxSec's avatar

thank you!