now that i've published enforcement timelines for three separate regulatory deadlines, i fully expect all of them to get delayed by at least six months. you're welcome. consider this article a public service.
Lol yep they are already talking 2027 or 2028
love the new avatar. yeah, december 2027. by the time these rules launch they'll be outdated or too late lol. good luck regulating these local models if you kick the can any more.
Solid overview. Enterprises with compliance experts and budgets will figure this out. It'll be painful and expensive, but they'll get there. What I keep thinking about is the gap below: SMEs and solo developers shipping or deploying increasingly capable AI systems without meaningful governance infrastructure. If those systems land in high-risk territory under the EU AI Act, they'll need the high-risk compliance stack across the system's lifecycle. The Act has sandboxes and microenterprise carve-outs, but no blanket SME exemption. When enforcement hits, flying without governance becomes bet-the-company.
really appreciate that Jonatan. i think you hit on an interesting perspective. ai coding is enabling a lot of smaller organizations to ship and deploy with fewer people. they aren't going to have the same expert GRC staff available.
and from what i can tell, these have teeth. will be interesting to watch this unfold.
“You can build whatever AI you want, but you must prove you understand the risks, control them, and can audit your decisions.”
i wonder how many can actually do this. hah.
Discussed this in Thunderous applause. It's not enough.
haha great reference. these are deranged compared to their first iterations. it’s better than nothing, but i think FOMO keeps them from getting too aggressive.
This is going to be an aggressive game of whack-a-mole
I literally have no doubt lol. and with the U.S. one, the federal government is trying to override California's authority. and now EU is talking about a delay to Dec 2027.
The preemption fight is the tell. When the federal response to fragmented enforcement is 'let us override the states' instead of 'here's a coherent standard,' you know nobody has the architectural answer yet. Delay doesn't fix the substrate problem, it just moves the fine date.
and i think the problem compounds with the speed of ai innovation. by the time these hit, they will be outdated.
Bingo, reminds me of the anti-virus industry in the '90s, always one mutation behind.
Wow… a treasure trove list in 2026 for AI Governance, thanks for sharing!!
Buddy, the audit trail gap you described is real…
so we built something for it.
Compliance Labs produces mechanistic interpretability audits of fine-tuned AI models — basically, we open the model, measure every internal feature the fine-tuning created, modified, or eliminated, and produce a signed technical document of what the model actually learned. Layer by layer. Statistically validated. PhD-signed and ready to file.
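To make the "created, modified, or eliminated" idea concrete, here's a toy numpy sketch of that kind of feature diff. The activations are synthetic, the thresholds are arbitrary, and `classify_features` is an illustrative name, not the actual Compliance Labs pipeline:

```python
# Toy sketch: label each feature as created / eliminated / modified /
# unchanged by comparing mean |activation| between a base and a
# fine-tuned model on the same probe inputs. All data is synthetic.
import numpy as np

rng = np.random.default_rng(1)
EPS = 0.05  # "feature is active" threshold — an arbitrary illustrative choice

def classify_features(base_acts, tuned_acts, eps=EPS):
    """Classify per-feature change from mean |activation| on shared probes."""
    b = np.abs(base_acts).mean(axis=0)
    t = np.abs(tuned_acts).mean(axis=0)
    labels = np.full(b.shape, "unchanged", dtype=object)
    labels[(b < eps) & (t >= eps)] = "created"
    labels[(b >= eps) & (t < eps)] = "eliminated"
    moved = np.abs(t - b) > 0.5 * np.maximum(b, eps)
    labels[(b >= eps) & (t >= eps) & moved] = "modified"
    return labels

# Synthetic activations: 200 probe inputs, 8 features per layer.
base = np.abs(rng.normal(size=(200, 8)))
tuned = base.copy()
tuned[:, 0] = 0.0   # fine-tuning wiped out feature 0
tuned[:, 1] *= 3.0  # fine-tuning strengthened feature 1
base[:, 2] = 0.0    # feature 2 only exists after fine-tuning

print(classify_features(base, tuned))
```

A real audit would of course work on actual model internals rather than magnitude summaries, but the diff-and-classify shape is the same.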
The thing your readers are going to hit: Article 13 doesn't just want a policy document. It wants technical documentation of what changed between your base model and your deployed version. Most teams have no idea how to produce that. We do.
Our methodology just went into NeurIPS 2026 review. One of our core findings is that output testing alone isn't enough — a model can show major internal reorganization while outputs look totally normal on standard evals. Neither measurement alone is sufficient. (We call it the two-measurement requirement, because naming things is fun.)
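Here's a minimal synthetic sketch of that failure mode — not the audit methodology itself, just numpy stand-ins showing how internals can reorganize substantially while a readout-based eval stays perfectly clean:

```python
# Sketch of the "two measurements" point: outputs can agree 100% while
# internal representations have clearly drifted. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(0)

def cosine_drift(base_acts, tuned_acts):
    """Mean cosine distance between matched activation vectors."""
    num = (base_acts * tuned_acts).sum(axis=1)
    den = np.linalg.norm(base_acts, axis=1) * np.linalg.norm(tuned_acts, axis=1)
    return float(1.0 - (num / den).mean())

# "Base model" activations: 100 probe inputs, one 64-dim layer.
base = rng.normal(size=(100, 64))

# Fine-tuning scrambles the first 10 dimensions (reverses their order)...
tuned = base.copy()
tuned[:, :10] = base[:, 9::-1]

# ...but a readout that only looks at dims 10-14 never notices.
readout = np.zeros((64, 5))
readout[10:15] = np.eye(5)
output_agreement = float(
    (np.argmax(base @ readout, axis=1) == np.argmax(tuned @ readout, axis=1)).mean()
)

print(f"output agreement: {output_agreement:.2f}")            # looks clean on evals
print(f"internal drift:   {cosine_drift(base, tuned):.3f}")   # clearly nonzero
```

Output testing alone sees the first number; only the internal measurement sees the second. That's the whole argument for needing both.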
If you've got readers staring down August 2 with a fine-tuned model and no documentation — http://compliance-labs.ai. The math against €35M works out pretty fast.
Great breakdown as always. The shadow AI inventory problem is going to age like milk.