
When AI Governance Collides with State Power
What the Anthropic Dispute Reveals About the Next Phase of Responsibility
Artificial intelligence governance is often framed as a regulatory issue.
The European Union’s AI Act reflects this logic: classify risk, impose obligations, document oversight, create traceability. It assumes that governance can be structured through law.
Recent events in the United States suggest a different trajectory.
The public confrontation between the U.S. administration and Anthropic (triggered by the company’s refusal to remove safeguards against mass domestic surveillance and fully autonomous lethal weapons without human oversight) reveals something deeper:
AI governance is no longer confined to regulation but is entering the terrain of power.
Two Emerging Logics of Governance
The EU AI Act represents governance by codified architecture. It defines risk categories, formalises compliance pathways, and documents accountability chains.
The system is designed to constrain harm before it scales.
The Anthropic dispute illustrates governance by executive leverage.
When a company asserts internal red lines and the state responds by invoking national security and supply chain risk designations, the question is no longer one of regulatory compliance.
It becomes a confrontation between corporate conscience and state imperative.
These are two very distinct governance logics:
- Normative institutional design
- Strategic sovereign power
Boards operating globally must now understand both.
Corporate Red Lines in a Geopolitical Context
Anthropic’s position was not framed as defiance. It articulated boundaries, two of which were particularly important:
- No mass surveillance of domestic populations
- No fully autonomous lethal weapons without human involvement
These were not only technical constraints; they were also governance commitments.
But when AI capabilities intersect with national defense priorities, internal commitments may collide with sovereign objectives.
The unresolved question is not whether AI can be regulated but whether corporate governance can maintain principled boundaries under state pressure.
When Compliance Is Not Enough
A company may fully comply with existing regulation and still face demands that exceed regulatory frameworks.
The EU AI Act defines high-risk systems, structures oversight and introduces documentation and audit requirements.
But regulation does not anticipate every geopolitical escalation.
In the United States case, the conflict did not arise from Anthropic’s non-compliance but from its refusal to remove internal safeguards.
This suggests that AI governance is evolving beyond regulatory checklists and is becoming a matter of institutional posture.
The Board-Level Implication
Boards must now ask questions that extend beyond adoption strategy and regulatory mapping:
- What are our non-negotiable red lines in AI deployment?
- Who defines them?
- Can they withstand executive or state pressure?
- How do we reconcile sovereign demands with internal governance architecture?
- Are we prepared for AI governance to become geopolitical?
These are not hypothetical concerns. They reflect the emerging reality of AI as strategic infrastructure.
Executive Reflection
AI systems are no longer experimental tools. They are becoming institutional actors within economies, security frameworks, and geopolitical rivalries.
When governance moves from compliance to confrontation, boards can no longer treat AI oversight as a technical or procedural function. It becomes a question of institutional identity.
In an AI-shaped world, responsibility does not diminish. In fact, it becomes structurally contested.
And when structures are contested, governance must become explicit.
Igor Allinckx
Board Governance · AI & Humanity
February 2026
Part of an ongoing exploration of governance, AI, and human judgment.