
Boards & AI – Insight or Validation?

 

Whether AI challenges strategy or is used to confirm it.

 

Artificial intelligence is often presented as a tool that will help leaders make better decisions.

But there is a subtle governance question behind that promise.

Do boards use AI to discover information… or to validate what they already believe?

The distinction matters more than it seems.


 

AI as a strategic confirmation tool

 

In many organizations, the real question asked of AI systems is not:

“What does the data reveal about our strategy?”

It is closer to:

“How can AI help confirm that our strategy is the right one?”

Patterns like these are increasingly common:

• A board wants to invest in AI → analyses are commissioned to justify the investment.
• A leadership team plans cost reductions → models are asked to generate scenarios supporting the restructuring.
• A company wants to demonstrate innovation → AI becomes part of the strategic narrative rather than a critical tool.

In these situations, AI risks becoming something unintended:

a machine that legitimizes decisions already taken.


 

The “AI authority bias”

 

When an answer comes from an AI system or a sophisticated model, it often carries an aura of authority.

Outputs appear objective, scientific, and neutral.

Yet AI systems never operate in a vacuum.

The questions asked shape the answers, the data selected shapes the analysis, and the assumptions embedded in prompts or models frame the conclusions.

In other words, the perceived neutrality of AI can in fact reinforce the beliefs that leadership teams already hold. This creates what might be called an AI authority bias.


 

When boards prefer reassurance to contradiction

 

Effective boards use information to challenge assumptions and stress-test strategic thinking.

But organizational dynamics often encourage the opposite:

• dashboards (that reassure)
• KPIs (that confirm trajectory)
• analyses (that support the narrative)

In that environment, AI can easily become an intelligent mirror of leadership convictions, rather than an instrument of productive contradiction.


 

The real governance question

 

The issue is therefore not: “Does AI produce the truth?”

The deeper governance question is simpler: Is the board willing to be challenged by what AI reveals?

If the answer is no, AI becomes a tool of narrative validation.

If the answer is yes, AI can become a powerful instrument for strategic stress testing.

Mitigating the confirmation trap

 

Some organizations are beginning to experiment with governance practices designed to reduce this risk.

One approach adapts a concept from cybersecurity: red teaming.

Instead of asking AI to support the current strategy, boards explicitly ask systems to challenge it:

• identify fragile assumptions in the strategic plan
• generate opposing scenarios
• stress-test business models under adverse conditions
• surface unintended consequences

In other words, AI is deliberately used against the organization’s own beliefs.

Other governance practices can reinforce this discipline:

• requiring alternative AI-generated scenarios alongside official analyses
• separating the teams producing AI analysis from those proposing strategic decisions
• including “contradiction sessions” in board discussions where AI outputs are used to question prevailing assumptions

These mechanisms shift AI from a validation tool to a structured challenger of leadership thinking.

Executive Reflection

 

Artificial intelligence expands analytical capability.

But whether it improves decision quality depends less on the technology itself than on how leaders choose to use it.

In the end, the governance question may be straightforward:

Do we want AI to confirm our convictions, or to test them?

Because the real risk may not be that AI is wrong, but that it becomes too convenient when it agrees with us.

Igor Allinckx

Board Governance · AI & Humanity

March 2026

Related insights

Why More AI Literacy Is Not Enough
Why Models Generalise - Organisational Power and Risk
AI Governance and Responsibility


Part of an ongoing exploration of governance, AI, and human judgment.
