
AI systems rarely fail dramatically. They drift.

 

Why ethical failure can occur even when systems perform correctly.

 

Most governance systems are designed to detect failure. Investigations search for malfunction, negligence, bias, or procedural breach.

AI-enabled environments introduce a more uncomfortable condition. A system can perform exactly as designed, with accurate data, a well-performing model, and processes following protocol, and yet the outcome may still be problematic.


 

When everything works

 

Ethical failure in AI-mediated environments does not necessarily arise from error, bias, or misconduct. It can emerge when correct decisions accumulate in directions that slowly weaken institutional stability, legitimacy, or trust.

Nothing appears broken. And yet the broader system may begin to erode.

This is ethical failure without error.


 

Local success, systemic fragility

 

AI systems optimise locally. They improve predictions, reduce cost, accelerate execution, and increase efficiency.

Boards often evaluate outcomes through similar lenses: performance indicators, productivity metrics, and measurable gains.

But harm often emerges at a different scale:

  • Institutional trust may weaken.

  • Tacit expertise may erode.

  • Organisational resilience may thin.

  • Externalities may accumulate beyond reporting lines.

No individual decision is irrational, but the cumulative direction may still become destabilising.


 

Metrics and their limits

 

Many organisations attempt to address this gap by introducing new metrics: trust indicators, ESG frameworks, stakeholder measures, engagement surveys.

These signals can be valuable. But they cannot fully resolve the ethical challenge.

Once translated into optimisation targets, even legitimacy indicators risk being absorbed into the same performance logic that produced the fragility.

Metrics can help signal drift. They cannot replace responsibility.


 

Explainability is not answerability

 

Explainability clarifies how an outcome was produced. It answers an epistemic question.

Ethics asks a different one: Who remains responsible for the consequences?

An organisation may be able to explain every step of an AI-mediated decision chain and still face ethical failure if no one remains visibly accountable for its effects.

Understanding is not ownership.
 

Why drift escapes governance

 

Dramatic failures trigger response. They produce investigation, reform, and accountability.

Drift behaves differently.

It accumulates through incremental optimisation, distributed decisions, and reasonable trade-offs. Each step appears justified. Each improvement is defensible.

The direction becomes visible only slowly.

By the time the pattern emerges, no single decision appears responsible for the outcome.

We explore this in more detail here: misalignment does not always appear as failure.


 

Executive Reflection

 

AI systems rarely collapse in ways that immediately reveal their ethical implications.

More often, they reshape institutions gradually through countless reasonable decisions.

Boards therefore do not govern only failure.
They govern drift.

The ethical challenge is not only to correct what breaks, but to remain attentive to the direction that “repeated correctness” is taking the institution.

Igor Allinckx

Board Governance · AI & Humanity

March 2026


Part of an ongoing exploration of governance, AI, and human judgment.
