
AI raises the standard of board oversight

Helle Bank Jorgensen | Last Updated: 05/05/2026

The question boards should be asking is not what AI can do in the boardroom. It is whether they are willing to engage with what it finds.

The dominant conversation around AI in governance still focuses on capability: what the tools can do, how fast they can do it, what categories of analysis they can now perform at scale. That remains the wrong frame.

Capability is no longer the constraint. The constraint is how boards respond to what they are given.

A highly capable tool applied to a flawed process does not correct that process. It scales it.

Boards that tend to anchor on management's recommendation will find AI reinforcing that anchor, often with greater analytical confidence. Boards that move too quickly through complex decisions will find AI making that speed easier to justify, because more analysis appears to have been done.

In both cases, the appearance of rigour increases. The quality of judgement does not necessarily follow.

The question is whether the board is willing to engage with what it finds, particularly when that analysis introduces friction rather than clarity.

That willingness has nothing to do with technology. It comes from culture, from how boards are chaired, from whether challenge is sustained or quietly redirected. In many boardrooms, the constraint is not access to insight but the discipline to stay with it long enough to test its implications. AI does not remove that constraint. It makes it more visible.

What is actually changing

Most AI applications in governance so far have focused on efficiency: faster synthesis of board packs, cleaner summaries, better access to information. They are useful but limited. They improve how boards consume information. They do not fundamentally change how boards challenge it.

The real shift begins when AI moves upstream, used not to explain the paper but to interrogate it before the discussion takes place.

When Lloyds Banking Group introduced a purpose-built AI agent into its boardroom environment, it was described as preparation support, enabling faster analysis and exposure to a broader range of perspectives before directors enter the room. That framing matters. The tool is designed to strengthen individual judgement ahead of discussion, not to substitute for it during the meeting.

AI can apply interrogation consistently across every proposal, not just the ones that happen to attract attention. It can surface the assumptions that carry the recommendation, identify where those assumptions rely on limited or unstable evidence, and highlight where the logic has not been fully tested under different conditions.

Strong directors have always done this. What changes is the consistency, the coverage, and the starting point of the conversation.

Instead of spending time understanding the paper, boards can enter the room already aware of where the pressure points sit. The discussion can move more quickly to trade-offs, to risk tolerance, and to what the organisation is prepared to accept.

Boards do not need more information. They need better-prepared conversations. That is the gap AI can close. And tools like IQ by Board Intelligence have already begun that process.

Practical implications

For boards, this raises a more precise question about how AI is being positioned in practice. Is it being used to shorten the path to a decision, or to make that path more demanding? The distinction is not subtle. It determines whether AI strengthens governance or quietly weakens it.

Several practices become important as these tools are adopted:

First, AI outputs need to enter the process as questions, not conclusions. Treating model-generated analysis as an answer closes discussion too early. Treating it as a structured challenge keeps the board in an interrogative posture.

Second, boards should use AI to test the recommendations they are inclined to accept, not just the ones they are already sceptical of. The risk is not that AI introduces bias, but that it reinforces existing bias if it is only applied selectively. Used properly, it should be directed toward the areas of highest implicit confidence, calling out the biases that directors do not recognise.

Third, boards should expect greater transparency from management on how AI has shaped what is being presented. If AI has been used in developing a proposal, directors should understand what data informed it, what assumptions were embedded, and where human judgement has been applied or overridden. Without that visibility, it becomes harder to assess the strength of the recommendation and to exercise proper oversight.

Fourth, boards will need to recalibrate how they assess management confidence. AI-supported analysis can make recommendations appear more robust than they are, particularly when outputs are presented cleanly and with apparent precision. Directors need to look through that presentation and re-engage with the underlying uncertainty.

None of this is technically complex. All of it requires intent. It requires chairs and governance leads to set expectations about how AI is used and challenged, and to reinforce those expectations consistently over time.

The way forward

The tools will continue to improve. They will become faster, more integrated, and more embedded in how information flows to the board.

The harder question is whether boards will develop the habits to use them well, or whether they will become dependent on outputs they are not equipped to challenge.

The deployment at Lloyds Banking Group has created a visible reference point. Other boards will face increasing pressure to demonstrate that they are adopting similar tools. That pressure is understandable, but it carries a familiar risk.

Adoption without discipline leads to governance in form rather than substance.

The boards that navigate this well will be the ones that use AI to raise the standard of their own challenge, not to reduce the effort required to meet it.

Judgement under uncertainty does not become easier as analysis improves. It becomes more exposed.

Once assumptions are visible, once risks are mapped, once scenarios are laid out, the board must still decide what it is willing to accept. That decision remains human and carries accountability that cannot be delegated.

AI surfaces the quality of a board's thinking. It does not improve it by default. That depends on how rigorously directors engage with what it reveals.
