When a director relies on an AI-generated summary to prepare for a board meeting, are they still exercising the independent judgement that company law requires of them? It's a question that regulators, auditors, and courts may soon be asking.
AI tools are already being used to summarise board papers, spot trends across them, and highlight what might require a director's attention. The appeal is obvious: boards are dealing with more information than ever before. Our research shows the average board pack now runs to 220 pages, up 27% since 2019.
As AI becomes embedded in how directors prepare and how boards reach decisions, the legal and governance implications deserve careful thought. We spoke to Professor Andrew Likierman, Professor of Management Practice at London Business School, whose work on independent judgement makes him well placed to assess what AI means for that duty. We also spoke to Jonathan Knight, Chief Product Officer at Board Intelligence, to understand what responsible AI governance looks like in practice.
Section 172 and the duty of independent judgement
Under section 172 of the UK Companies Act 2006, every director has a duty to act in the way they consider, in good faith, would be most likely to promote the success of the company for the benefit of its members. This duty, or one closely equivalent to it, exists in most major corporate law jurisdictions, including Australia, Canada, and the US. Critically, it is not a collective duty. It falls on each director individually. And it sits alongside the duty to exercise independent judgement, codified separately in section 173: each director must form their own view, based on information they have engaged with.
This is where AI introduces new complexity. If a director's understanding of a paper is shaped entirely by an AI-generated summary, and that summary has framed the issue in a particular way or omitted an alternative perspective, can the director demonstrate they exercised independent judgement? And if a decision is later challenged by a regulator, an auditor, or a court, what evidence exists to show that the director did more than accept what the AI tool told them?
The risks AI poses to independent judgement
Professor Likierman identifies two risks that concern him most.
The first is the tendency to defer to consensus. "How do you know when a director is exercising independent judgement rather than deferring to the consensus?" he asks. This is already a challenge in any boardroom, but AI has the potential to exacerbate it. If every director is reading the same AI-generated summary of a paper, they may arrive at the meeting having formed the same initial impression, shaped by the same framing choices embedded in the model. AI models are trained on existing patterns and tend to surface the most probable interpretation of a situation rather than the most challenging one. The framing a director receives may therefore reflect a particular viewpoint rather than independent analysis, without the director ever realising it.
The second risk is breadth without depth. Professor Likierman describes a pattern where directors feel informed because they have processed a lot of information quickly, without having truly interrogated any of it. AI tools, used passively, could make this worse. Consider a director who has used AI to skim-read summaries of ten agenda items in an hour. They arrive at the meeting with a surface-level view of each topic but without the depth to push back on management, spot a flawed assumption, or ask the question that changes the outcome of a discussion. Under section 172, that gap matters.
Is the existing framework adequate?
Professor Likierman's view here is measured. The section 172 framework is not broken, but it was designed for a world that looks very different to the one we're in now. The law sets the standard, but it does not specify the process by which a director meets it.
If a regulator or court examines whether a director exercised appropriate challenge, they will look for a record. What information did the director have access to? How was it framed? What questions did they ask? What alternative positions did they consider? AI tools that assist with preparation need to support that record, not obscure it.
What good practice looks like
Professor Likierman is clear about what responsible use of AI by a director should involve. It means treating AI-generated insights as a starting point, not an endpoint. It means asking what the AI has not surfaced, what alternatives have not been presented, and whether the framing of a recommendation reflects management's preferences rather than a balanced view of the options.
Used well, AI can strengthen a director's preparation. It can summarise papers and surface their key points and asks, provide contextual recaps of past packs to help directors spot trends across meeting cycles and governance forums, and identify what requires attention, from unmentioned options to forgotten stakeholders. The question is whether the director then applies their own judgement to that foundation.
How that is built into practice matters. Jonathan Knight, Chief Product Officer at Board Intelligence, explains: "The philosophy behind the development of our AI tools is that AI is an enabler, not a replacement. We always require a human author to sign off on AI-generated content. It is never part of the core document automatically. It appears in a separate box, and the human must actively review it and click to insert it."
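To make that concrete, here is a minimal sketch, in Python, of the gating pattern Knight describes: AI output lands in a holding area and can only enter the document through an explicit human action. It illustrates the pattern only; it is not Board Intelligence's implementation, and every name in it is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AISuggestion:
    """AI-generated text held in a separate box, outside the core document."""
    text: str
    reviewed_by: str | None = None  # set only by an explicit human action

@dataclass
class BoardPaper:
    body: list[str] = field(default_factory=list)  # the core document
    suggestions: list[AISuggestion] = field(default_factory=list)

    def propose(self, ai_text: str) -> AISuggestion:
        """AI output goes to the holding area; it never touches the body."""
        suggestion = AISuggestion(ai_text)
        self.suggestions.append(suggestion)
        return suggestion

    def sign_off(self, suggestion: AISuggestion, author: str) -> None:
        """A named human author records that they have reviewed the text."""
        suggestion.reviewed_by = author

    def insert(self, suggestion: AISuggestion) -> None:
        """The 'click to insert' step: refuses anything without sign-off."""
        if suggestion.reviewed_by is None:
            raise PermissionError("AI-generated content requires human sign-off")
        self.body.append(suggestion.text)
```

The design point is that the default path does nothing: content moves from the suggestion box into the paper only when a human actively approves it, and the sign-off leaves a record of who did.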
On section 172 specifically, Knight points to features within Board Intelligence's suite of tools designed to help boards meet that duty directly. "We give a strong indication of whether each type of stakeholder covered in section 172 has been considered in the paper, and flag where it has not been covered. Users cannot randomly prompt the AI and get a summary that might miss these points. It is part of a prompt-engineered system built and tested specifically around what matters to UK regulated boards."
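The output Knight describes, a per-category flag for each section 172 stakeholder group, is easy to picture. The deliberately naive sketch below uses keyword matching to stand in for the prompt-engineered checks the product actually runs; the category names track the factors listed in section 172(1), but the keywords are purely illustrative.

```python
# Stakeholder factors drawn from s.172(1)(a)-(f), paired with toy keywords.
# A real system would use an LLM-based check, not string matching.
STAKEHOLDER_CATEGORIES: dict[str, tuple[str, ...]] = {
    "long-term consequences": ("long-term", "long term", "horizon"),
    "employees": ("employee", "staff", "workforce"),
    "suppliers and customers": ("supplier", "customer", "client"),
    "community and environment": ("community", "environment"),
    "business reputation": ("reputation",),
    "fairness between members": ("shareholder", "member"),
}

def coverage_flags(paper_text: str) -> dict[str, bool]:
    """Flag, per category, whether the paper appears to address it at all."""
    text = paper_text.lower()
    return {
        category: any(keyword in text for keyword in keywords)
        for category, keywords in STAKEHOLDER_CATEGORIES.items()
    }

def uncovered(paper_text: str) -> list[str]:
    """The categories a human author should be prompted to address."""
    return [c for c, covered in coverage_flags(paper_text).items() if not covered]
```

Because the checklist is pinned to the statute rather than left to a free-form prompt, a summary cannot silently skip a category: every paper is assessed against the same fixed set of factors.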
Where the governance framework may need to catch up
If AI is shaping what information directors see and how it is framed, should boards be required to disclose that? Should governance codes specify what a director's engagement with AI-assisted materials should look like? These are questions regulators and code-setters have not yet answered.
Boards that are serious about demonstrating they have fulfilled their duties will need to think carefully about how they document the role AI plays in their processes, and how they make clear that human judgement has been employed. The question for directors, company secretaries, and general counsel is whether they can demonstrate independent judgement before the governance framework requires them to.
With Insight Driver, surface the questions worth asking, the stakeholders worth considering, and the context worth knowing.
See Insight Driver

FAQs
What is section 172 of the Companies Act 2006?
Section 172 of the Companies Act 2006 requires every director to act in the way they consider, in good faith, would be most likely to promote the success of the company for the benefit of its members. This is not a collective duty. It falls on each director individually. Each director must form their own view, based on information they have engaged with.
What evidence would a regulator or court look for to assess whether a director met their section 172 duty?
A regulator or court examining whether a director exercised appropriate challenge would look for a clear record. What information did the director have access to? How was it framed? What questions did they ask? What alternative positions did they consider? AI tools that assist with preparation need to support that record, not obscure it.
What does good practice look like for a director using AI tools?
Good practice means treating AI-generated insights as a starting point, not an endpoint. It means asking what the AI has not surfaced, what alternatives have not been presented, and whether the framing of a recommendation reflects management's preferences rather than a balanced view of the options. It also means testing AI outputs against the underlying papers and asking questions that probe assumptions rather than confirm them.
How can AI tools be designed to support section 172 compliance?
Well-designed AI tools build human accountability into the process. At Board Intelligence, human authors are always required to sign off on AI-generated content before it enters the board pack. It is never inserted automatically. Users must also acknowledge before using any AI feature that they remain the decision-maker. In addition, Board Intelligence's tools are designed to flag whether each category of stakeholder covered by section 172 has been considered in a paper, and to highlight where it has not.
