Artificial intelligence (AI) has moved fast. The past couple of years have turned what felt like sci-fi into software — accessible, affordable, and already being used in boardrooms around the world.
If you’re a board director or executive with limited time and a busy agenda, it can be hard to tell what’s hype, what the risks really are, and what’s genuinely useful.
This guide cuts through the noise, offering a clear-eyed look at where generative AI is today and how it can be used to prepare board packs that set directors up to succeed.
What is generative AI?
Before we dive into how AI can be used to improve board packs, let’s get clear on the basics: what exactly is generative AI?
Generative AI refers to a category of artificial intelligence that can produce original content in response to a prompt. This includes everything from text and images to code, spreadsheets, presentations, and even music. It doesn’t simply retrieve information; it creates new material, based on what it has learned from training data.
The most familiar version of this is the large language model (LLM). These models — like OpenAI’s GPT-4, Anthropic’s Claude, and Google’s Gemini — are trained on massive datasets of publicly available text. They learn to predict the next word in a sequence, which allows them to generate surprisingly coherent and context-aware output. For example, you can ask an LLM to summarise a long report or reword dense content for clarity. The results aren’t always perfect, but they’re often close enough to provide a valuable head start.
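To make the "predict the next word" idea concrete, here's a minimal sketch using the small open-source GPT-2 model (via the Hugging Face transformers library) as a stand-in for larger commercial LLMs; the prompt is purely illustrative:

```python
# A minimal sketch of next-token prediction, using the small open GPT-2
# model as a stand-in for larger commercial LLMs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The board approved the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The model's raw output is a score for every token in its vocabulary;
# softmax turns those scores into a probability distribution.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r:>14}  p={prob:.3f}")
```

Commercial models like GPT-4 do essentially the same thing at vastly larger scale, which is why their output reads as fluent prose rather than a word list.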
Modern models like GPT-4o and Claude 3 go a step further. They offer multimodal capability, which means they can process and reason across images, charts, PDFs, spreadsheets, and even audio. They can now handle hundreds of pages of text at once, making it possible to ask questions about an entire board pack. And they offer conversational fluency. You can now interact with them in a natural back-and-forth way, asking follow-up questions, refining outputs, and clarifying intent.
These upgrades make generative AI more than a curiosity. They make it practical. For directors and executives who need to digest complex board materials quickly and apply the information in a high-stakes context, these tools are rapidly becoming part of the workflow.
Pro tip: Read our guide ‘How generative AI works — and what it means for boards’ for more detail.
What makes today’s AI different from earlier tools?
It can be hard to keep up with the pace of innovation in the AI industry. Over the past few years, three big changes stand out:
- Scale: Models like GPT-4 Turbo can now process over 300 pages of text in a single go. That means you can feed in an entire board pack and ask for a high-level summary.
- Multimodality: GPT-4o and other new models can understand and analyse not just text but also tables, charts, PDFs, and audio. That makes them more useful in handling the varied formats found in board materials.
- Cost and accessibility: Running AI models used to be expensive. Now, tools are fast, cloud-based, and increasingly embedded into the platforms that boards already use, like Board Intelligence’s board portal.
In short, these models are now business-ready, and the AI landscape is maturing rapidly. For boards, that means:
- Model upgrades: New models offer faster performance and the ability to execute more complex tasks.
- Real case studies: Boards are already using AI tools like Board Intelligence’s Insight Driver to reduce preparation time and flag reporting gaps. In doing so, and by sharing lessons learned, they’re mitigating fears and encouraging wider adoption.
- Better integration: Board management software providers are embedding AI into features that are trusted, configurable, and designed for board and governance use cases.
Such rapid progress presents exciting opportunities for boards and directors — but also risks. So, what should boards watch out for?
What are the AI risks that are most relevant to boards?
AI isn’t perfect, and using it without clear oversight can introduce significant governance risks. There are three main issues that boards should keep in mind when assessing or making decisions about AI use: bias and fairness, hallucination, and data leakage.
1. Bias and fairness: AI recreates the biases it’s been trained on
AI models reflect the data they’re trained on—and if that data contains bias, the output will too. This is particularly risky in decisions around people, risk, or customer policy.
In a stark illustration of the problem, Oxford researchers found that a popular image-generating AI tool could create pictures of white doctors providing care to black children but not of black doctors doing the same for white children.
What a biased AI generates when prompted for images of a “black African doctor helping poor and sick white children”. © Alenichev, Kingori, Grietens; University of Oxford
In 2023, the UK’s Centre for Data Ethics and Innovation called for boards to put fairness and explainability at the heart of AI governance. Boards should require impact assessments for high-risk applications and ask for documentation on how model bias is mitigated.
2. Hallucination: AI makes up facts
AI models sometimes generate false or misleading content with total confidence. As Dr Haomiao Huang, an investor at renowned Silicon Valley venture firm Kleiner Perkins, puts it: “Generative AI doesn’t live in a context of ‘right and wrong’ but rather ‘more and less likely.’”
Often, that’s sufficient to generate a close-enough answer. But sometimes that leads to entirely made-up facts. And in an ironic twist for computers, AI can particularly struggle with basic maths.
An incorrect answer confidently served by ChatGPT. © Ars Technica
This can have real-life consequences. For example, in 2023, a lawyer submitted a legal brief in a US court that included fictitious case law invented by ChatGPT.
In a board context, hallucinated figures or misattributed citations could lead to poor decision-making and put directors and executives in tricky legal waters. That’s why AI-generated summaries or recommendations must always be reviewed by a human.
3. Data leakage: AI leaks information
Uploading confidential materials to public AI tools (like the free version of ChatGPT) can inadvertently expose sensitive corporate information by serving it up in response to prompts from other users. Imagine your CFO asking, “We want to acquire competitor X, what should I know about them?” and that information then being served to an outsider asking “What’s up with company X?”
This isn’t just a theoretical risk. In 2023, Samsung banned staff from using generative AI tools after engineers uploaded confidential source code to ChatGPT.
Running a private model within your organisation will not mitigate this risk entirely, as it would only stop leaks between your organisation and outsiders, not between people within your organisation. Imagine one of your colleagues, who doesn’t sit on the board, asking, “What should I include in my report to impress the CFO?” and the AI replying “The CFO is thinking of buying X, you should research that company.”
Boards should insist on using enterprise-grade AI platforms with security controls like data isolation, encrypted storage, and audit logging.
What about regulation?
Mitigating these risks doesn’t mean avoiding AI; it means using it responsibly. But that isn’t something boards can do in a vacuum: to do it well, they also need to understand the evolving regulatory context.
Global AI regulation is catching up, and boards need to be aware of emerging obligations:
- EU AI Act: Now in force, the EU AI Act is the world’s first comprehensive AI regulation. It places clear rules on high-risk systems (such as those affecting employment, credit, or safety) and introduces transparency rules for general-purpose AI models like ChatGPT. Some obligations have already come into effect.
- UK approach: The UK has taken a less prescriptive stance. Rather than legislating immediately, the government has issued guidance asking existing regulators (like the Financial Conduct Authority) to interpret AI risks within their own domains. However, calls for stronger rules are growing, particularly in financial services and critical infrastructure.
- US state laws: In the absence of federal regulation, US states are passing their own AI laws. Colorado’s SB 24-205, passed in May 2024, requires organisations to conduct impact assessments, notify consumers of AI use, and take steps to prevent algorithmic discrimination.
Boards should review their technology risk registers, identify where AI is in use, and ensure compliance with relevant frameworks. For high-risk use cases, consider conducting a Fundamental Rights Impact Assessment (FRIA) — a tool used in Europe to evaluate how AI systems might affect individuals’ rights and freedoms. The Council of Europe offers a useful guide.
How can AI be used by board directors?
Given regulators’ increasing focus on AI, and the limitations and risks we’ve highlighted above, you might think that AI is inherently flawed and has no place in your boardroom or your board pack. Well, that’s only half right.
The crux of the matter is that generative AI cannot yet be trusted as an author of board material, or to exercise the careful judgement expected of directors. It can, however, shine as a critical thinking partner and editor: its strength in analysing and rephrasing text lets it nudge directors and report writers in the right direction and act as an on-call sparring partner.
For example, from a director’s perspective, AI can help with:
1. Summarising long reports
This is perhaps the most obvious application for boards: feed in a long board report or policy document and ask the AI to generate a summary. If the tool has been designed by engineers who understand how boards work and what they need, you could ask it to flag anything requiring board-level attention or pick up points that speak to a particular theme or board priority.
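To illustrate, here's a minimal sketch of the kind of call a summarisation feature might make under the hood, using OpenAI's Python SDK. The model name, file name, and prompt wording are illustrative assumptions; a purpose-built board tool would layer its own prompt design and security controls on top.

```python
# A minimal sketch of summarising a board report with a general-purpose
# LLM via OpenAI's Python SDK. Model name, file name, and prompt wording
# are illustrative placeholders, not a specific vendor's implementation.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

with open("q3_board_report.txt") as f:  # hypothetical report file
    report_text = f.read()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "You prepare summaries for board directors. Summarise the "
                "report in five bullet points, then flag anything that "
                "requires board-level attention."
            ),
        },
        {"role": "user", "content": report_text},
    ],
)
print(response.choices[0].message.content)
```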
2. Spotting gaps or bias
If your AI tool knows what makes an effective board pack, and the insights your directors need, it can spot the blind spots in analysis and recommendations: for example, a proposal that hasn’t outlined alternative options or considered its impact on key stakeholders.
3. Scanning for risks and opportunities
Point AI at your board pack (and external data, too) to highlight emerging risks or opportunities. It can categorise them by committee relevance and surface issues before they become urgent.
4. Producing minutes and actions
When the board meeting is over, use AI to produce action lists and create a solid first draft of your meeting minutes, improving accuracy and saving your governance team time.
Get fit-for-purpose, properly formatted minutes with an AI tool built by board experts alongside governance professionals, one that keeps your data private at all times.
Find out more
How can AI be used in board packs?
These examples are just a starting point. In addition to saving time and easing the preparation burden for directors and governance teams, AI can also be used to fix the inputs: to improve the quality of board papers and ensure the information being served up to directors is rich in relevant insight. This is something that boards desperately need, with only 36% of directors thinking their board pack adds value and 68% scoring their board materials as “weak” or “poor”.
To explore the art of the possible, and identify where AI can help, let's dive a bit deeper into the two core components of the board pack writing process: critical thinking and great writing.
1. Stimulating critical thinking with AI
Great board papers don’t just share information; they surface breakthrough insights and game-changing ideas, and they stimulate both the author’s and the reader’s thinking.
AI can guide your report writers to deliver this, by letting them know if they’re falling into the most common board reporting and critical thinking traps and nudging them towards more actionable insights and plans. For instance:
- Does the content of your report link to the organisation’s big picture?
- Are you providing insight, or just information?
- Are you sufficiently forward-looking in your analysis?
- Are you sharing information candidly or massaging the message?
When issues are flagged, it’s up to the report author to decide how to act on that feedback. For example, maybe the bad news hasn’t been omitted and the report looks overly optimistic simply because last quarter was great. As the subject matter expert, the writer will know better than the AI. What matters is that the AI challenged the thought process and pointed out potential gaps in the writer’s thinking, helping produce a more robust paper and sharper insight for the board to act on.
Board Intelligence’s AI-powered management reporting software, Lucia, provides real-time prompts and feedback that challenge your team to think more deeply as they write.
2. Guiding great writing with AI
Robust thinking is only half the battle, however, and great information won’t do much good if it isn’t easy for the reader to process. Here, too, AI can help by asking questions that drive better communication:
- Has an executive summary been included?
- Are sentences short or long and rambling?
- Is the vocabulary simple, or complex and full of jargon?
Because generative AI excels at modifying existing text and “translating” it from one representation into another, it can help fix the issues it spots with the click of a button. For example, it can extract the report’s key points and put them into a best practice executive summary.
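As a hedged illustration of that "translate and fix" ability, the sketch below asks a model to turn a dense passage into a short executive summary; the prompt wording and example text are placeholders, not a specific product's implementation.

```python
# A minimal sketch of using an LLM as an editor: extracting a report's
# key points into a short executive summary. Prompt and text are
# illustrative placeholders.
from openai import OpenAI

client = OpenAI()

dense_section = """
The quarter saw a deterioration in gross margin attributable to input
cost inflation, partially offset by pricing actions taken in March...
"""  # truncated example text

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite the following into a three-sentence executive "
                "summary in plain English: state the situation, the 'so "
                "what' for the board, and the recommended action."
            ),
        },
        {"role": "user", "content": dense_section},
    ],
)
print(response.choices[0].message.content)
```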
Questions every board should ask about AI
To use AI effectively and responsibly, boards need to ask the right questions about it. We’d recommend asking the following five questions:
- Are we using AI in our board reporting? If so, where are we using it and what are we trying to achieve?
- How are we validating its outputs?
- Who’s accountable for errors or misuse?
- What’s our plan for model monitoring and retraining?
- Are we keeping a record of prompts and outputs for audit purposes?
Governance matters here. AI can help the board make better decisions, but only if it’s used with transparency and oversight.
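As a minimal illustration of the last question on that list, the sketch below wraps a model call so that every prompt and output is appended to an audit log. The log file name and wrapper function are assumptions for illustration; enterprise platforms typically provide audit logging as a built-in control.

```python
# A minimal sketch of keeping a record of prompts and outputs for audit
# purposes. The JSONL log file and wrapper function are illustrative
# assumptions, not a specific platform's API.
import json
from datetime import datetime, timezone

from openai import OpenAI

client = OpenAI()
AUDIT_LOG = "ai_audit_log.jsonl"  # hypothetical log location

def audited_completion(user: str, prompt: str, model: str = "gpt-4o") -> str:
    """Call the model, then append the prompt/output pair to the audit log."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    output = response.choices[0].message.content
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "model": model,
            "prompt": prompt,
            "output": output,
        }) + "\n")
    return output
```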
Next steps: how to bring AI into your board pack
To start using AI in your board pack, you don’t need to overhaul your whole process. Start small:
- Choose one or two areas where AI can save time, like summarising reports or drafting meeting minutes.
- Use tools that are purpose-built for board materials, with the right security and audit controls.
- Ensure human review remains part of the process to catch issues early and build trust in the outputs.
Board Intelligence’s AI tools are built for this. Take Lucia, our AI-powered report writing tool, for example. Its design is rooted in best practice reporting methodology, the QDI Principle, which means it helps management prepare board materials that hit the mark. And it’s designed with the controls boards expect, so it delivers the insight directors need in a format that’s easy to read and engage with, while keeping your data secure.
Want to try it for yourself?
Find out more about our AI tools or book a demo to see how Board Intelligence's AI-powered board management and board reporting tools can set your board up to succeed.
A thinking and writing platform that helps you to write brilliantly clever and beautiful reports that surface breakthrough insights and spur your business to action.
Find out more