Agentic AI systems act. They don't wait to be prompted. They initiate processes, make decisions within those processes, and pursue goals autonomously. The pace of adoption makes the governance implications urgent: a PwC survey of 300 senior executives found that 79% say AI agents are already being adopted in their companies, and 88% plan to increase AI-related budgets in the next 12 months because of agentic AI. For governance leaders, that raises pressing questions about accountability, oversight, and risk.
Recently, Board Intelligence brought together cyber security and governance leaders to explore what agentic AI means for organisations and how boards and governance teams should respond. The conversation, chaired by Anna Scholes, Head of Advisory at Board Intelligence, drew on the expertise of Eddie Lamb, Global Head of Cyber at Hiscox; Ted Cowell, Head of UK Cyber Business Incident Response at S-RM; and Jonathan Knight, Chief Product Officer at Board Intelligence.
Three themes defined the discussion.
1. Shadow adoption is already happening
The first thing to understand is that agentic AI isn't coming. It's here.
"You can almost guarantee that people in your organisation are building and using agentic tools today," said Jonathan Knight. "Some of it will be well supervised and under control. Some of it won't."
This is what practitioners call shadow adoption: individuals picking up AI tools and using them outside official channels. The motivations are often entirely reasonable — for example, someone wants to streamline a workflow or improve a customer experience. The outcome, however, can expose the organisation to risks it hasn't assessed or even identified.
Similar risks apply even when officially sanctioned tools are used to build internal workflows. A governance professional who builds an agent to work through board pack tasks may see it as low risk: it's their workflow, their data, they understand it. But useful tools rarely stay contained. Once shared more widely, an agent built without failure protocols, audit trails, or thorough testing is operating at scale, used by people who don't understand it, in ways its creator never anticipated and the organisation never approved. That is when the risk becomes real.
For those looking to use AI for board reporting in a governed, purposeful way, our guide to 3 ways you can use AI to write your board papers is a practical starting point.
2. Decision accountability gets complicated fast
When an AI system makes a decision or initiates an action, who is responsible for the outcome? It's a question that regulators, lawyers, and governance professionals are still working through. It matters, because boards are already struggling with decision-making quality. According to the Board Value Index, the average confidence level in boardroom decision-making is just 30.8 out of 100. The biggest barriers directors point to are poor-quality information and unclear roles and responsibilities. Agentic AI, without the right controls, makes both worse. Agents can generate and surface information faster than directors can verify it, and in some environments that verification gap is even more acute.
Eddie Lamb was direct about the exposure: "You can imagine the governance issue that arises when you retire a model that's made tens of thousands of process decisions. How do you then piece together the audit trail for how you ended up in that situation?"
In regulated industries, where individual decisions can be scrutinised years later, that's not a hypothetical concern.
Board Intelligence's practical response to this risk is to ensure every AI use has a named human owner. As Jonathan Knight explained: "We're not yet at the stage where we allow AI to do something completely autonomously and just go on and on making changes. We believe in human review and clear accountability at every stage."
Building human oversight into the workflow, rather than adding it afterwards, is what responsible adoption looks like. Three foundations matter most: clear audit trails that record what was done by AI and who owned the process; failure protocols that define what happens when an agent produces an unintended outcome; and explicit human ownership of every AI-enabled process.
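To make those foundations concrete, here is a minimal Python sketch of what logging every agent action against a named human owner might look like. It is illustrative only: the AuditRecord structure, the run_agent_step function, and the field names are all hypothetical, not a description of Board Intelligence's platform or any specific product.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AuditRecord:
        """One audit-trail entry: what the AI did, and which human owns it."""
        action: str              # what the agent did
        owner: str               # the named human accountable for this process
        reviewed_by_human: bool  # was the output checked before taking effect?
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    def run_agent_step(action: str, owner: str, trail: list[AuditRecord]) -> None:
        """Record an agent action against a named owner before it proceeds."""
        if not owner:
            # Failure protocol: an action with no accountable human is
            # rejected loudly, not executed silently.
            raise ValueError(f"No named human owner for action: {action!r}")
        trail.append(AuditRecord(action=action, owner=owner, reviewed_by_human=True))

    trail: list[AuditRecord] = []
    run_agent_step("Summarise Q3 risk report for the board pack", "j.smith", trail)
    for record in trail:
        print(record)

The design choice the sketch illustrates is that ownership is enforced before an action is recorded, not reconstructed afterwards: an action with no accountable human fails loudly rather than executing silently.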
3. Starting small is a strategy, not a delay
The organisations that get the most from agentic AI are not the ones waiting for certainty. They're the ones experimenting now, in controlled environments, learning what works, and expanding their use from there. The goal is to capture value without exposing themselves to risks they haven't assessed.
"Start with something that perhaps isn't going to blow your business up if it goes wrong," said Eddie Lamb. "Experiment, learn, and grow."
Boards that wait for certainty before acting are taking a risk of their own. Agentic AI is fundamentally reshaping how organisations operate and what it takes to compete. Eddie Lamb was unambiguous: "If you want to survive, your digital workforce will be agentic based. You're going to have to learn to use it."
Boards that want to govern this well need to move from observation to active oversight. That means getting comfortable with a risk-based approach: understanding the specific exposure of each use case, setting a clear risk appetite, and defining the conditions under which experimentation is permitted to expand. It also means using AI at the board level, so that directors can build hands-on experience with the technologies shaping their organisations.
What boards should be asking
For governance professionals assessing AI tools, Jonathan Knight suggested three areas of focus.
The first is information security. The same questions your IT team asks of any system apply here, with some additional ones specific to AI. How could this model fail or be attacked? How is training data handled? What are the access controls? Our definitive guide to AI for the board pack covers the key risks and how to assess them.
The second is scope and use case. Is the AI being used for something appropriate? Extracting data for a human to read, with clear grounding so the reader knows where the information came from, is a sound use case. The scrutiny required rises significantly when AI is being asked to drive consequential decisions without human review. And will it stay that way? Scope creep is a real danger: the initial user often understands how to use an AI tool safely in their own domain, but the same tool, applied in a different context by a different person, can expose different risks.
The appropriate level of human oversight depends on context. Ted Cowell offered an example: S-RM uses agentic AI within its cyber fusion centre to help manage security risks at machine speed. Some threats move too fast for human intervention to be the first line of response. In those cases, agentic AI, deployed responsibly, is part of the answer.
The third is education. Directors and governance professionals who understand what agentic AI can and can't do are better placed to push back when something doesn't seem right and to ask the questions that protect the organisation. That understanding is still developing at the most senior levels, which is itself a governance risk. Our guide on how high-performing directors prepare for board meetings with AI offers a practical starting point for closing that gap.
The governance role is to get ahead
Governance professionals are not bystanders in this transition. They are uniquely placed to shape how their organisations navigate it. That means treating AI governance as a live agenda item and insisting on audit trails, failure protocols, and human ownership before AI systems go anywhere near consequential decisions.
A useful starting point is understanding where your board currently stands. Board Intelligence's AI Readiness Radar is a free tool that assesses how well your board's frameworks, practices, and processes are adapted for the AI age. It takes a few minutes and gives you a clear view of where to focus first.
