Strategy

NEDs and AI: What Boards Need to Know

The AI Governance Crisis: Why Most Boards Are Getting It Spectacularly Wrong

Published:

02.04.25

Picture this scene: You're sitting round the boardroom table listening to yet another enthusiastic presentation about artificial intelligence. The slides look impressive, the business case seems solid, and everyone's nodding along. But here's what nobody wants to say out loud: whilst 78% of organisations now use AI somewhere in their business, independent research by MIT NANDA suggests that up to 95% of these initiatives deliver absolutely no measurable impact on the bottom line.

That's not a typo. Ninety-five percent.

The gap between AI hype and AI reality has become a chasm that's swallowing billions in shareholder value. And as a board director, you're ultimately responsible for making sure your organisation doesn't become another casualty.

When AI Goes Spectacularly Wrong

Take Zillow's cautionary tale. Their algorithm-powered house-flipping programme lost $881 million in 2021 because their model couldn't adapt when the pandemic turned property markets upside down. The technology worked exactly as designed. The problem was that nobody had thought to design proper governance for when things went sideways.

Now contrast that with JPMorgan Chase. They're running 450 AI applications in production, generating somewhere between $1 billion and $1.5 billion in annual value, with plans to scale to 1,000 use cases by 2026. Their COIN system alone has automated document analysis that used to consume 360,000 hours of lawyer time each year.

What's the difference? It wasn't the technology. It wasn't the size of their AI budget. It was governance.

JPMorgan didn't start by asking "what cool things can AI do?" They asked "what expensive manual processes are killing our productivity?" They built rigorous oversight from day one. They scaled systematically, learning from both successes and failures.

The lesson is uncomfortable but clear: AI governance isn't something you can delegate to the technology committee. It's a strategic imperative that determines whether AI becomes your competitive advantage or your most expensive mistake.

The Numbers That Should Worry Every Director

Here's a statistic that kept me awake last night: only 31.6% of S&P 500 companies disclosed any form of board-level AI oversight in their 2024 proxy statements. Think about that for a moment. We're dealing with technology that could fundamentally reshape entire industries, and fewer than a third of major companies have formal board governance for it.

This isn't just a compliance gap. It's creating genuine competitive moats. BCG's latest research shows that AI leaders expect 60% higher revenue growth and 50% greater cost reductions than companies that are struggling to get AI right. These aren't marginal differences. These are the kind of performance gaps that decide who survives the next decade.

Yet most boards are still treating AI like previous technology rollouts. The predictable result? Organisations chase shiny new tools instead of solving real business problems. They skimp on data quality because it's boring. They ignore the human side of the equation because it's complicated.

What Successful AI Governance Actually Looks Like

The organisations getting this right have figured out three fundamental principles:

First, they start with business strategy, not technology capability. Goldman Sachs didn't rebuild their entire operational platform around AI because it sounded exciting. They did it because they identified specific bottlenecks where manual processes were constraining growth. The result? Twenty percent productivity gains across their development teams.

Microsoft took a similar approach with their "Customer Zero" strategy. They deployed Microsoft 365 Copilot across their own organisation first, using themselves as the test case before rolling it out to customers. This wasn't about proving the technology worked. It was about understanding where it created real value versus where it just created busywork.

The common thread is this: successful AI starts with clear business problems, not technology solutions. Every AI initiative should have a simple answer to the question "what specific business problem does this solve, and how will we measure success?"

Second, they've built new approaches to risk management. AI creates risks that don't fit into traditional frameworks. Algorithm bias isn't just an ethical concern anymore. It's a legal liability. Just ask iTutorGroup, which had to pay $365,000 to settle discrimination charges because their AI recruiting tools were systematically filtering out older candidates.

Then there's model drift, where AI systems that work perfectly at launch gradually become unreliable as market conditions change. Most boards don't even know this is a risk, let alone have processes to monitor for it.

The good news is that risk management frameworks are catching up. ISO 42001:2023 provides the first comprehensive international standard for AI management systems. The US National Institute of Standards and Technology has published detailed guidance on managing AI risks, particularly around large language models that can generate convincing but completely false information.

Smart boards are establishing clear trigger points: hallucination rates above 1% on customer-facing systems, performance drift exceeding 5% week-over-week, or any bias test failures should prompt immediate executive review.
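Those trigger points can be expressed as a simple review gate. The thresholds below come from the article; the `AIMetrics` fields and function names are illustrative assumptions, a sketch of how a management team might operationalise them rather than any specific product:

```python
from dataclasses import dataclass

@dataclass
class AIMetrics:
    """Quarterly metrics for one customer-facing AI system (illustrative)."""
    hallucination_rate: float   # fraction of outputs flagged as false (0.01 = 1%)
    weekly_drift: float         # week-over-week performance change (0.05 = 5%)
    bias_tests_passed: bool     # result of the latest fairness test suite

def needs_executive_review(m: AIMetrics) -> list[str]:
    """Return the triggered conditions (empty list means no escalation)."""
    triggers = []
    if m.hallucination_rate > 0.01:          # above 1% on customer-facing systems
        triggers.append("hallucination rate above 1%")
    if m.weekly_drift > 0.05:                # drift exceeding 5% week-over-week
        triggers.append("performance drift above 5% week-over-week")
    if not m.bias_tests_passed:              # any bias test failure
        triggers.append("bias test failure")
    return triggers
```

Under this sketch, a system with a 2% hallucination rate but stable performance and passing bias tests would still be escalated on the hallucination threshold alone.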

Third, they're not waiting for regulation to catch up. The EU AI Act is already in force, with the first prohibited practices taking effect in February 2025. Fines can reach €35 million or 7% of global turnover, whichever is higher. In the US, the Securities and Exchange Commission has already started charging companies with "AI washing" for making false claims about their AI capabilities.

But successful organisations aren't treating compliance as a checkbox exercise. They're building regulatory readiness into their governance frameworks because they know it's cheaper to get it right from the start than to retrofit compliance later.

The Workforce Challenge Nobody Wants to Discuss

Let's address the elephant in every boardroom: jobs. The World Economic Forum's latest projections suggest that by 2030, roughly 92 million roles will disappear whilst 170 million new ones emerge. That's a net gain of 78 million jobs, but it masks enormous disruption in between.

Half of organisations already identify skills gaps as their biggest barrier to AI adoption. Nearly 40% of current job skills are expected to change fundamentally by 2030. This isn't a future problem. It's happening right now.

The organisations handling this well are reframing AI as augmenting human capabilities rather than replacing people wholesale. They're investing heavily in reskilling programmes. They're creating new hybrid roles that combine human creativity with AI efficiency.

This isn't just about being socially responsible, though that matters. It's about competitive advantage. The companies that handle workforce transformation well will attract better talent, maintain higher productivity, and avoid the disruption costs that come with poor change management.

Your Quarterly AI Oversight Framework

Effective governance needs systematic measurement. Here's what you should be asking for every quarter:

Value metrics that matter: P&L impact by use case, return on investment compared to original business cases, time from pilot to production deployment, and honest assessment of competitive positioning.

Risk indicators you can act on: Model performance degradation, incidents where AI systems have produced false or biased outputs, progress on regulatory compliance requirements, and any security incidents involving AI systems.

Strategic readiness measures: Your ability to attract and retain AI talent, progress on internal capability development, dependency on third-party AI providers, and assessment of threats from AI-native competitors.

This isn't about micromanaging the technology team. It's about having the information you need to make informed strategic decisions.
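The three categories above amount to a standing checklist a board can hold management to. A minimal sketch, assuming the item names from the text (the `missing_items` helper and report shape are hypothetical):

```python
# Quarterly AI oversight checklist, grouped by the three categories
# described in the text; item names are paraphrased from the article.
QUARTERLY_CHECKLIST = {
    "value": [
        "P&L impact by use case",
        "ROI vs original business case",
        "pilot-to-production time",
        "competitive positioning",
    ],
    "risk": [
        "model performance degradation",
        "false or biased output incidents",
        "regulatory compliance progress",
        "AI security incidents",
    ],
    "readiness": [
        "AI talent attraction and retention",
        "internal capability development",
        "third-party AI dependency",
        "AI-native competitor threats",
    ],
}

def missing_items(report: dict) -> list[tuple[str, str]]:
    """List (category, item) pairs absent from a submitted quarterly report."""
    return [
        (cat, item)
        for cat, items in QUARTERLY_CHECKLIST.items()
        for item in items
        if item not in report.get(cat, [])
    ]
```

An empty report would flag all twelve items; a complete one returns an empty list, giving the board a quick test of whether management's quarterly pack covers the full framework.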

The Next Twelve Months: A Practical Roadmap

In the next 90 days, demand a comprehensive review of your organisation's AI activities. Not the glossy presentation about potential applications. A hard-nosed assessment of what's actually working, what's consuming resources without delivering value, and where the gaps are in governance and risk management.

Over the next six months, establish proper governance structures. This probably means full board oversight initially, with the option to delegate to specialised committees as your capabilities mature. It definitely means developing risk management frameworks based on recognised standards like ISO 42001 and NIST guidance. And it means starting serious workforce planning based on realistic projections about how AI will change your industry.

Within twelve months, you need systematic value tracking, focused investment in the AI applications that actually move the needle on your core business processes, and credible regulatory compliance programmes. Most importantly, you need to be thinking beyond efficiency gains to genuine business model transformation.

The Choice That Defines Your Organisation's Future

We're watching the emergence of a two-tier economy. The 5% of companies already generating real value from AI are becoming tomorrow's market leaders. The other 95% are facing a rapidly closing window to catch up.

This isn't about becoming technology experts or chasing every AI trend that comes along. It's about establishing the governance discipline that separates sustainable competitive advantage from expensive experimentation.

The most effective boards embrace their strategic oversight role whilst resisting the temptation to second-guess implementation details. They establish clear frameworks, demand rigorous metrics, and hold management accountable for results.

The organisations that get AI right aren't just implementing new technology. They're reimagining what their businesses can become in an AI-driven world.

The window for establishing AI leadership is still open, but it won't stay that way for much longer. The governance decisions your board makes over the next year will determine whether your organisation prospers or struggles in an economy increasingly defined by intelligent machines.

The time for lengthy deliberation is over. The time for governance leadership is now.

Tags:

#strategy

#planning

#governance


Join Our Mailing List

The Innovation Experts

Get in Touch
