Beyond the Pilot: What CEOs Need to Own About AI in 2026

AI has been on CEO agendas for years, but there’s been a noticeable shift over the past year. The conversations I have with boards and fellow CEOs in the US and Canada are no longer about whether AI matters, but about how to prove its value and govern it effectively.

Recent research has found that 72% of CEOs now say they are the primary decision makers on AI in their organizations, which is double the share from the prior year. Additionally, roughly half of those CEOs believe their role is at risk if they don’t get AI right. That aligns with what I’m hearing: AI has quietly become a career-defining responsibility for many of us, particularly in highly regulated sectors like insurance, healthcare, and the public sector.

At the same time, companies expect to more than double AI investment this year, moving from less than 1% of revenues in 2025 to around 1.7% in 2026. Boards and shareholders are rightly asking: what are we getting for that spend beyond demos and slide decks?

In this environment, task-level AI experiments aren't enough. If you're leading an insurer, a health system, a payer, or a government agency, you're being asked to treat AI not as a technology project but as a core part of your strategy, operating model, and governance. That's the shift I want to dig into in this article.

From “AI Projects” to CEO-Level Ownership

For most of the past decade, AI and automation lived in the CIO, CTO, or Chief Digital Officer’s portfolio. These leaders ran pilots, hired a few data scientists, deployed chatbots at the edges, and reported “innovation” to the board. The implicit assumption was that AI was primarily a technology initiative.

But that assumption isn’t accurate anymore. When research shows that CEOs themselves are now the main decision makers on AI, and that many of them see their tenure tied to AI outcomes, you can feel the accountability moving to the top of the house.

That’s because the questions that determine AI success are no longer technical. They’re broader business questions like:
• Which customer journeys, claims flows, or revenue cycles are we willing to redesign?
• What risk posture will we accept around data, privacy, and model behavior?
• How do we want work to be divided between people and AI agents over the next three to five years?

These are strategy and operating model choices, and they can’t be answered by a vendor selection committee.

As CEO, you’re being asked to set the direction and define the guardrails for your board, your regulators, your customers, and your people.

Followers, Pragmatists, and Trailblazers: Where Do You Sit?

In their 2026 AI Radar, BCG introduced a useful way to think about CEO approaches to AI: Followers, Pragmatists, and Trailblazers:
• Followers tend to run isolated experiments at the edge of the business but lack a clear enterprise narrative or operating model impact.
• Pragmatists are serious and ROI driven. They pursue pilots with clear value hypotheses, especially in cost and efficiency, but often stay in a use-case-by-use-case mindset.
• Trailblazers treat AI as a lever to redesign workflows and business models end to end. For them, that end-to-end transformation, not local efficiency wins, is the main opportunity. Upskilling is a notable priority, too: Trailblazers devote 60% of their AI budget to workforce development, compared with Pragmatists’ 27%. They also spend over 8 hours per week learning and using AI themselves.

It’s worth reflecting not only on which description feels closest to your organization now, but, more importantly, on which direction you’re moving over the next 12–24 months.

Why Task-Level AI is No Longer Enough

Across insurance carriers, healthcare organizations, and state and local agencies, a common pattern is emerging: teams can point to a half-dozen AI success stories, but when the board asks how AI fits into the enterprise’s long-term strategy, the answer is much harder to articulate.

Typically, it looks like this:
• An AI assistant helping analysts draft reports.
• A chatbot answering routine member or citizen inquiries.
• An AI-powered workflow in accounts payable to read invoices and route exceptions.

These are useful improvements, but they don’t meaningfully change how claims move end-to-end, how a prior authorization is handled from intake to clinical review to notification, or how a permit is processed from application to issuance. They also don’t fundamentally change cost structure, cycle time, or experience at the enterprise level.

The World Economic Forum has argued that as AI agents become “real, on‑demand collaborators,” the hard part is no longer execution but orchestration. In other words, if you want AI to make a true, lasting impact, you can’t just add more AI tasks. Instead, you need to rethink the workflow itself, designing how work, decisions, and accountability flow across humans and agents.

This, more so than any technology shortfalls, is where many AI investments are failing to live up to the hype. Strategy, data foundations, and change management can’t be overlooked.

The CEO Agenda for Agents and End-to-End Workflows

To make 2026 AI investment count, I believe CEOs need to own four specific shifts.

1. Shift from tools to workflows

Instead of asking “Where can we add an AI element?” start asking, “Which workflows should we redesign end-to-end with AI at the center?” The point is to treat AI as an integral part of how work flows across functions, not as a bolt-on widget inside one department.

2. Elevate AI agents to the operating model level

AI agents are like digital team members that handle multi-step tasks under defined rules. McKinsey’s work on agentic AI shows that organizations achieve the greatest gains when these agents are built into workflows rather than layered on top.

For a CEO, the key question needs to be “Where in our operating model should agents take on defined responsibilities, and what governance do we need around them?” That includes decisions about which approvals agents can handle, which data they can access, and how human oversight works in practice.

3. Tie every initiative to measurable outcomes

With AI spend doubling, boards will expect proven, measurable impact from AI initiatives. To meet that demand, every significant AI initiative should be anchored to a small set of metrics the board cares about. The right metrics vary by industry, but typically include:
• Unit cost and productivity.
• Cycle time and backlog reduction.
• Risk and error rates.
• Customer, patient, or citizen experience.

4. Build governance and trust into the design

In the age of agentic AI, trust and risk management need to be built into the architecture through supervision, observability, and clear human-agent collaboration patterns.

In regulated environments, this is non-negotiable. Data privacy, bias, explainability, and regulatory requirements are not afterthoughts: they shape which workflows you choose, how you design your agents, and what guardrails your teams must follow.

Setting the expectation of what “responsible AI” looks like in your context is a CEO’s job. Your teams can operationalize the details, but they shouldn’t be guessing your risk appetite.

Navigating Workforce Anxiety Without Stalling Progress

The other reality we have to own is workforce anxiety. Headlines about AI and layoffs are everywhere; some high-profile companies have cited AI as a factor in job cuts, and employees hear those stories.

At the same time, as I briefly touched on already, leading research continues to point to upskilling and redeployment as essential to capturing AI’s upside. The World Economic Forum has repeatedly called out the need for large-scale reskilling as AI changes the nature of work, and BCG’s data shows that Trailblazer CEOs invest significantly more of their AI budgets in workforce development.

The data points in one direction: the companies sustaining AI momentum are the ones investing in their people, not just their platforms.

In practice, that might mean committing, where feasible, to “upskilling and redeployment first, redundancy last.” It means funding learning programs as an integral part of AI investments. But it also means being transparent about where AI is eliminating work, where it is augmenting roles, and how you plan to support people through the transition.

As CEOs, we can’t promise that every job will be untouched. But we can choose whether AI feels like something done to employees or with them.

The Conversations CEOs Need Next

According to BCG’s research, 90% of CEOs anticipate that AI will completely redefine the benchmarks for industry success by 2028.

That’s not far away. The decisions we make in the next six to twelve months will determine whether we’re leading that transformation or reacting to it.

Now’s the time to have the right conversations:
• Meet with your board to discuss risk and ambition.
• Meet with your leadership team to examine your workflows and operating model.
• Keep an open dialogue with your employees about skills and trust.

But it’s also vital to consult a trusted partner in outcome-driven, agentic automation and AI, one that understands how to redesign work in regulated industries with AI agents in the loop.

For us at Naviant, this is exactly what we do. If you’re wrestling with your next steps, send us a comment or question in the chat below – we’re glad to have that conversation.
