How to Build an Intelligent Automation Center of Excellence: Operating Model, Governance, and Scaling Framework

Most organizations have plenty of automation experiments but fall short on results.

We see it especially clearly in McKinsey’s latest global AI survey, which found that 88% of organizations are using AI somewhere in their business, but only about one-third have scaled it at the enterprise level. And while 39% of organizations attribute “any level” of EBIT impact to their AI use, McKinsey also notes that for most of these respondents, the actual financial impact is currently less than 5%.

This data confirms that while AI use is broadening, most organizations have not yet embedded the technology deeply enough to realize significant enterprise-wide financial benefits.

And although it’s tempting to blame the technology, the core of the issue lies in organizations’ operating models.

An intelligent automation Center of Excellence (CoE) is how leading organizations are beginning to close that gap, turning scattered pilots into a durable, scalable enterprise capability.

Now, let’s get into the details on how you can build your CoE, from the structural decisions and governance requirements to the funding models and risk controls that’ll help you create a solid program. But first, let’s take a step back.

Why You’re Stuck in Pilot Purgatory

If you’ve run successful pilots but can’t seem to scale, you’re not alone. McKinsey describes this as the core paradox of enterprise AI right now: experimentation is nearly universal, but scaled impact is rare.

50% of organizations cite competing priorities as their top barrier to automation adoption, and only about one-third actively work to align technology investments with broader business objectives. That misalignment has the predictable result of scattered bots, “shadow IT” automations, no coherent intake process, and no clear owner when things go sideways.

The symptoms look familiar across industries:

  • Government agencies running scripts and bots built in individual departments, with no statewide standards or oversight
  • Health systems with promising prior authorization or revenue cycle pilots, but no clarity on whether IT, clinical operations, or the business owns scale-up
  • Payers with multiple claims bots and no end-to-end view of automation risk, value, or dependencies
  • Commercial enterprises with RPA projects embedded in IT or finance, disconnected from strategic priorities and missing agreed success metrics

In every case, the failure point is the absence of an intentional operating model for intelligent automation. That’s exactly the gap a CoE is built to close.

The Core Operating Model Decisions

Mandate and Scope: What Does the CoE Actually Own?

A high-functioning IA CoE starts with a clear mandate, and that mandate needs to be broad enough to matter. The CoE shouldn’t define itself around a single tool or platform. It should own shared capabilities across the automation lifecycle, which means defining two things upfront:

  1. Technology Scope: What’s in? Typically this includes RPA, intelligent document processing (IDP), workflow orchestration and enterprise content management (ECM), AI/ML models embedded in workflows, and increasingly, agentic automation.
  2. Lifecycle Scope: The CoE should have a defined role in every stage, including idea intake, process discovery, solution design, development and testing, deployment, monitoring, value realization, and continuous improvement.

According to McKinsey’s research, there’s a consistent pattern among high performers: they treat AI as an operating model shift, not a set of tools, with senior leadership explicitly owning strategy, governance, and risk. So, your CoE mandate should reflect that same framing.

Structure: Centralized vs. Hub-and-Spoke

Once the mandate is clear, the structural question is how to organize delivery and governance. Three models dominate in practice:

  1. Centralized CoE: A single enterprise team owns standards, platforms, and delivery. Strong governance, but can become a bottleneck at scale.
  2. Hub-and-spoke (federated): A central hub sets standards and governs risk; business unit “spokes” co-own delivery. Balances consistency with local agility.
  3. Decentralized: Multiple local teams with light central coordination. Works in mature environments, but requires real governance discipline to avoid chaos.

The right choice depends on context:

For heavily regulated environments like government agencies and healthcare payers, centralized or hub-and-spoke with strong central governance is the norm. This is because regulators expect consistent controls and auditability.

For large health systems and multi-line insurers, hub-and-spoke often works best, giving clinical and line-of-business teams room to innovate while the hub manages platforms, standards, and risk.

Commercial enterprises with strong digital offices sometimes favor more federated models, but even they typically rely on a central group for architecture standards and AI risk governance.

Roles and Decision Rights

Successful CoEs clarify decision rights early. The practical structure spans three layers:

  1. Strategic layer: An executive sponsor (CIO, COO, or CDO) accountable for outcomes; a steering committee that includes business, IT, risk/compliance, and finance; and a value realization lead who owns benefits tracking and communication.
  2. Program layer: A CoE leader running the operating model day-to-day; a portfolio manager balancing quick wins, platform investments, and transformative initiatives; a governance/risk lead responsible for policy and AI risk controls; and a change management lead ensuring solutions actually get embedded in processes and roles.
  3. Delivery layer: Solution architects, process analysts, developers (RPA, IDP, ML specialists), citizen developer support in federated models, and operations support for monitoring and incident management.

The key is that someone owns both the build and the run. A CoE that can deploy but can’t maintain (or can govern but can’t deliver) won’t stay funded for long.

Governance, Intake, and AI Risk Controls

Governance Guardrails

Governance is one of the sharpest dividing lines between organizations that scale AI and automation safely and those that don’t. McKinsey’s research also shows that AI high performers are significantly more likely to invest in clear risk management frameworks, including defined owners for AI-specific risks.

An IA CoE should own or co-own guardrails across at least four domains:

  1. Architecture standards: Approved technologies, integration patterns, security baselines
  2. Process and control standards: How automations handle exceptions, validations, reconciliations, and audit trails
  3. Data and AI ethics: Principles for fairness, transparency, and data minimization when automations use AI models
  4. Compliance and risk management: Alignment with regulatory frameworks in healthcare, insurance, and public sector; coordination with internal audit and risk functions

A Repeatable Intake Process

One of the clearest anti-patterns in intelligent automation is ad-hoc intake: projects initiated based on who shouts loudest rather than on a structured evaluation of value and risk. A repeatable intake process solves this.

It typically includes:

  • Standard idea submission templates capturing problem statement, volumes, data sources, stakeholders, and early value estimates
  • An initial triage by the CoE to filter out non-candidates, duplicates, and ideas misaligned with strategy
  • A quick feasibility and sizing step to estimate complexity, value, and dependencies
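As a rough illustration, the submission template and first-pass triage could be modeled like this. The fields mirror the list above, but the thresholds and function names are hypothetical, not a prescribed standard:

```python
from dataclasses import dataclass

@dataclass
class IntakeSubmission:
    """Standard idea-submission template; fields follow the list above
    but are otherwise illustrative."""
    problem_statement: str
    monthly_volume: int
    data_sources: list
    stakeholders: list
    est_hours_saved_per_month: float

def triage(submission: IntakeSubmission) -> str:
    """First-pass triage to filter out non-candidates before the
    feasibility and sizing step. The volume threshold is hypothetical."""
    if submission.monthly_volume < 100:
        return "reject: volume too low to justify automation"
    if not submission.data_sources:
        return "hold: data sources not yet identified"
    return "advance: proceed to feasibility and sizing"
```

Even a lightweight structure like this gives the CoE something consistent to triage against, instead of evaluating every idea from a blank page.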

This will look different depending on your industry, too. In a health system, the CoE might run quarterly intake windows where clinical operations, revenue cycle, and HR submit ideas, which are then triaged jointly by IT, clinical leaders, and finance. On the other hand, in a state government context, a central CoE might collect submissions from multiple departments and cluster similar requests to design shared capabilities instead of one-off bots.

Portfolio Management

Scaling requires portfolio thinking. A well-run CoE consciously balances three categories: quick wins that build momentum and demonstrate value, capability-building initiatives that create platform leverage, and transformative bets that require board-level sponsorship and a longer time horizon.

A practical scoring model evaluates ideas across three dimensions:

  • Value: Financial impact and citizen/patient/member/customer impact
  • Risk and compliance: Regulatory exposure, model risk, and process risk
  • Feasibility: Data readiness, process standardization, and technical complexity
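To make this concrete, here’s a minimal sketch of how such a scoring model might be implemented. The weights, 1-5 scales, and example ideas are purely illustrative; your steering committee would set its own:

```python
from dataclasses import dataclass

# Illustrative weights for the three dimensions; a real CoE would set
# these with its steering committee and revisit them over time.
WEIGHTS = {"value": 0.4, "risk": 0.3, "feasibility": 0.3}

@dataclass
class AutomationIdea:
    name: str
    value: float        # 1-5: financial and citizen/patient/member/customer impact
    risk: float         # 1-5, where 5 = low regulatory, model, and process risk
    feasibility: float  # 1-5: data readiness, standardization, complexity

    def score(self) -> float:
        """Weighted composite score on the same 1-5 scale."""
        return (WEIGHTS["value"] * self.value
                + WEIGHTS["risk"] * self.risk
                + WEIGHTS["feasibility"] * self.feasibility)

# Hypothetical pipeline entries, ranked highest-scoring first
ideas = [
    AutomationIdea("Claims status bot", value=4, risk=5, feasibility=5),
    AutomationIdea("Prior auth triage", value=5, risk=2, feasibility=3),
]
ranked = sorted(ideas, key=lambda i: i.score(), reverse=True)
```

Transparent scoring like this is also a defense against the pipeline being filled by whoever has the most executive pull.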

AI and Agentic Automation: The New Governance Frontier

As more automations embed AI models or agentic workflows, standard governance quickly stops being sufficient. McKinsey reports that 62% of organizations are at least experimenting with AI agents and 23% are already scaling agentic AI in at least one function. That shift changes the risk profile significantly.

Let’s take a look at some key elements your intelligent automation CoE should formalize for AI-powered automations:

  • Model risk management: Processes for approving AI models, validating them on representative data, monitoring for drift and performance degradation, and periodically re-validating.
  • Human-in-the-loop thresholds: Clear policies for when humans must review or override AI-driven decisions, especially in high-stakes domains like clinical care, benefits eligibility, claim denials, or credit decisions.
  • Incident management and kill-switches: Procedures for detecting anomalous or harmful behavior in automations or agents, quickly pausing them, and remediating impacts.
  • Accountability: Explicit assignment of responsibility between the CoE, business process owners, and risk/compliance functions for every AI-driven automation in production.
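As a sketch of how the human-in-the-loop thresholds above might be operationalized, consider the following. The decision domains, threshold values, and fail-safe behavior are assumptions for illustration, not a standard:

```python
# Hypothetical per-domain confidence thresholds; in practice the
# governance/risk lead would set and periodically review these values.
REVIEW_THRESHOLDS = {
    "claim_denial": 1.01,    # above any possible confidence: always reviewed
    "document_sort": 0.85,
    "address_update": 0.90,
}

def requires_human_review(domain: str, model_confidence: float) -> bool:
    """Return True when an AI-driven decision must be routed to a human.

    Unknown domains fail safe to human review rather than running
    ungoverned.
    """
    threshold = REVIEW_THRESHOLDS.get(domain)
    if threshold is None:
        return True
    return model_confidence < threshold
```

Note that a high-stakes domain like claim denials can be forced into 100% review simply by setting its threshold above the maximum confidence a model can report.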

Still, it’s worth pointing out that vertical nuance matters greatly here:

  • Government CoEs must align with statutory and audit requirements around eligibility and adjudication.
  • Healthcare providers face additional safety and quality oversight when automation touches clinical processes.
  • Payers need rigorous fairness and explainability frameworks for AI-supported utilization management or claims decisions.
  • Commercial organizations must guard against bias and regulatory exposure in pricing, credit, and customer treatment.

Funding and Value Realization

Funding Models

A CoE can’t succeed without funding, full stop. To determine what you might need, here are the most common patterns:

  • Central Enterprise Budget: IT, digital transformation, or modernization funds the CoE to build shared capabilities, especially in the early years when the portfolio is still establishing its value story.
  • Chargeback or Showback: As the CoE matures, costs are allocated to business units based on usage, FTEs saved, or value delivered. This forces prioritization and reinforces business ownership.
  • Hybrid Models: Central funding for platforms and governance, with co-funding from lines of business for initiatives that deliver value to those units.

What works tends to track with the vertical. Government CoEs are often funded through central modernization programs, sometimes with federal or grant support. Health systems typically route funding through capital committees with defined margin or payback thresholds. Payers and commercial firms more commonly use portfolio-based investment models tied to P&L impact.

The CoE Scorecard

Without metrics, CoEs are easy targets when budgets tighten. The best CoEs evolve from periodic, project-level benefit reviews to quarterly scorecards that give executives a balanced view of value, risk, and adoption.

A strong CoE scorecard covers four layers:

  • Operational: Hours returned to the business, throughput increases, error rate reduction, cycle time improvements
  • Financial: Net savings realized, cost avoidance, incremental revenue, margin improvement
  • Experience: Citizen/patient/member/customer satisfaction, employee engagement improvements
  • Risk and compliance: Exceptions, audit findings, regulatory breaches prevented or mitigated, adherence to AI governance policies

Publishing this scorecard quarterly turns the CoE into a transparent, measurable investment, rather than the dreaded black box that’s easy to defund.
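For teams that want to systematize this reporting, the four layers could be captured in a simple structure like the sketch below. The metric names and figures are placeholders; each CoE defines its own set:

```python
from dataclasses import dataclass, field

@dataclass
class QuarterlyScorecard:
    """One quarter of CoE metrics across the four layers described
    above. Metric names inside each dict are illustrative."""
    quarter: str
    operational: dict = field(default_factory=dict)
    financial: dict = field(default_factory=dict)
    experience: dict = field(default_factory=dict)
    risk_compliance: dict = field(default_factory=dict)

    def summary(self) -> str:
        """Flat, executive-readable rollup of every reported metric."""
        lines = [f"CoE Scorecard, {self.quarter}"]
        for layer in ("operational", "financial", "experience", "risk_compliance"):
            for name, value in getattr(self, layer).items():
                lines.append(f"  [{layer}] {name}: {value}")
        return "\n".join(lines)

# Hypothetical quarter for demonstration
scorecard = QuarterlyScorecard(
    "Q3 FY25",
    operational={"hours_returned": 12400, "error_rate_reduction_pct": 31},
    financial={"net_savings_usd": 1850000},
    experience={"member_nps_delta": 6},
    risk_compliance={"open_audit_findings": 2},
)
```

The point isn’t the data structure itself; it’s that every quarter reports the same balanced set of metrics, so executives can compare periods instead of reading one-off success stories.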

Here’s what this looks like in practice, across a few different industries:

State government

A statewide intelligent automation CoE coordinates automation across departments, reduces manual workloads, and improves citizen services, anchored in shared platforms and strong governance.

The typical operating model involves a centralized CoE in a statewide IT or government operations office, with departmental champions driving adoption at the agency level. Funding flows from central modernization or digital transformation budgets. Metrics center on backlog reduction, processing time, error rates, and citizen satisfaction, often tied directly to public commitments or legislative mandates. AI use is subject to heightened governance and transparency requirements.

Healthcare providers

Intelligent automation in health systems often starts in revenue cycle or back-office operations, but increasingly extends into clinical workflows. A realistic operating model uses hub-and-spoke structure, a central CoE in IT or a system-level transformation office, with spokes aligned to major service lines or functions.

The mandate balances financial resilience (revenue cycle), clinician experience (reducing administrative burden), and patient safety. Governance is tight when automations touch clinical processes or patient data. Metrics track denial reduction, days in A/R, staff turnover, patient throughput, and clinician satisfaction.

Healthcare payers

Payers sit on significant automation potential across claims, provider data management, utilization management, and member services. The operating model typically features a centralized or hub-and-spoke structure with strong data governance and analytics support.

The CoE mandate centers on improving claims accuracy and speed, reducing administrative costs, and enhancing member and provider experience, all within tight regulatory and contractual constraints. AI-enhanced decisioning in fraud/waste/abuse or utilization management demands additional fairness, explainability, and appeals governance. Metrics cover first-pass claims resolution, cycle time, overpayment recovery, and compliance indicators.

Commercial enterprises

In manufacturing, financial services, and other commercial environments, CoEs often focus on order-to-cash, FP&A, supply chain, and customer service. Many organizations are evolving toward hub-and-spoke models with central standards and platforms and business-unit automation pods.

The operating model features a CoE anchored in a digital office, with embedded automation teams in large business units. Central architecture, security, and AI risk guidelines provide the guardrails, while BUs operate with flexibility within them. Metrics focus on working capital, order accuracy, forecast quality, customer NPS, and productivity per FTE.

CoE Failure Patterns Worth Knowing About

Even well-resourced CoEs run into predictable problems. A few worth watching for:

  • The tool-first CoE defines itself around a platform (“we’re the RPA team”) rather than business outcomes. Fix: anchor the mandate in enterprise priorities and scope across multiple technologies.
  • Shadow IT proliferation happens when departments spin up scripts and bots outside the CoE, creating operational and compliance risk. Fix: build a unified operating model with clear intake and legitimate paths for citizen developers.
  • Political prioritization fills the pipeline with whoever has the most executive pull, not the most valuable work. Fix: appoint a portfolio manager, use transparent scoring, and maintain a balanced pipeline.
  • Short-term-only funding expects the CoE to pay for itself immediately through headcount reductions, leaving no room for platform or governance investment. Fix: adopt hybrid funding tied to a multi-year roadmap.
  • AI projects treated as science experiments bypass the CoE and create parallel, ungoverned platforms in the name of innovation. Fix: make AI and agentic automation explicitly in-scope, with shared platforms and risk controls.
  • Over-centralization turns the CoE into a bottleneck, slowing delivery and encouraging workarounds. Fix: evolve toward hub-and-spoke as maturity grows, empowering spokes within clear guardrails.

Your CoE Maturity Roadmap

IA CoE development isn’t a flip-the-switch moment. It’s a multi-year journey with distinct stages:

  • Ad-hoc: Isolated pilots, no formal CoE, inconsistent tools, no shared standards or metrics. Risk exposure is high and the ROI story is thin.
  • Emerging CoE: A small central team, basic governance, early intake and prioritization, and a focus on quick wins and a few shared platforms. This is where you define the mandate, choose a structural model, and establish metrics.
  • Scaled program: A mature CoE with enterprise platforms, portfolio management, and a balanced scorecard. AI and agentic use cases are starting to scale in production. AI risk governance deepens here.
  • Optimized/agentic: Automation and AI are embedded in end-to-end process transformation. Platform-based delivery, self-service patterns for business units, and agentic workflows orchestrating multi-step tasks.

Each stage requires changes not just in technology but also in governance, portfolio management, and culture. The operating model should explicitly describe how those shifts will happen over time.

If your organization recognizes itself in the “experimenting everywhere, scaling nowhere” picture, the next step is designing the operating model that lets your automation program grow and scale. If you have any questions regarding your agentic AI journey, your CoE program, or anything else, feel free to leave a comment or question in the chat below. We’d love to talk with you.

Kara Martin

As a Technology Content Specialist at Naviant since 2019, Kara Martin helps organizations make sense of emerging technologies and apply them to real-world business challenges. Her work focuses on intelligent automation, AI, and process improvement, translating complex research, trends, and use cases into practical insights leaders can actually use. Through her weekly articles, Kara bridges the gap between hyped-up tech jargon and measurable business outcomes, showing how technology delivers value when it’s aligned with people, process, and strategy.
