
BCG Says 70% of AI Transformation Is People and Process. Here's What That Actually Looks Like.

TL;DR

  • BCG's 10-20-70 framework is right: 70% of AI transformation is people and process, not technology. But most coverage of the report focuses on the 10% (algorithms) and ignores the 70%.
  • "Human oversight" isn't a single job. It's three distinct roles — review-and-approve, exception handling, and quality calibration — each requiring different tooling, staffing, and metrics.
  • The 30–40% handling time reduction is real, but that time shifts from "processing documents" to "reviewing AI decisions." If the review job isn't well-designed, you've just created a different bottleneck.
  • For mid-market carriers, transformation starts with embedded document AI, not enterprise-wide value chain redesign.

[Chart: Where AI Transformation Value Actually Lives. Source: BCG 10-20-70 Model, March 2026. Algorithms: 10% (where most coverage focuses). Tech & Data: 20%. People & Process: 70% (where transformation succeeds or fails). "Human and organizational factors account for 70% of scaling challenges among insurers" — BCG]

BCG released their Executive Perspectives report on AI-first P&C insurers in March, and the insurance world noticed. The headline findings are hard to argue with:

  • AI spending as a share of revenue will triple in 2026
  • Only 38% of P&C insurers are realizing AI value at scale — a gap that mirrors the broader disconnect between AI ambition and action
  • The potential impact is enormous: $35–60 billion in operating cost reduction for the US market alone

The report is genuinely excellent — and it validates what practitioners have been saying for years. If you're a CEO or board member trying to understand why AI matters to P&C economics — not just efficiency — it's required reading. (It also explains why so many AI pilots never make it past the proof-of-concept stage.)

But there's a gap between the consulting view and the operational reality. And it sits right in the middle of BCG's most important insight.

The 70% That Nobody's Talking About

BCG's 10-20-70 model is the most useful framework in the report. It says that algorithms account for just 10% of a successful AI transformation. Technology and data are 20%. The remaining 70% is people and process — roles, workflows, change management, governance.

Every piece of coverage I've seen on this report focuses on the 10% (the agentic AI vision) or the 20% (data ontologies and platform architecture). The 70% gets a nod — "yes, the human element matters" — and then the conversation moves on.

This is exactly backwards.

The 70% is where AI transformations actually break down. Not because leaders don't know it matters, but because nobody has a clear picture of what that work actually looks like at the operator level.

"Human Oversight" Isn't One Job — It's Three

BCG's report mentions "human oversight" or "human judgment" dozens of times. Their workflow diagrams show a clean pattern: AI agents execute by default, humans intervene by exception.

On a slide, that's elegant. On a Monday morning in an underwriting operation — where 22% of senior underwriters are retiring by 2026 and the remaining team is already stretched — it's incomplete.

What we see with carriers is that "human oversight" actually breaks down into three distinct roles — and each one needs different tooling, different staffing, and different metrics:

[Chart: What BCG Calls "Human Oversight" Is Actually Three Different Jobs]

  • Review & Approve — validating AI outputs before they move downstream. Volume: high (most transactions). Key metric: throughput at quality threshold. Skill level: trained reviewer.
  • Exception Handling — making the calls AI can't (or shouldn't) make. Volume: low (edge cases only). Key metric: decision quality. Skill level: senior domain expert.
  • Quality Calibration — teaching the system to get better over time. Volume: ongoing (sampled). Key metric: model improvement rate. Skill level: subject matter expert.

Each mode requires different tooling, different staffing, and different metrics.

1. Review-and-Approve

The human validates AI outputs before they move downstream. Did the system extract the right data from the submission? Did it classify the risk correctly? Is the automated document processing output clean enough to act on?

This is high-volume, structured work. The key metric isn't accuracy — it's throughput at a quality threshold. You need an interface that lets reviewers move fast when the AI is right (which should be most of the time) and flag issues quickly when it's not.

2. Exception Handling

The human makes the calls AI can't — or shouldn't — make. Ambiguous submissions. Conflicting signals. Cases where the policy language doesn't map cleanly to the data.

This is lower-volume, higher-judgment work. It requires domain expertise and context that the AI doesn't have. The key metric here is decision quality, not speed. And the humans doing this work need to understand what the AI considered and why it escalated — which means explainability isn't optional.

3. Quality Calibration

The human teaches the system to get better over time. Reviewing edge cases, confirming or correcting AI decisions, building the feedback loops that make next month's model better than this month's.

This is the role that almost everyone forgets. It's not in BCG's workflow diagrams. It doesn't show up in the "exception-driven escalation" model. But it's the difference between an AI system that compounds value and one that slowly drifts. (If you're wondering whether your operation has this right, here are five signs to check.)
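The three modes above can be made concrete as a routing policy. This is a minimal illustrative sketch, not anything from BCG's report or a specific product: the function names and confidence thresholds are assumptions, and real values would be tuned per document type and carrier. The key idea it shows is that calibration isn't a leftover — even high-confidence, auto-approved items get sampled so the feedback loop exists.

```python
import random

# Illustrative thresholds — real values would be tuned per document type.
APPROVE_REVIEW_THRESHOLD = 0.90   # below this, a trained reviewer validates the output
EXCEPTION_THRESHOLD = 0.60        # below this, escalate to a senior domain expert
CALIBRATION_SAMPLE_RATE = 0.02    # fraction of auto-approved items sampled for calibration

def route(confidence: float, rng: random.Random) -> str:
    """Route an AI-processed submission to one of the three oversight modes."""
    if confidence < EXCEPTION_THRESHOLD:
        return "exception_handling"    # ambiguous edge case: needs senior judgment
    if confidence < APPROVE_REVIEW_THRESHOLD:
        return "review_and_approve"    # structured validation before it moves downstream
    # High confidence flows through, but a small sample still feeds calibration —
    # this is the loop that keeps next month's model better than this month's.
    if rng.random() < CALIBRATION_SAMPLE_RATE:
        return "quality_calibration"
    return "auto_approved"
```

Note that dropping the calibration branch doesn't break anything on day one, which is exactly why it's the piece that gets forgotten; the cost only shows up later as drift.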

The Handling Time Shift

BCG projects a 30–40% reduction in underwriter active handling time. That's a real number — we've seen it.

But here's what matters: that time doesn't disappear. It shifts.

[Chart: Where Underwriter Time Goes, Before and After AI. Before: 45 minutes per submission (processing docs, data entry, risk eval, decision). After: 15 minutes per submission (review AI output, risk eval, decision; the AI handles the processing — and this review job must be well-designed). The time doesn't disappear; it shifts from processing to reviewing.]

The underwriter goes from spending 45 minutes processing a submission — pulling documents, entering data, cross-referencing — to spending 15 minutes reviewing what the AI already processed.

That's a massive improvement. But only if the 15-minute review job is well-designed. If your HITL layer drops the underwriter into a clunky interface with no context on what the AI did or why, you haven't saved time. You've just created a different kind of frustration.

What Good Review Looks Like

The teams that get this right design the human job around the AI output. The reviewer sees what was extracted, what the confidence level was, and where the system flagged uncertainty. They approve with one click when it's right and drill into specifics when it's not.
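That design principle — show the reviewer what was extracted, with what confidence, and where the system flagged uncertainty — can be sketched as a data structure. This is a hypothetical illustration, not any vendor's actual schema; the field names and the 0.9 confidence cutoff are assumptions chosen for readability.

```python
from dataclasses import dataclass, field

@dataclass
class ExtractedField:
    name: str
    value: str
    confidence: float
    flagged: bool = False  # the system explicitly flagged uncertainty here

@dataclass
class ReviewItem:
    submission_id: str
    fields: list[ExtractedField] = field(default_factory=list)

    def needs_attention(self) -> list[ExtractedField]:
        """Fields the reviewer should drill into; everything else is one-click approval."""
        return [f for f in self.fields if f.flagged or f.confidence < 0.9]
```

The point of the structure is that the interface can surface only `needs_attention()` by default: the reviewer moves fast when the AI is right and goes deep only where it wasn't sure.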

The teams that get it wrong build the AI and bolt on the human layer as an afterthought. Then they wonder why adoption stalls and the trust gap never closes.

The Mid-Market Reality

There's one more thing the BCG report doesn't address: scale assumptions.

Their vision assumes enterprise-scale data ontologies, custom AI platforms, dedicated agent operators, and Chief AI Officers. That's the right playbook for a top-25 carrier with a $500M+ technology budget.

But the majority of P&C insurers aren't building proprietary underwriting decision engines. They're trying to get intelligent document processing working reliably inside the workflow their underwriters already use.

For these organizations, the path to AI-first doesn't start with reimagining the value chain end-to-end. It starts with getting the document layer right: embedded in the tools people already work in, not bolted on as another system to learn.

BCG's build-versus-buy framework actually acknowledges this: buy where differentiation is limited, build where it drives competitive advantage. For most carriers, document processing is a buy. Risk selection and pricing are where you build.

Where BCG Stops and the Real Work Starts

The BCG report is a wake-up call for insurance leadership. The strategic case is airtight: structural pressures are real, the window is narrowing, and incremental AI won't cut it.

But for the operators, workflow designers, and technology leaders who have to make this real — the work starts where BCG's diagrams end. It starts with the question their report never asks:

What is the human actually doing at their desk, and is it designed to work?

That's not a strategy question. It's an operations question. And it's the one that determines whether the 70% delivers or doesn't.

Key Takeaways

  1. BCG's 10-20-70 framework is right — 70% of AI transformation is people and process, not technology. But that 70% needs to be designed, not assumed.
  2. "Human oversight" is three distinct jobs: review-and-approve, exception handling, and quality calibration. Each needs different tooling, staffing, and metrics.
  3. AI doesn't eliminate handling time — it transforms it. The 30–40% reduction is real, but only if the new review job is well-designed.
  4. For mid-market carriers, start with embedded document AI that works inside existing workflows — not enterprise-wide value chain redesign.
  5. The gap between strategy and execution lives in the operational details that BCG's diagrams don't show. Ask: what is the human actually doing?

Want to see what well-designed human oversight looks like in practice? See it in action →
