BCG Says 70% of AI Transformation Is People and Process. Here's What That Actually Looks Like.
TL;DR
- BCG's 10-20-70 framework is right: 70% of AI transformation is people and process, not technology. But most coverage of the report focuses on the 10% (algorithms) and ignores the 70%.
- "Human oversight" isn't a single job. It's three distinct roles — review-and-approve, exception handling, and quality calibration — each requiring different tooling, staffing, and metrics.
- The 30–40% handling time reduction is real, but that time shifts from "processing documents" to "reviewing AI decisions." If the review job isn't well-designed, you've just created a different bottleneck.
- For mid-market carriers, transformation starts with embedded document AI, not enterprise-wide value chain redesign.
BCG released their Executive Perspectives report on AI-first P&C insurers in March, and the insurance world noticed. The headline findings are hard to argue with:
- AI spending as a share of revenue will triple in 2026
- Only 38% of P&C insurers are realizing AI value at scale — a gap that mirrors the broader disconnect between AI ambition and action
- The potential impact is enormous: $35–60 billion in operating cost reduction for the US market alone
The report is genuinely excellent — and it validates what practitioners have been saying for years. If you're a CEO or board member trying to understand why AI matters to P&C economics — not just efficiency — it's required reading. (It also explains why so many AI pilots never make it past the proof-of-concept stage.)
But there's a gap between the consulting view and the operational reality. And it sits right in the middle of BCG's most important insight.
The 70% That Nobody's Talking About
BCG's 10-20-70 model is the most useful framework in the report. It says that algorithms account for just 10% of a successful AI transformation. Technology and data are 20%. The remaining 70% is people and process — roles, workflows, change management, governance.
Every piece of coverage I've seen on this report focuses on the 10% (the agentic AI vision) or the 20% (data ontologies and platform architecture). The 70% gets a nod — "yes, the human element matters" — and then the conversation moves on.
This is exactly backwards.
The 70% is where AI transformations actually break down. Not because leaders don't know it matters, but because nobody has a clear picture of what that work actually looks like at the operator level.
"Human Oversight" Isn't One Job — It's Three
BCG's report mentions "human oversight" or "human judgment" dozens of times. Their workflow diagrams show a clean pattern: AI agents execute by default, humans intervene by exception.
On a slide, that's elegant. On a Monday morning in an underwriting operation — where 22% of senior underwriters are retiring by 2026 and the remaining team is already stretched — it's incomplete.
What we see with carriers is that "human oversight" actually breaks down into three distinct roles — and each one needs different tooling, different staffing, and different metrics:
1. Review-and-Approve
The human validates AI outputs before they move downstream. Did the system extract the right data from the submission? Did it classify the risk correctly? Is the automated document processing output clean enough to act on?
This is high-volume, structured work. The key metric isn't accuracy — it's throughput at a quality threshold. You need an interface that lets reviewers move fast when the AI is right (which should be most of the time) and flag issues quickly when it's not.
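To make "throughput at a quality threshold" concrete, here is a minimal sketch of how that metric might be computed from review-session logs. Everything here is illustrative — the class, the field names, and the 98% quality floor are assumptions, not anything from BCG's report:

```python
from dataclasses import dataclass

@dataclass
class ReviewSession:
    docs_reviewed: int
    minutes: float
    errors_found_downstream: int  # defects that slipped past this reviewer

def throughput_at_quality(sessions, min_quality=0.98):
    """Documents reviewed per hour, counting only sessions that stayed
    above a quality floor. Raw speed with a high downstream error rate
    doesn't count as throughput."""
    qualifying = [
        s for s in sessions
        if s.docs_reviewed
        and 1 - s.errors_found_downstream / s.docs_reviewed >= min_quality
    ]
    total_docs = sum(s.docs_reviewed for s in qualifying)
    total_hours = sum(s.minutes for s in qualifying) / 60
    return total_docs / total_hours if total_hours else 0.0
```

The point of the metric: a reviewer who rubber-stamps 200 documents an hour with a 10% miss rate scores zero, not 200.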
2. Exception Handling
The human makes the calls AI can't — or shouldn't — make. Ambiguous submissions. Conflicting signals. Cases where the policy language doesn't map cleanly to the data.
This is lower-volume, higher-judgment work. It requires domain expertise and context that the AI doesn't have. The key metric here is decision quality, not speed. And the humans doing this work need to understand what the AI considered and why it escalated — which means explainability isn't optional.
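One way to picture that explainability requirement is as the escalation payload itself. Below is a sketch of what an exception handler might need to see before making the call — the structure and field names are hypothetical, not a vendor schema:

```python
from dataclasses import dataclass

@dataclass
class Escalation:
    """What a human exception handler sees when the AI punts."""
    submission_id: str
    reason: str              # why the system escalated, in plain language
    signals: dict            # the conflicting or ambiguous inputs it weighed
    candidate_actions: list  # the options it considered but declined to take
```

If the escalation arrives as just a submission ID and a "needs review" status, the handler re-does the AI's work from scratch — which is exactly the time the system was supposed to save.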
3. Quality Calibration
The human teaches the system to get better over time. Reviewing edge cases, confirming or correcting AI decisions, building the feedback loops that make next month's model better than this month's.
This is the role that almost everyone forgets. It's not in BCG's workflow diagrams. It doesn't show up in the "exception-driven escalation" model. But it's the difference between an AI system that compounds value and one that slowly drifts. (If you're wondering whether your operation has this right, here are five signs to check.)
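The three roles above can be wired together with confidence-based routing plus random sampling for calibration. A minimal sketch — the thresholds and names are assumptions for illustration, not recommendations:

```python
import random
from enum import Enum

class Route(Enum):
    AUTO_APPROVE = "auto_approve"      # straight through, no human touch
    REVIEW = "review_and_approve"      # fast, structured human validation
    EXCEPTION = "exception_handling"   # judgment call by a senior underwriter

def route_decision(confidence: float, escalated: bool,
                   review_floor: float = 0.85,
                   auto_floor: float = 0.97) -> Route:
    """Map one AI decision to one of the three oversight roles.
    Real threshold values would come from calibration data, not defaults."""
    if escalated or confidence < review_floor:
        return Route.EXCEPTION
    if confidence < auto_floor:
        return Route.REVIEW
    return Route.AUTO_APPROVE

def sample_for_calibration(decisions, rate=0.05):
    """Quality calibration: hold out a random slice of *all* traffic —
    including auto-approved items — for expert re-review. This is the
    feedback loop that catches drift the exception queue never sees."""
    return [d for d in decisions if random.random() < rate]
```

Note that calibration samples from every queue, auto-approvals included. An exception-only escalation model never re-checks the decisions the system was most confident about — which is precisely where silent drift hides.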
The Handling Time Shift
BCG projects a 30–40% reduction in underwriter active handling time. That's a real number — we've seen it.
But here's what matters: that time doesn't disappear. It shifts.
The underwriter goes from spending 45 minutes processing a submission — pulling documents, entering data, cross-referencing — to spending 15 minutes reviewing what the AI already processed.
That's a massive improvement. But only if the 15-minute review job is well-designed. If your human-in-the-loop (HITL) layer drops the underwriter into a clunky interface with no context on what the AI did or why, you haven't saved time. You've just created a different kind of frustration.
The teams that get this right design the human job around the AI output. The reviewer sees what was extracted, what the confidence level was, and where the system flagged uncertainty. They approve with one click when it's right and drill into specifics when it's not.
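That reviewer experience can be sketched as a data structure: per-field values, per-field confidence, and flags, with one-click approval available only when nothing is flagged. All names here are illustrative:

```python
from dataclasses import dataclass

@dataclass
class ExtractedField:
    name: str
    value: str
    confidence: float
    flagged: bool = False  # the system marked this field as uncertain

@dataclass
class ReviewItem:
    submission_id: str
    fields: list

    @property
    def one_click_eligible(self) -> bool:
        # One-click approve only when the system flagged nothing
        return not any(f.flagged for f in self.fields)

    def flagged_fields(self):
        # The reviewer drills straight into these, instead of
        # rereading the whole submission to find the problem
        return [f for f in self.fields if f.flagged]
```

The design choice this encodes: the interface's default path is the fast path, and the reviewer's attention is spent only where the system already admitted uncertainty.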
The teams that get it wrong build the AI and bolt on the human layer as an afterthought. Then they wonder why adoption stalls and the trust gap never closes.
The Mid-Market Reality
There's one more thing the BCG report doesn't address: scale assumptions.
Their vision assumes enterprise-scale data ontologies, custom AI platforms, dedicated agent operators, and Chief AI Officers. That's the right playbook for a top-25 carrier with a $500M+ technology budget.
But the majority of P&C insurers aren't building proprietary underwriting decision engines. They're trying to get intelligent document processing working reliably inside the workflow their underwriters already use.
For these organizations, the path to AI-first doesn't start with reimagining the value chain end-to-end. It starts with getting the document layer right — embedded in the tools people already work in, not bolted on as another system to learn.
BCG's build-versus-buy framework actually acknowledges this: buy where differentiation is limited, build where it drives competitive advantage. For most carriers, document processing is a buy. Risk selection and pricing are where you build.
Where BCG Stops and the Real Work Starts
The BCG report is a wake-up call for insurance leadership. The strategic case is airtight: structural pressures are real, the window is narrowing, and incremental AI won't cut it.
But for the operators, workflow designers, and technology leaders who have to make this real — the work starts where BCG's diagrams end. It starts with the question their report never asks:
What is the human actually doing at their desk, and is it designed to work?
That's not a strategy question. It's an operations question. And it's the one that determines whether the 70% delivers or doesn't.
Want to see what well-designed human oversight looks like in practice? See it in action →