
On March 10, 2026, Amazon used the Enterprise Connect stage in Las Vegas to launch four AI capabilities for Amazon Connect: Predictive Insights, Assistant for Managers, Testing and Simulation, and Conversational Analytics for Email. Individually, each fills a known gap. Collectively, they amount to something bigger: Amazon is arguing that most contact centers are measuring the wrong things, managing AI and humans through different lenses, and deploying automation without adequate pre-launch testing. That argument has direct implications for any CX leader currently evaluating platform strategy, which is why InflectionCX, the Unified CX company, is flagging it as a critical signal. Understanding what Unified CX actually means is essential context for parsing what Amazon is really selling here.
Pasquale DeMaio, VP of Amazon Connect, put it bluntly at Enterprise Connect, as reported by BCStrategies: companies should focus on engagement rather than containment. Amazon's own marketing materials framed the position even more sharply, stating that most companies still measure AI success by how many calls it deflects. The Predictive Insights feature, currently in Preview, is designed around a different premise. It pulls signals from browsing behavior, previous chat transcripts, and prior call history into a unified data layer. That data feeds real-time personalization at the point of contact. The goal is to shape an interaction before the customer even initiates it.
This is not a minor product update. It is a vendor telling you that the KPI your board has tracked for three years is the wrong one.
The Metrics Shift in Context
Amazon is not alone in making this case. Zoom made a parallel argument at Enterprise Connect 2026 with its "resolution economy" framing, as CX Today reported on March 10, challenging the same deflection-led metrics that have dominated contact center measurement since the first IVR went live. But there is an important difference in how each vendor gets there. Zoom's pitch is largely conceptual, a redefinition of what success should mean. Amazon's pitch is architectural: a specific data layer feeds a specific personalization engine, which produces a specific outcome at the point of interaction.
The broader industry is moving in this direction. As UC Today noted in its coverage of Enterprise Connect, the conversation around agentic AI has matured from capability demonstrations to questions about governance, trust, and control. Futurum Group's analysis of Enterprise Connect 2026 observed that conversations at the event increasingly questioned traditional metrics such as call deflection and average handle time, emphasizing outcome-based measures instead.
For buyers evaluating CCaaS platforms, the implication is clear. The metrics your vendor optimizes around will shape what your platform actually does. If your vendor still treats deflection as the north star, every AI investment will be engineered to push customers away from live agents. If your vendor optimizes for resolution quality and relationship continuity, the AI layer behaves differently at a foundational level.
The Governance Problem Nobody Mentioned
What Amazon did not address on March 10 is who governs this unified data layer, or how. Predictive Insights pulls behavioral signals from across the customer journey: browsing data, chat transcripts, call history, and purchase patterns. That is a significant concentration of customer intelligence inside a single vendor's infrastructure.
Three of the four announced features remain in Preview. That means production-readiness, SLA commitments, and data handling specifics are still undefined for most of what Amazon showed. Buyers who move on this now are making architectural bets on capabilities that do not yet carry enterprise-grade guarantees.
The Assistant for Managers feature introduces a natural-language interface that queries both AI and human-agent performance in a single pane. That is operationally useful. It is also an abstraction layer between supervisors and raw data. When a manager asks the system which queues are at risk of missing SLAs, the system returns an answer and a recommended action. The question CX leaders need to ask: what happens when the recommendation is wrong, and who in your org can verify the underlying data independently?
The Testing and Simulation capability, now Generally Available, is the most operationally mature of the four releases. It allows teams to define customer profiles, expected responses, and queue conditions, then run automated test flows before go-live. This addresses a real gap. But simulation quality depends entirely on how accurately you can model your actual customer population. If your test personas do not reflect your real customer base, your simulation gives you false confidence. Understanding how to structure a proper implementation matters more than the tool itself.
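To make the persona-quality point concrete, here is a minimal sketch of what a pre-launch simulation harness looks like in principle. Everything below is hypothetical and illustrative: the `TestPersona` and `TestScenario` structures, the `toy_bot` responder, and the pass criteria are assumptions for the sketch, not Amazon Connect's actual API or data model.

```python
from dataclasses import dataclass, field

# Hypothetical structures for a pre-launch simulation harness.
# None of this reflects Amazon Connect's real API.

@dataclass
class TestPersona:
    name: str
    intent: str                         # what the simulated customer wants
    utterances: list = field(default_factory=list)

@dataclass
class TestScenario:
    persona: TestPersona
    expected_resolution: str            # e.g. "refund_issued"
    queue_load: int                     # concurrent contacts waiting

def run_scenario(scenario, bot_responder):
    """Drive one simulated contact and compare outcome to expectation."""
    outcome = bot_responder(scenario.persona.intent, scenario.queue_load)
    return {
        "persona": scenario.persona.name,
        "expected": scenario.expected_resolution,
        "actual": outcome,
        "passed": outcome == scenario.expected_resolution,
    }

# A trivial stand-in for the AI flow under test: it resolves refunds
# directly unless the queue is overloaded, in which case it escalates.
def toy_bot(intent, queue_load):
    if intent == "refund" and queue_load < 50:
        return "refund_issued"
    return "escalated_to_agent"

persona = TestPersona("late-delivery caller", "refund", ["Where is my order?"])
results = [
    run_scenario(TestScenario(persona, "refund_issued", queue_load=10), toy_bot),
    run_scenario(TestScenario(persona, "escalated_to_agent", queue_load=80), toy_bot),
]
print(sum(r["passed"] for r in results), "of", len(results), "scenarios passed")
```

The harness is only as honest as the personas and queue conditions fed into it, which is exactly the false-confidence risk described above: a test suite built on idealized profiles will pass cleanly and still miss production failures.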
Amazon's all-you-can-eat AI pricing model, which ties cost to channel usage rather than AI consumption, removes one barrier to adoption. It also creates a different risk. When AI usage carries no marginal cost, there is no natural friction preventing over-automation. The pricing model incentivizes maximum AI deployment. Without strong internal governance frameworks, that incentive structure can push organizations toward automating interactions that should involve a human, as outlined in InflectionCX's analysis of vendor dependency risk in CCaaS environments.
Buyer Guidance
If you are evaluating Amazon Connect or any platform making similar claims, here is what to pressure-test before your next vendor conversation.
Ask your vendor to define, in writing, how they measure AI success. If the answer includes deflection rate as a primary KPI, you are looking at a platform optimized for cost avoidance rather than customer outcomes. That distinction will compound over every quarter you operate on it.
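The gap between the two KPIs is easy to demonstrate on paper. The sketch below uses an invented five-interaction dataset (the field names and the seven-day recontact proxy for "actually resolved" are assumptions for illustration, not any vendor's schema) to show how the same data can report a healthy deflection rate while the AI's true resolution rate tells a different story.

```python
# Illustrative only: records and field semantics are hypothetical.
# Each tuple: (handled_by, resolved, customer_recontacted_within_7d)
interactions = [
    ("ai",    True,  False),
    ("ai",    False, True),   # counted as "deflected", but not resolved
    ("ai",    False, True),
    ("agent", True,  False),
    ("agent", True,  False),
]

# Deflection: share of contacts the AI kept away from live agents.
deflection_rate = sum(1 for h, _, _ in interactions if h == "ai") / len(interactions)

# AI resolution: share of AI-handled contacts genuinely resolved
# (resolved flag set, and the customer did not come back within a week).
ai_contacts = [(r, rc) for h, r, rc in interactions if h == "ai"]
ai_resolution = sum(1 for r, rc in ai_contacts if r and not rc) / len(ai_contacts)

print(f"deflection: {deflection_rate:.0%}, AI resolution: {ai_resolution:.0%}")
```

On this toy data, a deflection-first dashboard reports 60% "success" while only one of the three AI-handled contacts was actually resolved, which is the compounding distinction the paragraph above warns about.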
Demand a data governance map for any feature that aggregates cross-channel customer data. Predictive Insights is powerful in concept. But power without transparency creates liability. Your compliance, legal, and IT security teams need to understand what data flows where, who controls it, and what happens to it when you leave the platform. This is especially critical for organizations operating under GDPR, state-level privacy laws, or industry-specific data residency requirements.
Run your own simulation before trusting the vendor's. Testing and Simulation is a welcome addition. But the value of any simulation is bounded by the quality of the inputs. Build test scenarios from your actual customer data, not idealized personas. Involve frontline agents in scenario design. They know where the real failure points are.
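One practical way to ground scenarios in real data is to seed the test plan from the intents that actually fail in production, weighted by how often they fail. The sketch below assumes a hypothetical `contact_log` of (intent, outcome) pairs pulled from interaction history; the intent labels and outcome values are invented for illustration.

```python
from collections import Counter

# Hypothetical contact history: (intent, outcome) pairs.
contact_log = [
    ("billing_dispute", "escalated"), ("billing_dispute", "escalated"),
    ("password_reset", "resolved"),   ("billing_dispute", "abandoned"),
    ("order_status", "resolved"),     ("order_status", "abandoned"),
]

# Count failures per intent, then build a test plan ordered by failure
# frequency, so simulation effort goes where production actually breaks.
failures = Counter(intent for intent, outcome in contact_log if outcome != "resolved")
test_plan = [{"intent": intent, "weight": count}
             for intent, count in failures.most_common()]
print(test_plan)
```

Ranking by real failure counts puts billing disputes at the top of this toy plan, which is the opposite of what an idealized-persona library tends to produce: those usually over-represent the easy, well-understood intents.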
So What
For the CIO or CTO: Audit your current KPI framework against the metrics your CCaaS vendor actually optimizes for. If there is a mismatch between what your board tracks and what your platform incentivizes, you have a strategic misalignment that no feature release will fix. Do this before your next contract renewal.
For the VP of Customer Experience: Convene a cross-functional review of any vendor feature that aggregates behavioral data across channels. Predictive personalization requires predictive governance. If your compliance and security teams have not reviewed the data flows behind these capabilities, pause adoption until they have.
For the Contact Center Director: Use Testing and Simulation immediately, but build your own test library grounded in real operational failure patterns rather than vendor-supplied templates. The gap between a demo scenario and a production edge case is where CX breaks.
About InflectionCX
Strategy, technology, and operations from a single partner.
InflectionCX runs contact centers with human agents and AI agents inside one operating system. We handle the technology, the people, and the operations. You get lower costs, tighter compliance, and better outcomes.


