Security and trust
How trust impacts AI adoption, and what vendors can do to turn it into a competitive advantage

Chris Peake
Chief Trust Officer
Published on: April 15, 2026

I still wholeheartedly believe trust is an enabler of a successful AI strategy, not a hindrance. But as AI innovation accelerates, I’m seeing a paradox emerge: High interest in AI, but investments frequently stall due to trust concerns.
To find out why, we surveyed more than 2,000 business and technology leaders across the U.S. and U.K.* We uncovered an AI trust deficit that’s hindering adoption of certain AI-powered solutions among mid-sized and large organizations.
Let’s dig deeper into the data to talk about why AI deployment stalls, what companies sacrifice when that happens, and what can be done about it.
Why enterprise AI adoption stalls despite massive investment
Execs feel the pressure. 87% of revenue and sales leaders report a top-down push from their boards and CEOs to implement AI (Gartner, 2025).
In practice, 88% of companies reported regular AI use, but only one-third of those said they’ve begun to scale their AI programs (McKinsey, 2025). And in our survey, nearly six out of ten leaders said they have delayed, paused, or canceled an AI deployment because of trust concerns.
This isn't a question of capability or budget. It’s a measurable trust barrier that sits squarely between AI pilots and full production deployment.
What drives enterprise distrust in AI systems
Enterprise leaders recognize AI's potential, but share three core concerns that create an adoption barrier:
1. Data privacy and security concerns
Data privacy ranked first among executives' major concerns, cited by 34% of respondents. Leaders are understandably worried about how their information is handled.
Training AI on sensitive data is risky. Without proper guardrails, organizations face breaches and compliance gaps. AI systems can "remember" and potentially expose training data, putting customer records, financial information, and intellectual property at risk.
2. Lack of AI explainability and transparency
Explainability means understanding how and why AI made specific recommendations. The need for greater clarity here emerged as a significant blocker to trust. Leaders pointed to difficulty understanding how AI arrives at its outputs (30%) and a lack of vendor transparency (28%) as top concerns.
Opening this black box is critical. Otherwise, you could face significant compliance and liability risks. For example, imagine failing to explain to the board why the AI predicted a specific deal outcome.
3. Insufficient governance and oversight frameworks
AI is new enough that many enterprises are working to define and implement formal governance processes, like deployment authority, data access controls, and decision review processes. This governance gap between IT security and the unique needs of AI keeps many projects stuck in the pilot phase.
Our survey data corroborated this: More than 60% of leaders said explainability and safeguards were among their prerequisites for trust. That means trust isn’t simply a feeling; it’s explainability, safeguards, and guarantees.
How trust issues directly impact revenue outcomes
75% of leaders we polled feel their businesses are missing out on the transformational gains of AI due to a lack of trust in it. That means trust is more than an IT concern. It’s a revenue issue that impacts win rates and sales cycle length.
The more leaders feel they can trust AI, the more confidently they can invest in their companies’ futures, and the faster the technology can deliver compounding benefits based on real use.
Because context-aware AI is built on real customer interactions, it empowers everyone across the business. In your revenue org, it makes your sellers more efficient, uncovers insights for leaders, and helps CROs quickly identify the crucial actions needed to increase growth.
This is how AI trust becomes your greatest competitive advantage in both what you sell and how you sell it.
With the Gong Revenue Graph as your trusted data foundation, you can flag trust-related deal risks using AI Deal Monitor. You can also identify when trust and security themes are trending across your deals with AI Theme Spotter.
What enterprises need to trust AI at scale
Innovation waits for no one. Keeping up with every new AI trend and its safety implications is impossible. But governance should not be a barrier to progress. Instead, it must be a set of guardrails that empower your team to scale responsibly.
Enterprises need a framework that progresses from basic compliance to full operational trust. That starts with:
- Clear custody: Defined limits on AI decision-making authority.
- Complete transparency: Full visibility into data sources and decision logic, plus third-party audits.
- Proven security standards: Adherence to recognized frameworks like SOC 2 and ISO.
Responsible AI practices have to be a fundamental part of every decision you make. But you can’t hit pause, and you can’t hold back. Throttling the technology will kill your forward momentum, but guardrails will steer you toward consistent and predictable growth.
Building an AI operating system with trust as a foundation
At Gong, we are committed to helping customers navigate this proactively. We continuously strive to address CIO and CISO concerns by being transparent about potential risks and by embedding flexible controls.
Our goal with the Gong Revenue AI Operating System is that trust isn't a bonus — it’s the foundation. We're cutting down on the potential risks by giving customers the full picture: Total transparency on the AI models, how they learn, and what data they crunch.
We apply enterprise-grade security and governance as non-negotiable guardrails, making sure proprietary data stays protected. When you can trust the platform, adoption isn't a struggle. It's a launchpad for innovation, supercharged efficiency, and more predictable growth for your whole business.
*Methodology
The research was conducted by Censuswide, among a sample of 2,056 business leaders at medium and large businesses across the U.S. and U.K. The data was collected between January 6, 2026, and January 9, 2026. Censuswide abides by and employs members of the Market Research Society and follows the MRS code of conduct and ESOMAR principles. Censuswide is also a member of the British Polling Council.

Executive leader with over 25 years in Security and Information Technology, who successfully scaled multiple hyper-growth SaaS organizations past the $1B ARR mark. Expertise lies in advising on enterprise risk, regulatory compliance, and embedding security into product lifecycles to enable secure AI adoption.
