And What Leaders Must Do Now to Turn It Around
The uncomfortable truth
The life sciences industry is experiencing a collective awakening about artificial intelligence. Board meetings feature AI on every agenda. Budgets are being reallocated. Pilots are launching across clinical, quality, and regulatory functions. The energy is palpable, the investment substantial, the expectations sky-high.
And yet, something is deeply wrong.
A recent MIT study delivered a stark finding: 95% of generative AI pilots fail to deliver measurable business impact. The research, conducted by MIT’s Project NANDA, analyzed over 300 public AI implementations, included 52 organizational interviews, and surveyed 153 senior leaders. Despite enterprise investment of $30 to $40 billion globally, only 5% of AI initiatives are generating measurable returns. (MIT Project NANDA, “The GenAI Divide: State of AI in Business 2025”)
This is not an isolated finding. The pattern is consistent across research performed by multiple organizations:
- Gartner (2025): Through 2026, 60% of AI projects will be abandoned due to a lack of AI-ready data
- McKinsey (2025): Only 39% of survey respondents report EBIT impact from AI at the enterprise level
- S&P Global (2025): 46% of AI projects are scrapped between proof of concept and broad adoption
The message is clear: whether you measure by ROI, production deployment, or project completion, the vast majority of AI initiatives are falling short of their promise.
In the life sciences, the stakes are even higher. We operate in an environment where regulatory scrutiny is intense, where validation requirements add complexity to every deployment, and where the margin for error in clinical and manufacturing contexts is essentially zero. If general enterprise AI fails at rates between 46 and 95%, what happens when you layer on GxP requirements, FDA expectations, and EU AI Act compliance?
The failure rate in regulated industries may be even higher because we have been solving the wrong problem from the start.
The paradox of high activity and low outcomes
Here is what makes this failure rate so puzzling: life sciences organizations are not sitting idle. According to industry surveys, nearly 60% of pharma executives say they have moved beyond ideation to actively building AI use cases. Investment is flowing. Pilots are multiplying. Vendor partnerships are forming.
Yet outcomes remain elusive.
The MIT study identified a critical factor it calls the “learning gap.” The biggest problem was not that AI models lacked capability; it was that organizations did not understand how to use AI tools properly or how to design workflows that capture benefits while minimizing risks. As one CIO quoted in the study put it: “We’ve seen dozens of demos this year. Maybe one or two are genuinely useful. The rest are wrappers or science projects.”
Accenture’s research reveals a troubling disconnect that helps explain this gap: while 94% of employees say they are ready to learn AI skills, only 5% of organizations are providing training at scale. (Accenture, 2024). You cannot adopt a transformative technology without transforming how your people work.
MIT’s separate Project Iceberg research, conducted in partnership with Oak Ridge National Laboratory, quantifies what many of us sense intuitively: the visible disruption from AI is just the tip of the iceberg. Their analysis reveals that 11.7% of the U.S. workforce, representing approximately $1.2 trillion in wage value, is exposed to automation through cognitive and administrative work that spans every industry, not just technology. (MIT Project Iceberg, 2025). The transformation potential is vast, but most organizations are not positioned to capture it.
The 10-20-70 reality
Boston Consulting Group has studied thousands of AI implementations to understand what separates success from failure. Their findings should fundamentally reshape how we think about AI transformation.
BCG’s 10-20-70 principle reveals that:
- 10% of AI success comes from algorithms and models
- 20% of AI success comes from data quality and technology infrastructure
- 70% of AI success comes from people and processes
(BCG, “From Potential to Profit: Closing the AI Impact Gap,” January 2025)
Read that again. The technology itself, the algorithms that generate so much excitement and vendor marketing, accounts for only 10% of whether an AI initiative succeeds or fails. The remaining 90% comes from data foundations and, overwhelmingly, from people and processes.
Yet where do most organizations focus their attention? On tool selection. On model evaluation. On feature comparisons between competing platforms. We have been obsessing over the 10% while neglecting the 70% that actually determines success.
BCG’s research also found that only 5% of companies are successfully achieving bottom-line value from AI at scale, while 60% report minimal gains despite substantial investment. The elite organizations, which BCG calls “future-built” companies, generate 1.7 times higher revenue growth and 1.6 times greater EBIT margins than their struggling peers. The gap is widening, not narrowing. (BCG, “The Widening AI Value Gap,” September 2025)
The five reasons AI adoption fails in life sciences
Understanding why AI fails is the first step toward ensuring your initiatives succeed. In our work across dozens of life sciences organizations, we have identified five recurring patterns that derail AI adoption.
1. Generic AI in GxP environments
Most AI solutions are built for general enterprise use. They do not understand the nuances of GxP compliance, validation requirements, or regulatory expectations. When you deploy generic AI into a regulated environment, you create friction at every turn.
Quality teams raise concerns about audit trails. Validation specialists struggle to apply traditional protocols to probabilistic systems. Regulatory affairs asks how AI-assisted outputs should be documented. The result? Pilots that never progress to production. Initiatives that stall in endless review cycles. Technology investments that deliver a fraction of their potential.
The MIT study found that companies buying specialized, vendor-led AI solutions succeeded approximately 67% of the time, while internal builds succeeded only about 33% of the time. Life sciences organizations need AI that is designed for GxP from the ground up, not retrofitted after the fact.
2. The data foundation problem
The industry talks enthusiastically about AI potential, but the financial reality is sobering. While 88% of organizations report regular AI use in 2025, McKinsey’s latest research reveals that only 39% have achieved a measurable impact on EBIT. This performance gap is largely attributed to “Data Debt,” the structural burden of managing siloed and inconsistent information that prevents AI from delivering enterprise-level value.
In life sciences, this challenge is amplified. Data sits in GxP validated systems that cannot be easily modified without compromising regulatory standing. Information flows through fragmented supply chains involving CDMOs and CROs, while decades of acquisitions have created incompatible architectures. Without a unified data fabric, AI models lack the clean, contextual information required to produce reliable results.
The consequence of neglecting these fundamentals is significant. Gartner predicts that through 2026, organizations will abandon 60% of AI projects that are unsupported by AI-ready data (Gartner, February 2025). A robust data foundation is not a secondary consideration; it is a prerequisite for AI success.
3. Validation complexity paralysis
Life sciences organizations are trained to be cautious. Regulatory consequences for system failures can include warning letters, consent decrees, and product recalls. This caution is appropriate and necessary.
But when it comes to AI, appropriate caution often becomes paralysis. Organizations cannot figure out how to validate AI systems using traditional approaches designed for deterministic software. Questions multiply: How do you validate a system that produces different outputs for the same input? How do you document training data? How do you handle model drift?
Without clear answers, organizations delay. And delay. And delay. The MIT study found that large enterprises take nine months on average to scale AI pilots, compared to just 90 days for mid-market firms. Meanwhile, competitors who have figured out risk-based approaches to AI validation move ahead.
4. The skills gap nobody wants to acknowledge
Most life sciences employees do not know how to work with AI effectively. They do not know how to prompt. They do not know how to verify outputs. They do not know how to integrate AI into their existing workflows. And most organizations are not investing in fixing this gap.
Remember the Accenture finding: 94% of employees want to learn AI skills, but only 5% of organizations are providing training at scale. This gap is not just an HR problem. It is the primary reason AI initiatives fail to scale beyond pilot groups of enthusiasts.
The MIT study uncovered a fascinating pattern: a “shadow AI economy” has emerged in which workers from over 90% of companies report using personal AI tools like ChatGPT for work, even when official enterprise programs stall. Individuals are crossing the divide with flexible tools while organizational initiatives struggle. This tells us the problem is not employee resistance. The problem is how organizations deploy AI.
5. No clear business case
“We need AI” is not a strategy. Yet that is exactly how many initiatives begin. A competitor announces an AI partnership. A board member asks about AI plans. A vendor delivers an impressive demo. And suddenly, the organization is launching AI projects without clear use cases tied to measurable outcomes.
BCG found that leading companies pursue, on average, only about half as many AI opportunities as their less advanced peers. They focus on depth over breadth, prioritizing an average of 3.5 use cases compared to 6.1 for struggling companies. The MIT study reinforced this: investment patterns reveal misaligned priorities, with sales and marketing capturing 50% of AI budgets, even though back-office automation often yields higher returns.
Without clear outcomes, AI projects become science experiments. Interesting, perhaps educational, but unlikely to deliver business value.
The five fixes that actually work
Understanding the problems is necessary but not sufficient. Here is what successful organizations do differently.
Fix 1: Start with GxP-ready AI
Do not try to retrofit generic AI for regulated environments. Start with solutions designed for GxP from the ground up. This means pre-validated governance frameworks, risk-based validation approaches that align with Computer Software Assurance (CSA) principles, and audit trails built in from day one.
Look for partners who understand the unique requirements of life sciences, not just AI vendors who happen to have life sciences customers. The MIT data shows vendor-led, specialized solutions succeed at twice the rate of internal builds.
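What might “audit trails built in from day one” look like in practice? Here is a minimal sketch, not a validated pattern: `model_fn` is a placeholder for whatever AI client you actually use, and the record fields and JSON Lines log are illustrative assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

def audited_ai_call(prompt, model_fn, model_id, user, log_path="ai_audit_log.jsonl"):
    """Run an AI call and append an audit record of who asked what, when."""
    response = model_fn(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "model_id": model_id,
        # Hashes let an auditor later confirm the logged text was not altered
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response": response,
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }
    # Append-only JSON Lines; a production system would chain hashes and
    # write to an access-controlled, backed-up store.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return response

# Usage with a stand-in model function
answer = audited_ai_call(
    prompt="Summarize deviation report DR-1234.",
    model_fn=lambda p: "Summary: ...",  # replace with a real AI client call
    model_id="example-model-v1",
    user="jdoe",
)
```

The point is architectural: when every AI interaction produces a durable, attributable record by default, quality and regulatory reviews become routine rather than roadblocks.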
Fix 2: Build your data foundation first
Before launching AI pilots, assess your data readiness honestly. Identify the gaps. Create a roadmap to improve data quality, accessibility, and governance. This is not glamorous work, but without it, every AI initiative is built on sand.
BCG’s research emphasizes that 20% of AI success comes from data and technology. Investing in this foundation pays dividends across every AI initiative, not just the first one.
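As a concrete starting point for that assessment, simple profiling goes a long way. The sketch below assumes a pandas DataFrame of batch records; the column names and data are hypothetical examples, not a prescribed schema.

```python
import pandas as pd

def profile_readiness(df: pd.DataFrame) -> pd.DataFrame:
    """Per-column readiness profile: completeness, distinct values, and dtype.
    A crude first pass at surfacing data debt before any AI pilot launches."""
    return pd.DataFrame({
        "completeness_pct": (1 - df.isna().mean()) * 100,
        "distinct_values": df.nunique(),
        "dtype": df.dtypes.astype(str),
    })

# Hypothetical batch-record extract with typical quality problems
df = pd.DataFrame({
    "batch_id": ["B001", "B002", "B002", None],          # duplicate and missing IDs
    "site": ["US-01", "us-01", "EU-02", "EU-02"],        # inconsistent casing
    "yield_pct": [92.1, None, 88.4, 90.0],               # gaps in critical data
})
print(profile_readiness(df))
print("duplicate batch_ids:", df["batch_id"].duplicated().sum())
```

Even a rough profile like this turns a vague worry about “data quality” into a prioritized remediation list.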
Fix 3: Adopt risk-based continuous validation
CSA principles apply to AI. Embrace risk-based approaches that focus on intended use and critical thinking rather than exhaustive scripted testing. Build monitoring, drift detection, and ongoing assurance into your operating model.
The goal is not to avoid validation but to validate intelligently, allocating effort based on risk rather than applying the same approach to every system regardless of its criticality.
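To make “drift detection” tangible: one common, simple technique is to compare a model’s recent output distribution against a baseline captured at validation, for example with a two-sample Kolmogorov-Smirnov test. The sketch below is illustrative only; the scores, threshold, and trigger logic are assumptions to be set by your own risk assessment.

```python
import numpy as np
from scipy import stats

def check_drift(baseline: np.ndarray, current: np.ndarray, alpha: float = 0.01) -> bool:
    """Flag drift when the current score distribution differs significantly
    from the validated baseline (two-sample KS test)."""
    statistic, p_value = stats.ks_2samp(baseline, current)
    drifted = p_value < alpha
    print(f"KS statistic={statistic:.3f}, p={p_value:.4f}, drift={'YES' if drifted else 'no'}")
    return drifted

# Illustrative data: model confidence scores at validation vs. this month
rng = np.random.default_rng(42)
baseline = rng.normal(0.85, 0.05, 1000)  # scores captured during validation
current = rng.normal(0.78, 0.07, 1000)   # recent production scores
check_drift(baseline, current)           # flags drift -> trigger a human review
```

A check like this, run on a schedule with documented thresholds and escalation paths, is what “ongoing assurance” means in an operating model.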
Fix 4: Invest in people first
This is the 70% that determines success. Real AI enablement means:
- Role-based training tied to specific outcomes, not generic AI overviews
- Hands-on practice with real workflows, not theoretical exercises
- Verification skills to catch AI hallucinations and errors (illustrated in the sketch after this list)
- Psychological safety to experiment and fail without career consequences
- Champion development to create internal advocates who train their peers
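As one illustration of what verification skills can mean in practice, here is a minimal grounding check: confirming that values an AI assistant extracted actually appear in the source document. The document text and field names are hypothetical, and a verbatim match is a deliberately simple heuristic; it catches many hallucinations, not all.

```python
def verify_extraction(source_text: str, extracted: dict) -> dict:
    """Check each AI-extracted value against the source document.
    Values that never appear verbatim in the source are flagged for review."""
    return {field: (str(value) in source_text) for field, value in extracted.items()}

source = "Batch B-1042 was released on 2025-03-14 with a yield of 91.7%."
ai_output = {"batch": "B-1042", "release_date": "2025-03-14", "yield": "93.7%"}
print(verify_extraction(source, ai_output))
# {'batch': True, 'release_date': True, 'yield': False} -> route to a reviewer
```

Teaching people to run and interpret checks like this, rather than trusting AI output on faith, is what separates scaled adoption from pilot theater.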
At USDM, we have seen what happens when you invest in people first. Recruiters who once spent 60 minutes per resume now complete the same work in 5 minutes, and then voluntarily train their colleagues to do the same. Champions emerge when you create the conditions for them to succeed.
Fix 5: Start small, think big
Do not try to transform everything at once. Pick 2-3 high-value use cases with clear ROI. Prove value. Build credibility. Then expand. The organizations succeeding with AI are not the ones with the biggest budgets. They are the ones who have mastered focused execution.
In BCG’s 2024 research, leading companies expected more than twice the ROI from AI that other companies did, and they successfully scaled more than twice as many AI products across their organizations. Focus creates momentum. Momentum creates results.
Build AI enablement into your 2026 budget
As you finalize 2026 planning, remember: the organizations that win the AI race will not be those that spend the most on technology. They will be the ones that invest most wisely in their people and have the courage to identify and redesign their processes and workflows. Success in 2026 requires moving beyond “bolting on” tools to existing tasks and instead rewiring the organization to be AI-native from the ground up.
The gap between AI leaders and laggards is widening. BCG reports that future-built companies plan to spend 26% more on IT and dedicate 64% more of their IT budget to AI in 2025. They are reinvesting their early gains to pull even further ahead. For organizations falling behind, the window to catch up is closing.
If you are not sure where to start, if you need help identifying the right use cases, building the right skills, or creating the right culture, that is exactly the conversation we have with clients every day. Reach out! Let us talk about what AI transformation could look like for your organization in 2026. And continue your learning at the USDM Summit 2026, where industry and life sciences leaders come together to move beyond AI pilots and share practical strategies for scaling compliant AI with real business impact.
Explore these USDM case studies and business outcomes:
- AI-Powered Quality Management for Life Sciences
- Streamlining Clinical Trials with the LLM Protocol Assistant
- Leveraging AI for Enhanced Clinical Trial Data Management in Life Sciences
- Complaint Processing Enhanced by AI in Medical Device Manufacturing
Blog References
- MIT Project NANDA. “The GenAI Divide: State of AI in Business 2025.” August 2025.
- S&P Global. “Generative AI experiences rapid adoption, but with mixed outcomes.” 2025.
- McKinsey. “The State of AI: Global Survey.” 2025.
- Boston Consulting Group (BCG). “From Potential to Profit: Closing the AI Impact Gap.” January 2025.
- BCG. “The Widening AI Value Gap.” September 2025.
- BCG. “AI Adoption in 2024: 74% of Companies Struggle to Achieve and Scale Value.” October 2024.
- Accenture. “Work, Workforce, Workers: Reinvented in the Age of Generative AI.” 2024.
- MIT Project Iceberg (MIT and Oak Ridge National Laboratory). “The Iceberg Index.” 2025.
- Gartner. “Lack of AI-Ready Data Puts AI Projects at Risk.” February 2025.
