In July 2025, MIT’s Project NANDA released its study The GenAI Divide: State of AI in Business 2025. The report revealed a sobering reality: despite global investments of $30–40 billion, “95% of organizations are getting zero return” from GenAI initiatives (Executive Summary, p.2). Only 5% of GenAI pilots deliver measurable business impact.
At first glance, this may sound shocking. But should it be?
I would argue: not really. In fact, the 95% failure rate mirrors decades of data from traditional IT projects. Reports such as the well-known CHAOS Report have consistently shown that most IT projects either fail or deliver only partial success. So while the scale and hype around GenAI are unprecedented, the underlying causes of failure are deeply familiar. I dare to predict that the MIT study's findings will remain relevant for years to come, unless organizations take these insights seriously and apply the tried-and-tested steps explained in this post.
High Adoption, Low Transformation
MIT’s research — based on 300+ AI implementations, 52 executive interviews, and 153 survey responses — finds that most organizations enthusiastically explore GenAI, but very few implement it successfully.
“Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L impact.” (p.3)
The study highlights a paradox:
- 80% of organizations have investigated general-purpose LLMs, and 60% have explored embedded or task-specific GenAI tools (p. 6)
- But only 5% of enterprise-grade AI solutions ever reach production (p. 6–7).
Enterprise-grade AI often fails not because of technical limitations but because of what the study calls a “lack of fit with existing workflows” (Section 3, p. 4).
Why the Failure Rate Is Not Surprising
A 5% success rate mirrors long-standing issues in IT project delivery. The CHAOS Report has shown for decades that only about a third of IT projects succeed fully; the rest are late, over budget, or abandoned entirely. Organizational change, unclear objectives, and workflow misalignment are usually to blame.
Thus, enterprise-grade GenAI is not exceptional because it fails more often; it is exceptional because we expected it not to.
Workflow Misalignment: A Classic Problem in New Clothes
The MIT study states that custom GenAI tools often break down due to “brittle workflows, lack of contextual learning, and misalignment with day-to-day operations” (p. 3).
This is not unique to AI. The same happens when implementing ERPs, CRM systems, or HR software:
- Organizations adopt software before defining the underlying processes, leading to chaos.
- Users resist tools that don’t align with their daily practices.
“Generic tools like ChatGPT are widely used, but custom solutions stall due to integration complexity and lack of fit with existing workflows.” (Section 3, p. 4)
In other words, AI doesn’t fail due to technology — it fails because no one wrote down how work is actually done.
Defining Processes Is Not Enough — You Need Requirements
Defining business workflows is only the first step toward a successful enterprise GenAI project. Organizations must go further and define precise requirements for GenAI: what outputs it must produce, how quality is measured, how it integrates with existing systems, and what ROI is expected.
This requires tight collaboration between:
- Future users
- Chief AI Officers (CAIOs)
- Legal and compliance teams.
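To make such requirements enforceable and ROI measurable, they should be written in a testable form. Below is a minimal sketch of that idea in Python; the field names, metrics, and thresholds are illustrative assumptions, not taken from the MIT study:

```python
from dataclasses import dataclass

@dataclass
class GenAIRequirement:
    """One measurable requirement for an enterprise GenAI pilot.
    All names and values here are illustrative only."""
    description: str   # what output the system must produce
    metric: str        # how quality is measured
    threshold: float   # minimum acceptable score
    owner: str         # accountable party (users, CAIO, legal)

def acceptance_report(requirements, measured: dict) -> dict:
    """Check measured pilot results against each requirement's threshold."""
    return {
        r.description: measured.get(r.metric, 0.0) >= r.threshold
        for r in requirements
    }

reqs = [
    GenAIRequirement("Draft summaries usable without edits", "edit_free_rate", 0.80, "Operations"),
    GenAIRequirement("Answers grounded in approved sources", "citation_accuracy", 0.95, "Legal"),
]
print(acceptance_report(reqs, {"edit_free_rate": 0.85, "citation_accuracy": 0.90}))
# → {'Draft summaries usable without edits': True, 'Answers grounded in approved sources': False}
```

A spec like this gives all three groups above a shared artifact: users supply the descriptions, the CAIO the metrics, and legal a basis for vendor accountability.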
Unfortunately, many organizations skip or postpone requirements engineering, especially when buying off-the-shelf AI products that require customization.
This is disastrous — both economically and legally. Without a clear specification:
- You cannot measure ROI.
- You cannot enforce performance or liability against vendors.
The Methodology Trap: “We’re Agile” Is Not a Strategy
The MIT study notes that most projects stall not because of a lack of money or talent, but because tools “do not retain feedback, adapt to context, or improve over time” (pp. 3, 10).
This is partly due to poor project management.
While agile methods can improve outcomes, they are often used as an excuse to skip documentation and requirements. True agile requires:
- More resources, not fewer (e.g. daily stakeholder involvement, sprint reviews)
- Clear definitions of ‘done’
- Iterative validation against business goals
When organizations claim to be “agile” but avoid structure, AI projects become endless prototypes.
Missing Contracts, Missing Accountability
Another reason for failure — rarely mentioned in technical discussions — is the absence of proper project contracts.
A solid AI project contract should define, among other things:
- Project goals and KPIs
- Deliverables and milestones
- Roles and responsibilities of all parties
- Data usage rights and IP ownership
- Liability, auditability, and regulatory compliance
- Change management procedures
Too often, organizations start pilots based on slides, demos, and emails — but without a legally binding agreement. This undermines accountability and makes disputes almost impossible to resolve.
The Shadow AI Economy — Employees Are Moving Faster Than Organizations
One of the most fascinating insights from the MIT study is the rise of “shadow AI”:
“Employees use personal ChatGPT accounts, Claude subscriptions, and other consumer tools to automate significant portions of their jobs, often without IT knowledge or approval.” (Section 3.3, p. 8).
- 90% of employees in surveyed organizations use AI tools personally (p. 8)
- Only 40% of organizations have official AI subscriptions (p. 8)
This suggests that individuals have already crossed the GenAI divide — institutions haven’t.
It’s Not About the Model — It’s About Learning and Memory
A central insight from MIT is that model quality is not the main barrier. The real issue is learning.
According to the study, current AI tools:
- Don’t remember past interactions
- Can’t learn from feedback
- Need full context provided every time
As one user said:
“It repeats the same mistakes and requires extensive context input for each session.” (Section 4.3, p. 13)
This is why AI is trusted for quick tasks (email drafts, summaries), but not for high-stakes work. According to the study, 70% of users prefer AI for simple tasks, but 90% still prefer humans for complex projects (p. 13).
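The memory gap the study describes can be narrowed at the application layer by persisting user corrections between sessions instead of restating the full context each time. The sketch below is one simple way to do that; the class, file name, and storage format are my own illustrative assumptions, not a mechanism from the study:

```python
import json
import pathlib

class SessionMemory:
    """Minimal sketch: persist corrections across sessions so a GenAI tool
    need not be given the full context every time. Illustrative only."""

    def __init__(self, path: str = "feedback_memory.json"):
        self.path = pathlib.Path(path)
        # Reload corrections recorded in earlier sessions, if any.
        self.corrections = (
            json.loads(self.path.read_text()) if self.path.exists() else {}
        )

    def record(self, mistake: str, correction: str) -> None:
        """Store a user correction and persist it to disk."""
        self.corrections[mistake] = correction
        self.path.write_text(json.dumps(self.corrections))

    def context_prefix(self) -> str:
        """Build a prompt prefix from remembered corrections."""
        if not self.corrections:
            return ""
        lines = [f"- When you see '{m}', do: {c}"
                 for m, c in self.corrections.items()]
        return "Known corrections from past sessions:\n" + "\n".join(lines)
```

Even a thin layer like this changes the user experience the quoted interviewee complains about: the second session starts from the accumulated corrections rather than from zero.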
What Successful AI Projects Do Differently
The 5% of organizations that do succeed share certain characteristics. According to the MIT study (Sections 5–6, pp. 14–19):
✔ They customize AI to specific workflows, updating outdated workflows where necessary
✔ They demand systems that learn and improve over time
✔ They partner with external vendors rather than build alone
✔ They involve frontline workers — not just the IT department
✔ They measure success using business outcomes, not just accuracy scores.
This aligns strongly with best practices in classical IT governance and Business Process Outsourcing (BPO) management.
External Perspectives and Supporting Research
Other reputable sources reinforce these findings:
- McKinsey (2025) confirmed that companies “are beginning to take steps that drive bottom-line impact—for example, redesigning workflows as they deploy gen AI and putting senior leaders in critical roles, such as overseeing AI governance.” (McKinsey, The State of AI: How Organizations Are Rewiring to Capture Value, March 2025)
- The 2020 MIT Sloan Management Review/Boston Consulting Group report found that only 10% of companies report significant financial benefits from AI. (Sam Ransbotham, Shervin Khodabandeh, David Kiron, François Candelon, Michael Chu, and Burt LaFountain, Expanding AI's Impact with Organizational Learning, October 2020)
Conclusion
The GenAI Divide is real — but it is not inevitable.
To bridge it, organizations must rediscover proven principles of successful IT delivery:
- Define processes before buying tools
- Create clear, measurable requirements
- Use agile — but properly
- Establish robust contracts and governance
- Treat AI as a business project, not a tech experiment.
Ultimately, GenAI will not reward those who move fastest — but those who move thoughtfully, contractually, and collaboratively.
Are you planning to introduce enterprise-grade GenAI? If so, check out my legal advice for operators and users of AI systems or, better still, get in touch with me.