TAKEAWAYS
According to research by MIT, enterprise investment in generative artificial intelligence (GenAI) has surged to an estimated US$30 billion to US$40 billion globally. Yet, for most organisations, the payoff remains elusive.
The study, “The GenAI Divide: State of AI in Business 2025”, found that only about 5% of GenAI pilots delivered measurable profit-and-loss impact, while 95% of organisations saw zero return on their GenAI initiatives.
It examined 300 public implementations across sectors, and the conclusion is stark – the gap between success and failure is widening, and the divide has less to do with model quality or regulation than with how organisations approach implementation.
In other words, GenAI is not failing enterprises; enterprises are failing GenAI.
The headline statistic – that 95% of GenAI initiatives fail – has travelled fast. But some experts caution against reading it too literally. Laurence Liew, Director of AI Innovation at AI Singapore, describes it as “a masterclass in misleading statistics”. Having observed multiple technology waves from open source to cloud computing, he argues that “early pilots aren’t supposed to deliver immediate profits”. Instead, “they’re supposed to build capability”. In his view, the same data that labels projects as failures often shows employees using AI tools daily. “The technology isn’t failing; enterprise implementation strategy is.” As a case in point, he shares that at AI Singapore, more than 70% of carefully scoped and properly resourced AI projects eventually move into production.
The MIT researchers echo this point. Their findings suggest that organisations stuck in “pilot purgatory” often underestimate the organisational changes required to scale AI. Those that succeed treat GenAI less as a plug-and-play tool and more as a long-term capability-building effort.
Beyond culture, many failures stem from a more basic problem, namely, unclear objectives.
“The most critical mistake organisations make is failing to define why they are using AI in the first place,” says Lem Chin Kok, CEO of GenAI firm AiRTS. Without clear business objectives, he argues, there is no way to assess whether a GenAI initiative has succeeded or failed.
Another major issue, he says, is not knowing the capabilities, strengths and limitations of the AI models under consideration. “If an organisation does not understand these aspects, it cannot determine which model is best suited to achieve the defined objectives, or which vendor offers the most appropriate solution.”
Mr Lem adds that too often, enterprises adopt AI out of fear of missing out, deploying chatbots or copilots without a clear link to strategic priorities. The result is experimentation without direction, and pilots that never scale.
As GenAI moves closer to core business functions, issues of trust, security and compliance become harder to ignore.
According to Mr Lem, “enterprises must first define the regulatory obligations and business security needs that govern their operations. These requirements form the foundation for selecting and implementing AI solutions responsibly.”
“Next, organisations should evaluate the security features of AI models to ensure they align with both business objectives and compliance requirements. Understanding how the model handles data, privacy, and potential vulnerabilities is critical.” He warns that enterprises often rush ahead without fully assessing regulatory obligations, data security requirements or long-term technical implications. Choices between open-source and “black box” models, or between public cloud and on-premise deployment, carry trade-offs that can shape an organisation’s risk profile for years.
“AI adoption must be guided by objectives, not trends,” Mr Lem stresses. “If AI is truly strategic, organisations need to think about ownership, governance and long-term alignment from the start.”

While technology dominates headlines, many failures trace back to human capital.
Dr James Ong, Founder and Managing Director of Artificial Intelligence International Institute and co-author of AI for Humanity, argues that enterprises often deploy GenAI without understanding where the technology is heading, or its fundamental limitations. He points to gaps in data governance, particularly around data quality, ownership and suitability. Without reliable data foundations, even the most advanced models struggle to deliver value.
More broadly, Dr Ong believes organisations underestimate the need for what he calls “meta thinking” – the ability of leaders and users to reflect on how human expertise and AI can work together. Without this, GenAI risks being bolted onto existing workflows rather than transforming them.
From a workforce perspective, Arthoven Ng, Founder of Talent Developers Singapore, identifies three recurring gaps, namely, process redesign, change management and creativity in application. “Most organisations focus on job redesign, when AI needs to be applied at a system level,” he says. Adding AI tasks to existing roles, rather than rethinking processes, often increases workload instead of reducing it. He also highlights psychological barriers. Employees worry about career safety and whether sharing AI expertise might make them redundant. In such environments, AI adoption happens quietly and unevenly, limiting organisational learning.
One of the most cited barriers to scaling GenAI is employee resistance. However, Jasmine Liew, Organisation Development Director at training consultancy Breakthrough Catalyst and a doctoral candidate in Psychological Safety and Change Initiatives, believes that employees are not resisting change itself but the losses that come with it. “When people are asked to adopt a new AI tool or integrate AI into their work, they have to first unlearn and let go of their familiar skills and a sense of mastery,” she points out.
Ms Liew argues that many GenAI programmes fail because they are framed as top-down initiatives designed to benefit the organisation rather than the employees, who are the users. Employees, she says, are far more receptive when AI is positioned as a way to reduce pain points – such as repetitive, time-consuming tasks – rather than as another system imposed from above. “Employees welcome change when they are sure it benefits them as users,” she explains. Reframing the narrative is often more effective than additional training budgets or compliance mandates, she adds.
Across these perspectives, a pattern emerges. Organisations extracting real value from GenAI tend to share several traits.
They start with clear, high-impact business objectives rather than generic experimentation. They invest in data foundations and governance early. They treat pilots as learning exercises, not failed investments. And they focus on user-centric design, addressing frontline pain points before scaling. They also recognise that GenAI is only one part of the AI landscape. Predictive models, optimisation tools and domain-specific systems often deliver more immediate returns when aligned with business needs. Most importantly, they approach GenAI as a transformation challenge, not a software deployment.
As MIT’s research suggests, the GenAI divide is less about who has access to advanced models and more about who is willing to rethink how work gets done.
For business leaders navigating this shift, the lesson is clear – the hardest problems are not technical; they are organisational.