2020–2022: The quiet before the storm
Before ChatGPT entered the conversation in late 2022, enterprise AI strategy was a slower, more methodical game. Companies invested in machine learning for demand forecasting, fraud detection, recommendation engines. The work was real, often valuable, but rarely headline-grabbing.
The dominant strategic approach was the "Center of Excellence" model: a small team of data scientists, often tucked inside IT, tasked with proving that AI could deliver ROI. McKinsey's annual surveys during this period consistently showed the same thing: early winners aligned AI investments with specific business outcomes. They did not chase technology for its own sake. The companies that changed processes to facilitate organizational learning with AI captured the most value.
The limitation of this era was scope. Most organizations ran a handful of use cases that never scaled beyond a single department. But the discipline of tying AI projects to measurable outcomes was, in hindsight, a strength many would later abandon.
2023: The explosion
ChatGPT changed the equation overnight. Suddenly, every executive had a personal reference point for what generative AI could do. Strategy cycles compressed from quarters to weeks. According to the Stanford HAI 2025 AI Index Report, AI business usage jumped from 55% in 2023 to 78% in 2024. GenAI investment surged to $33.9 billion globally. The race was on.
But the speed created a new failure mode. Companies that had spent years carefully scoping AI use cases suddenly faced board mandates to "do something with AI," fast. The strategic playbook shifted from bottom-up experimentation to top-down urgency, often without the infrastructure, data governance, or workflow redesign to support it.
Snapchat's rollout of My AI in 2023 became one of the most visible examples of strategy following hype rather than value. The company embedded an OpenAI-powered chatbot at the top of every user's chat feed, assuming novelty would drive engagement. Instead, users found it intrusive. App store ratings cratered. Searches for "delete Snapchat" spiked nearly 500%.
The lesson was not that AI chatbots are bad. The lesson was that adding AI to a product without understanding what problem it solves for the user is not a strategy. It is a feature launch dressed up as one.
The prompt engineering detour
One of the more instructive missteps of this period was the corporate obsession with prompt engineering. In 2023, organizations poured significant resources into dedicated prompt engineering roles, training programs, and internal skill academies. A McKinsey Global Survey found that 7% of organizations adopting AI had already hired dedicated prompt engineers. Job postings promised six-figure salaries. Universities launched prompt engineering certificates. The assumption was clear: mastering how to talk to AI would be a durable competitive advantage.
It was not. By 2025, the role was widely considered obsolete. Models had become capable enough to interpret vague inputs, ask clarifying questions, and self-correct. As Microsoft's Jared Spataro put it, generative AI can now essentially prompt itself. Nationwide CTO Jim Fowler captured the shift more bluntly: prompt engineering is becoming a capability within a job title, not a job title in itself.
The companies that invested heavily in prompt engineering academies learned a painful lesson about betting on technique rather than strategy. The skill did not disappear entirely. It got absorbed into general AI literacy, much like spreadsheet skills in the early 2000s. But the dedicated roles, the training budgets, the certification programs? Largely wasted.

Andrej Karpathy, the former Tesla AI director and OpenAI founding member, reframed the conversation in mid-2025 by popularizing the term "context engineering." His point: the real skill is not crafting a single clever prompt. It is designing the entire information environment the model operates in. That is a systems architecture problem, not a writing exercise.
This is what happens when organizations optimize for the current state of a technology instead of building capabilities that endure across technology shifts. The pace of AI advancement punishes tactical bets. It rewards structural ones.
2024–2025: The reckoning
By 2025, the data was in. And it was sobering. MIT's "GenAI Divide" study reviewed over 300 publicly disclosed AI implementations and found that only 5% had generated measurable profit-and-loss impact. S&P Global Market Intelligence reported that the share of businesses scrapping most of their AI initiatives rose to 42% in 2025, up from 17% the year prior.
McKinsey's 2025 State of AI survey confirmed the pattern from a different angle: while 88% of organizations reported using AI in at least one business function, only about one-third had begun to scale. The remaining two-thirds were stuck in what the industry started calling "pilot purgatory." Running experiments that never graduated to production.
The root causes were consistent across studies. Poor data quality. Rigid workflows that AI was simply bolted onto. Unclear ownership. A near-universal lack of meaningful KPIs tied to business outcomes. As a Fast Company analysis put it, most companies treated AI as a cost-cutting instrument. Corporate liposuction. Not a catalyst for building new capabilities.
What actually worked
The roughly 5% of high performers identified across the MIT and McKinsey studies shared a recognizable set of practices. They were not necessarily the ones spending the most or deploying the most sophisticated models. They were the ones with the most organizational discipline.
First, they started with strategy, not technology. The winners defined what business outcome they needed. Revenue growth. Customer experience improvement. Operational transformation. Then they asked how AI could serve that goal. The inverse approach, deploying AI and searching for impact, consistently failed.
Second, they redesigned workflows. McKinsey's data showed that workflow redesign had the single strongest correlation with EBIT impact of any organizational attribute tested. Yet only 21% of organizations using generative AI had fundamentally redesigned even some of their workflows. High performers were nearly three times more likely to have done so.
Third, they invested in governance and human-in-the-loop controls. High performers were far more likely to have established structured AI governance, validation processes, and clear escalation paths. The companies that skipped governance did not just face compliance risk. They produced unreliable outputs that eroded internal trust in AI altogether.
Duolingo offers a counterpoint to the Snapchat story. The language-learning platform integrated AI deeply into its product experience. Not as a gimmick, but as infrastructure that improved personalization and content generation. The result: over 116 million monthly active users and $748 million in revenue by end of 2024, up more than 40% year-over-year. AI was the enabler, not the headline.
The reflection
Looking back at five years of AI strategy, the pattern is almost uncomfortably simple. The organizations that failed treated AI as a technology problem to be solved by deploying tools. The organizations that succeeded treated it as a business transformation problem that required rethinking how work actually gets done.
The irony is that this insight is not new. It is the same lesson we learned from ERP implementations in the 1990s. From cloud migrations in the 2010s. From every major technology wave in corporate history. Technology alone does not create value. The hard work is organizational: aligning incentives, redesigning processes, building capabilities, measuring outcomes.
We are now entering the next phase. Agentic AI. Autonomous workflows. AI systems that do not just recommend but act. McKinsey reports that 62% of organizations are already experimenting with AI agents. If history is any guide, the same 5% who did the hard organizational work will be the ones who extract value from this next wave. The other 95% will deploy agents on top of unreformed processes and wonder why the results disappoint.
Maybe the most strategic thing we can do right now is not to move faster, but to pause long enough to ask the right questions. Not "what can this technology do?" but "what does our organization actually need to change?"
The answer to that question has never been a model.