
Exec Edge

A free weekday newsletter built for founders, CEOs, and senior leaders who are trying to stay sharp across strategy, people, negotiations, financials, and their own performance.

Feb 27 • 3 min read

95% of AI projects return nothing


February 27, 2026


Hi Everyone,

A July 2025 MIT study reviewed over 300 enterprise AI initiatives and found that 95% of them produced no measurable impact on profits.

U.S. businesses had collectively spent between $30 billion and $40 billion on generative AI at that point, and almost all of it went to pilots that stalled or features that never reached production.

The 5% that did work shared a common pattern.

Today, we're walking you through a simple gating process you can apply before any AI initiative gets budget, headcount, or a spot on your roadmap.

What the 5% are doing differently

They focused on one specific workflow

The successful companies identified a single pain point with a measurable cost, usually in back-office operations like document processing, claims handling, or internal support.

Over half of AI budgets went to sales and marketing, yet back-office automation delivered faster payback.

The companies that saw returns went where the ROI was clearest, even when it was less visible to leadership.

They bought before they built

Externally sourced AI tools succeeded about 67% of the time, while internally developed tools succeeded only about 33% of the time.

Lead researcher Aditya Challapally told Fortune that almost everywhere his team went, companies were trying to build their own solutions, even though purchased tools delivered more reliable results.

They set the pass/fail line up front

The winning companies defined a revenue, cost, or time-savings target before deployment.

Projects without a clear business metric drifted into permanent pilot mode.

What these numbers look like in practice

Bank of America deployed AI coding assistants to 18,000 developers, resulting in a 20% productivity boost.

CEO Brian Moynihan said AI techniques cut 30% of the coding work required to launch new products, saving the equivalent of roughly 2,000 roles.

UnitedHealth is targeting roughly $1 billion in AI-enabled operating cost reductions in 2026, backed by $1.5 billion in planned AI investment.

They've deployed over 1,000 AI use cases, each scoped to a specific workflow, such as claims processing or member support.

Workday reported that net new annual contract value from AI products more than doubled year over year, with over 75% of new deals including at least one AI solution.

They tied AI directly to subscription revenue growth rather than treating it as a separate initiative.

A gate for your next AI proposal

Before any AI project gets approved, run it through these five questions. If a proposal can't answer at least four of them clearly, it's most likely not ready.

1. What specific workflow does this change?

A good answer names the process and the people affected.

"Customer onboarding for mid-market accounts" works. "Improving efficiency across the org" doesn't give you anything to measure.

2. What business metric are we trying to move?

Pick a number your finance team already tracks: cost per transaction, time to close, support ticket volume, or revenue per rep.

If the project can't connect to an existing metric, you won't be able to tell whether it worked.

3. What does success look like in 90 days?

Set a threshold that would justify continued investment.

You might aim to reduce average review time from 4 hours to 2, or cut manual data entry by 40% for the pilot group.

4. Should we buy, build, or partner?

The MIT data strongly favors buying or partnering over building internally.

Building makes sense when the project depends on proprietary data that gives you an edge competitors can't replicate. For everything else, evaluate existing tools first.

5. Who is accountable for hitting that metric?

Assign one person who owns the result, with a clear timeline for reporting back.

Without that, the project will end up in the quarterly review with vague progress updates and no hard numbers.
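If your team tracks proposals in a shared sheet or internal tool, the four-of-five threshold above is easy to make mechanical. Here's a minimal sketch: the question wording comes straight from this issue, but the function name and the "a non-empty answer counts as clear" rule are illustrative simplifications, since judging whether an answer is genuinely specific is still a human call.

```python
# The five gating questions from this issue, used as checklist keys.
GATE_QUESTIONS = [
    "What specific workflow does this change?",
    "What business metric are we trying to move?",
    "What does success look like in 90 days?",
    "Should we buy, build, or partner?",
    "Who is accountable for hitting that metric?",
]

def gate(answers: dict[str, str], threshold: int = 4) -> bool:
    """Return True if at least `threshold` of the five questions
    have a non-blank answer (a stand-in for a real human review)."""
    clear = sum(
        1 for question in GATE_QUESTIONS
        if answers.get(question, "").strip() != ""
    )
    return clear >= threshold
```

A proposal that answers all five passes; one with only "improving efficiency across the org"-style blanks on three or more questions fails and goes back for rework before it gets budget.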

Try this today

Pick your weakest AI project (the one you'd have the hardest time defending in a board meeting).

Then run it through the five questions and write down where the gaps are.

If it fails on three or more, bring those gaps to your next leadership meeting and let the criteria make the case.

Go deeper

👉 Fortune: MIT report: 95% of generative AI pilots at companies are failing – interview with lead researcher on what separates the 5% that succeed

👉 Harvard Business Review: Stop Running So Many AI Pilots – why going deep on one domain beats spreading AI experiments across departments

👉 Harvard Business Review: Beware the AI Experimentation Trap – how to avoid repeating the same mistakes companies made during digital transformation

👉 McKinsey: The State of AI – global survey of 1,993 organizations on what high performers do differently when scaling AI

Coming up on Monday

We're breaking down how to decide between creating a new category and winning inside a crowded one.

That's it for this week! Have a great weekend.

P.S. How many AI projects is your company running right now, and how many have a clear metric attached?




