The Ten Decisions Between AI Investment and AI Impact

Mar 17, 2026

Three questions

I was in a board meeting when a CFO asked me why his company had spent millions on AI initiatives and had nothing to show for it. Not failure in the dramatic sense. The projects worked. The models performed. The dashboards displayed metrics. But nothing had changed in how the business actually operated. Revenue hadn’t moved. Costs hadn’t fallen. The organization was doing the same work with the same people, just now with better infrastructure underneath it.

He wanted to know what had gone wrong.

The answer wasn’t that AI had underdelivered. It was that the company had made decisions that, in isolation, seemed reasonable but in sequence, guaranteed they would never see a return. They had invested in AI without deciding what impact they wanted first. They had chosen platforms before outcomes. They had measured activity instead of change. And by the time they recognized the mistake, millions had already been committed.

This is the median outcome, not the exception. MIT’s Nanda Center found that 95% of AI pilots generate zero ROI. Most of that isn’t a technology problem. It’s a sequencing problem.

An operating partner I know uses a three-question test with every AI project her firm considers. What is the primary metric? Not the vanity metric, not the number that looks good in an investor update, but the single number that determines whether this project succeeded or failed. Who owns the outcome? Not who sponsors it or approves the budget, but whose compensation depends on hitting that metric. And what changed? If you launched this project and the metric hit the target, what is different about how the organization operates?

Most projects fail the third question before they’re proposed. The team has imagined the technology working, the model accurate, the integration complete, the team trained. They haven’t imagined the business operating differently as a result. Most AI investment disappears in that gap.

Sequence is where this goes wrong. When you start by choosing a platform, you spend months fitting your problem to the tool instead of fitting the tool to your problem. You pilot something because the vendor has a use case template, not because it matters to your business. You measure what the platform makes easy to measure rather than what determines success. Then you present results to leadership, and they wonder, as that CFO did, why millions of dollars haven’t moved a single operational metric.

Starting with the outcome changes the work. Not “use AI” — a specific metric that represents something the business actually cares about. The speed of contract review. Whether inventory forecasting is accurate enough to reduce carrying costs. Something where the difference between success and failure is measurable and material. Use case selection follows from that. Data assessment follows from that. Tooling follows from that. You assign an executive owner whose job depends on delivering the result, not a project manager who moves it forward. You plan to execute in ninety days instead of running a perpetual pilot. You measure the actual metric every week from day one, not the model’s performance in isolation.

The companies that generate real returns from AI are disciplined about sequence. They resist the urge to start with platforms. They decide what they need to change before they invest in the infrastructure to change it. They assign ownership in a way that creates accountability that survives the next reorganization. Most companies skip this because buying the tool feels like doing the work — the purchase announcement goes out, the implementation team assembles, the pilot launches, and by the time anyone asks whether the metric moved, the budget is spent.

It isn’t doing the work.

The CFO in that board meeting wasn’t unlucky. He had made reasonable decisions in a reasonable order — and that order was the problem. Millions of dollars later, he had excellent infrastructure for a business outcome nobody had defined.