The AI Project That Never Gets Scoped
Vague AI initiatives don't die — they consume budget indefinitely. Here's how to kill the cycle before it starts.
Every company doing AI work has one. The project with no end date, no clear output, and a standing weekly meeting that nobody cancels because nobody knows if it’s done or not.
It wasn’t supposed to be this way. It started with a real problem. Then it became a “workstream.” Now it’s just a line item.
How it happens
The original ask was reasonable: “Can we use AI to improve X?” Sure. Let’s look at it.
But “look at it” became “explore options,” which became “build a proof of concept,” which became “let’s expand scope to include Y,” which became a six-month research effort with no deliverable in sight.
At no point did anyone write down: What does done look like?
That’s the failure. Not the technology, not the team, not the ambition. The missing definition of done.
Why scoping AI is harder than scoping software
With a software feature, you can usually describe the output: a button, a screen, a report. With AI, the output is probabilistic. It “improves” something. It “helps” users. It “reduces” time-to-answer.
None of those are measurable without up-front decisions about what you’re actually measuring.
Which means AI projects require more rigorous scoping, not less. You have to decide:
- What’s the baseline we’re improving against?
- What metric moves, and by how much, before this is a success?
- What’s the cutoff — the point where we ship what we have or kill it?
- Who approves that decision?
Most teams skip this because it’s uncomfortable. Pinning a number feels like setting yourself up to fail. Vagueness feels safer.
It isn’t.
Vagueness isn’t safety. It’s a slow burn.
When a project has no definition of done, it can never fail. It can also never succeed. It just… continues. Burning sprint capacity, blocking roadmap slots, tying up your best engineers in something nobody can evaluate.
The team isn’t lazy. The stakeholders aren’t incompetent. The problem is structural: the project was started without a contract.
Fix it in the kickoff
Before any AI initiative spins up, answer these in writing:
1. What is the measurable outcome? Not “improve accuracy.” Improve accuracy from X to Y, measured by Z.
2. What’s the deadline? Not “as soon as possible.” A real date. If the answer isn’t there by then, the answer is “not yet” or “not this way.”
3. Who makes the go/no-go call? One person. Not a committee. Not “alignment.”
4. What’s the fallback? If this doesn’t work, what do we do instead? (If you have no answer, the fallback is “keep spending indefinitely.”)
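The four answers fit on a single page — or, if your team lives in a repo, in a few lines of code. Here’s one way to sketch the contract in Python; every field name, threshold, and example value below is illustrative, not a prescribed format:

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch: the four kickoff answers written down as a contract.
# All names and numbers here are made up for the example.
@dataclass(frozen=True)
class ScopeContract:
    metric: str       # what we measure ("Z")
    baseline: float   # where we are today ("X")
    target: float     # the number that counts as success ("Y")
    deadline: date    # a real date, not "as soon as possible"
    decider: str      # the one person who makes the go/no-go call
    fallback: str     # what we do instead if this doesn't work

    def verdict(self, current: float, today: date) -> str:
        """Go/no-go: success only if the target is hit; past the deadline, decide."""
        if current >= self.target:
            return "ship"
        if today >= self.deadline:
            return "kill or rescope"   # "not yet" or "not this way"
        return "continue"

contract = ScopeContract(
    metric="answer accuracy",
    baseline=0.62,
    target=0.75,
    deadline=date(2025, 6, 30),
    decider="Head of Product",
    fallback="keep the existing rule-based pipeline",
)
print(contract.verdict(current=0.68, today=date(2025, 7, 1)))  # → kill or rescope
```

The point isn’t the code — it’s that every field is forced to have a value. A blank `deadline` or `decider` won’t compile into a project.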
The uncomfortable truth
Most never-ending AI projects aren’t technically hard. They’re politically indefinite. Nobody wants to be the person who called it done — or killed it.
The solution isn’t better AI. It’s a decision-maker with a deadline.
Scope it like it can fail. That’s the only way it can actually succeed.