Most AI builds fail before they start. These four questions fix that.
Most AI builds fail before the first prompt. Not because the code is bad, but because the idea was wrong: wrong scope, overbuilt tools.
The problem with starting immediately
AI makes starting so easy that the decision to start has basically disappeared. Two years ago you'd spend a week on setup before anything worked. Now you have a first version in under an hour. That speed is real. But it creates a new problem.
Most failed AI builds don't fail because the AI produced bad code. They fail because the idea was wrong from the start. Wrong scope, wrong constraint, wrong problem entirely. The AI builds exactly what you asked for, and what you asked for wasn't what you needed.
I've done this more than I'd like to admit. Started with an idea, opened Claude, spent two hours building something, then realised mid-session that I was solving the wrong thing. The builds that worked had something in common: I thought before I prompted.
The decision matrix is just four questions. You should be able to answer all of them in five minutes. If you can't, that's the answer.
The builds where I skipped these questions each cost me two or three hours to undo. The questions take five minutes.
Question 1: friction
Not "what is the problem" but "what is causing it." These are different questions and they point to different solutions.
The problem: I don't know where my money goes. The friction: every app I tried required me to connect a bank account, set up categories, and navigate a dashboard just to see a single number I wanted. The data was there. The friction was the path to it.
If the friction is navigation, you don't need a smarter app. You need a faster one. If the friction is that data isn't being captured, that's a different build. If the friction is that you have to think too hard to use the thing you already have, that might be a workflow problem, not a software problem.
Ask yourself: what is the specific moment that is painful? Not the general situation. The specific moment when you hit the wall and stop.
Real example: the Spending Reality Check came from a friction question. The data existed. The path to it was the problem.
Question 2: engine
What is the smallest version that's still useful?
There's always a version that does everything. Accounts, sync, categories, charts, reminders, export, sharing. That version takes months. The question is whether a much smaller version would remove 80% of the friction.
For the spending tracker, the smallest useful version was three numbers on a screen: this week, last week, the week before. Plus a form to log a new entry. No categories, no charts, no history beyond three weeks. Just that comparison. It turned out to be all I actually needed.
If you can't describe the minimum useful version in one sentence, the scope is not clear yet. That one sentence is what your first prompt is built from. If you don't have it, your first prompt will be vague, and vague prompts produce the wrong first version every time.
Real example: for the Should I Buy This tool, the engine answer was one sentence: a quick verdict tool, nothing stored. That sentence became the prompt.
Question 3: restraint
This is the question most people skip. It is also the most important one.
The AI's default is to add. More features, more views, more options. Without an explicit constraint, every iteration drifts outward. By the third session you have something that does twelve things and none of them well.
The restraint question forces you to draw a line before you start. What categories of feature are outside scope? What would make this tool worse, not better? What would a user reasonably expect that you're deliberately not giving them?
Write the answer down before you open Claude. "This tool will never have accounts." "This tool will never store data on a server." "This tool will never show a chart." Then include it in the first prompt. The AI will still try to add things anyway. But the explicit boundary gives you something to point to when removing them, and it makes the removal a decision rather than a debate.
Real example: for the Story World Reaction Engine, the restraint answer was: no persistence, no character sheets, no branching. The AI kept suggesting all three. Having the constraint written down made cutting them fast.
Question 4: decision
Should this even be built?
Sometimes the answers to the first three questions reveal that the problem doesn't need a new tool. It needs a different habit, or a better use of something that already exists.
That's worth knowing before you build anything.
A few signals that the answer is no: the friction is minor and easily tolerated, the minimum useful version still requires a complex data model or a backend, or the problem is rare enough that a general-purpose tool already handles it well. Building something with AI is fast. But it is still time, and the goal is a tool you actually use, not a tool you built.
If the answer to any of the four questions is "I'm not sure," don't start yet. Figure that out first. The Idea Clarifier is useful at exactly this stage: it asks a few focused questions and forces the clarity these four questions are looking for.
Real example: the process walkthrough shows what happens when all four answers are clear before the first prompt. The build takes one session instead of three.
A real example: the spending tracker
Before building the Spending Reality Check, I ran it through all four questions.
Friction: I wanted to know whether my spending was drifting week by week. No app showed that comparison directly. I had to scroll through months of transactions to piece together what I wanted. The friction was the navigation, not the data.
Engine: the minimum useful version was one screen with three weekly totals and a form to add an entry. That was the whole thing.
Restraint: no bank connections, no accounts, no categories, nothing that requires setup. One file. Opens in a browser. Data stays local. I wrote this down before writing the first prompt.
Decision: the friction was real and frequent, the minimum version was genuinely small, and the constraint made it achievable in a weekend. Build it.
The session took about two hours. The tool has been in daily use since. The full process is in the build case study.
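To make the engine answer concrete, the core of "three weekly totals from a list of logged entries" fits in a couple of small functions. This is an illustrative sketch, not the tool's actual code: the entry shape (`{ amount, date }`), the function names, and the Monday week boundary are all assumptions, and the real browser version would also persist entries (for example in localStorage) and render a form.

```javascript
// Start of the week (Monday, 00:00 local time) for a given date.
// Assumption: weeks run Monday to Sunday.
function weekStart(date) {
  const d = new Date(date);
  d.setHours(0, 0, 0, 0);
  const day = (d.getDay() + 6) % 7; // shift so Monday = 0, Sunday = 6
  d.setDate(d.getDate() - day);
  return d;
}

// Sum entries into [thisWeek, lastWeek, weekBefore] totals.
// Entries older than three weeks are deliberately ignored: that is
// the restraint answer (no history) expressed in code.
function weeklyTotals(entries, now = new Date()) {
  const starts = [0, 1, 2].map(i => {
    const s = weekStart(now);
    s.setDate(s.getDate() - 7 * i);
    return s.getTime();
  });
  const totals = [0, 0, 0];
  for (const { amount, date } of entries) {
    const bucket = starts.indexOf(weekStart(date).getTime());
    if (bucket !== -1) totals[bucket] += amount;
  }
  return totals;
}
```

Everything else in the tool is presentation: read the stored entries, call `weeklyTotals`, and show three numbers. That is what a one-sentence engine answer buys you.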
When the matrix says build it anyway
Not every build needs to pass all four questions cleanly. Sometimes you're building to learn, or to test whether an idea has legs. That's fine. The matrix isn't a gate. It's a way to go in with eyes open.
The difference between a good use of a weekend and a frustrating one is usually knowing beforehand which kind of build you're doing. If you're learning, expect to throw it away. If you're building for use, answer all four questions first.
Every build on this site passed all four questions before the first prompt was written. That's not a coincidence. See the builds.