I tried to build a story engine. It kept turning into something bigger. Here's what actually shipped.
I wanted a simple tool that generates world reactions for writers and game designers. The AI had other ideas. The scope shrank three times before it became useful.
This is a real AI build created in one weekend using Claude. It generates story world reactions to a single event from multiple perspectives. Here's what worked, what failed, and what got cut.
No character simulator. No persistence between sessions.
Where it came from
The idea came from a tabletop RPG session. A player made a decision that would have consequences across several factions in the world, and I needed to figure out how each group would react before the next session. Writing it all out manually takes time. More importantly, it breaks the creative flow. You stop thinking about the story and start doing administrative work.
I wanted a tool that could take a brief description of a situation and generate plausible reactions from multiple perspectives quickly. Not a story generator. Not a character simulator. Just a way to answer "how would different parts of this world respond to what just happened?" without spending an hour writing it by hand.
The use case is narrow on purpose. Writers planning scenes, game designers exploring consequences, screenwriters testing cause and effect. Anyone who needs to quickly see a scenario from several angles before deciding what to write.
Before starting the build, I ran the idea through the build decision matrix. The friction was clear, the minimum version was genuinely small, and the restraint question had an obvious answer: no persistence, no state, reactions only. That clarity is why the session stayed on track.
What it actually does
You describe three things: where the story is set, what event just happened, and who is affected or present. The tool sends that to Claude and returns short reactions from several perspectives: the protagonist's immediate response, how nearby characters read the situation, and how more distant groups or factions might hear about it and respond.
Each reaction is two or three sentences. The format is deliberately short. The goal is not to write the story for you. It's to give you material to react to. A direction, a tension, an implication you hadn't thought of. You take what's useful and ignore the rest.
That's the whole tool. It does one thing. Nothing is saved. Nothing is tracked. Each session starts fresh. The pattern is the same as the Spending Reality Check: single-session input, deliberate restraint, no persistence by design.
The constraint: this tool generates reactions to a moment, not a story. No world state, no persistence, no character memory. One input, one output. That's it.
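The plumbing behind that one input and one output is a single round trip. Here's a minimal sketch, assuming the public Anthropic Messages API; the endpoint, headers, and model id are my assumptions, not details from the build:

```javascript
// Build the request body separately so it can be inspected without a network call.
function buildRequestBody(prompt) {
  return {
    model: "claude-3-5-sonnet-20241022", // placeholder model id, an assumption
    max_tokens: 1024,
    messages: [{ role: "user", content: prompt }],
  };
}

// One request in, one formatted text block out. No state is kept anywhere.
async function getReactions(prompt, apiKey) {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-api-key": apiKey,
      "anthropic-version": "2023-06-01",
    },
    body: JSON.stringify(buildRequestBody(prompt)),
  });
  if (!res.ok) throw new Error(`API error: ${res.status}`);
  const data = await res.json();
  return data.content[0].text; // the labelled reactions, ready to display
}
```

The lack of persistence isn't an accident of the sketch: there is genuinely nothing else to the backend.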
The first prompt and what came back
The first prompt asked Claude to generate a story world reaction engine as a single HTML/JS page with a form that accepts setting, event, and characters, then returns formatted reactions from multiple perspectives.
The first version worked. It also came back with a character sheet builder, a world state tracker, a session history panel, and the ability to save and load worlds as JSON files. None of that was in the prompt. The AI generated a feature-complete creative writing assistant, which is a completely different tool from what I wanted.
The second prompt was: remove everything except the input form and the reaction output. Three fields in, formatted text out. No sidebar, no history, no persistent state.
That produced the actual first version.
What failed
The first working version failed immediately. I asked for reactions and got three full narrative paragraphs per perspective. Wall-to-wall prose. Interesting to read, completely unusable as a tool. The whole point was a five-minute scan, not another document to process. One prompt change fixed it: "two to three sentences per perspective, no more." That constraint should have been in the first prompt. It wasn't.
The second failure was the labels. "The protagonist reacts..." and "Secondary characters notice..." are stock template phrases. They made every output feel generic regardless of how specific the scenario was. Replacing them with user-defined fields fixed it: you type in your actual character and faction names, and the output immediately feels like it's about your world instead of any world.
Both fixes were prompt changes. The interface didn't change at all.
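To make that concrete, here's a sketch of a prompt builder with both fixes in place. The wording is illustrative; the post doesn't reproduce the actual prompts:

```javascript
// Illustrative prompt builder showing both fixes:
// 1. the length constraint stated in the prompt itself, and
// 2. user-defined perspective labels instead of stock template phrases.
function buildPrompt(setting, event, perspectives) {
  return [
    `Setting: ${setting}`,
    `Event: ${event}`,
    "",
    "Write a reaction to the event from each of these perspectives:",
    ...perspectives.map((name) => `- ${name}`),
    "",
    "Two to three sentences per perspective, no more.", // fix 1: length
    "Label each reaction with the perspective's name exactly as given above.", // fix 2: labels
  ].join("\n");
}
```

Because the perspective names come straight from the user's input fields, the output is anchored to their world with no extra interface work.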
What got cut and why each cut mattered
Character persistence
The AI kept suggesting a way to save character profiles between sessions. Cut because it turns a reaction tool into a character manager. Those are different tools. Character management is a bigger problem and a bigger build. This tool is for a single moment, not an ongoing campaign.
Branching story trees
An early version had a "follow up" button that generated the next scene from the reactions. Cut because it pulls the user toward generating content rather than using the reactions to make their own decisions. The tool's job is to show options, not choose them. Once it starts generating the next scene, it's a story writer, not a reaction engine, and the user stops thinking.
World templates
There was a draft version with preset templates: fantasy kingdom, sci-fi colony, modern thriller. Seemed useful. Cut because templates assume a setting and immediately constrain how you describe your world. The whole point is that your world has specific details that a template doesn't know about. Removing templates forced users to describe their actual world, which produced better reactions.
Export and save
Cut for the same reason as in other builds: export creates the impression that the output matters beyond the current session. For most uses it doesn't. You read the reactions, take what's useful, close the tab. If something is worth saving, copy it. Adding an export button adds surface area without adding value.
Faction relationship tracking
This was the feature that would have turned the project into a six-month build. A relationship graph between factions, updated as events occur, so reactions would reflect history and tension over time. It's a genuinely interesting idea. It's also a completely different product. Cut immediately once it became clear what it would require.
What shipped
A single HTML file. Three input fields: setting, event, perspectives. One button. The output is a short block of text with clearly labelled reactions, each two to three sentences long. The whole interface fits on one screen without scrolling.
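Displaying the labelled output needs only a small amount of client-side code. A sketch, assuming each reaction starts with its label on its own line; that format is my assumption, not a documented detail of the build:

```javascript
// Split the model's formatted text into labelled sections for display.
// Assumes each reaction begins with "Label:" on its own line, where the
// labels match the names the user typed in.
function splitReactions(text, labels) {
  const sections = {};
  let current = null;
  for (const line of text.split("\n")) {
    const label = labels.find((l) => line.startsWith(`${l}:`));
    if (label) {
      current = label;
      sections[current] = line.slice(label.length + 1).trim();
    } else if (current && line.trim()) {
      sections[current] += " " + line.trim(); // continuation of the same reaction
    }
  }
  return sections;
}
```

With reactions capped at two or three sentences each, the resulting sections fit on one screen without scrolling, which is the whole interface.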
The build took one weekend session. The constraint (no persistence, no world state, reactions only) was defined before the first prompt. That definition is why the session stayed on track after the first version came back with too many features.
It works well for exactly the use case it was built for: spending five minutes exploring how a story world responds to a single event before deciding what to write.
What I'd do differently
The perspective labels (the names of the characters and factions providing reactions) took too long to get right. I should have decided the input model before writing the first prompt: user-defined labels rather than fixed generic ones. That would have been two prompt iterations shorter.
I'd also be more explicit in the first prompt about output length. "Two to three sentences per perspective" should have been in the first prompt, not the second. The AI defaults to comprehensive unless you constrain it. Length is a constraint worth stating upfront.
If you want to build something like this
The pattern here is one that works for a lot of creative tools built with AI: single-session input, structured output, no persistence. It's fast to build, fast to use, and the constraint forces you to define what the tool actually does before you start adding to it.
The hard part is resisting the features that sound useful but belong in a different tool. Character persistence sounds good until you realise it's a six-month project. Branching trees sound good until you realise they change what the tool is for. The cuts listed above each represent a moment where the build could have gone sideways.
Before you build a creative AI tool, the decision matrix is worth running through. The question "what should this never do?" is particularly important for tools that interact with AI, because the AI will always suggest expanding scope and adding features. The constraint has to be in your head before you write the first prompt, not negotiated after the first version comes back.
The process walkthrough covers how to structure that first session if you're building something like this for the first time.