stop-prompting-start-defining-outcomes.md (02/05/2026)

Stop Prompting. Start Defining Outcomes.

Most people are using AI like a slot machine. Pull the lever. Watch the wheels spin. Hope it lands on something usable. When it doesn't, they pull harder. Better prompt. Different model. Maybe a longer system message this time.

That's not making. That's gambling.

I spent a year doing exactly this. Punching prompts at midnight. Watching the output come back generic. Blaming the model. The model wasn't the problem. My briefs were three lines long. Of course the output was three lines deep. I was bringing nothing to the table and asking the AI to bring everything, and then complaining when what came back didn't have my fingerprints on it. It couldn't. I hadn't given it any.

The makers I respect, the ones shipping work I'd pay for, aren't doing any of that. They're not refining their prompts. They're refining their briefs. They've stopped asking AI for things and started telling it what they're building. They've moved from gambling to direction.

This is the difference. And once you see it, you can't unsee it.

Outcomes, not prompts

A prompt is a wish. An outcome is a brief.

When a director hires a DP, they don't say "shoot me a video." They give a brief. Audience. Runtime. References. Tone. Three frames they want to land. What success looks like. What it absolutely cannot look like.

That brief IS the work. The shoot is downstream of it.

AI deserves the same respect. If you can't write a brief tight enough to hand to a junior, your prompt isn't going to save you. The model isn't going to fill the gap your thinking left open. It'll fill it with the average of the internet, which is exactly what generic output looks like. Average.

So stop asking. Start defining.
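The director's brief above can be sketched as structured data rather than a one-line wish. This is a minimal sketch, not a real API; the field names are illustrative stand-ins for whatever your brief actually needs to carry.

```python
from dataclasses import dataclass


@dataclass
class Brief:
    # Fields mirror the director's brief: audience, tone, references,
    # what success looks like, what it absolutely cannot look like.
    audience: str
    tone: str
    references: list[str]
    success_looks_like: str
    must_not_look_like: str

    def render(self) -> str:
        """Flatten the brief into the context block sent with every request."""
        refs = "\n".join(f"- {r}" for r in self.references)
        return (
            f"Audience: {self.audience}\n"
            f"Tone: {self.tone}\n"
            f"References:\n{refs}\n"
            f"Success looks like: {self.success_looks_like}\n"
            f"It must NOT look like: {self.must_not_look_like}"
        )
```

The point of the structure is the forcing function: every field you can't fill is a gap the model will fill with the average of the internet.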

The brief is the work. Everything downstream is execution.

When I shifted from prompting to briefing, two things happened. The output got dramatically better. And I stopped feeling like I was begging an algorithm for scraps.

A scar from earlier this year. I lost a full render cycle on a carousel because I told the AI "less wash" instead of specifying overlay colour. Black wash and white wash are different things. Reducing a black wash lightens the image; reducing a white wash darkens it. The model wasn't being stubborn. My brief was ambiguous. It made a reasonable interpretation of an unreasonable instruction, and I burned an evening as the price.

The lesson wasn't "AI can't read your mind." The lesson was "your brief is the contract." If you'd hand the same brief to a junior on set and watch them produce the wrong shot, the brief is wrong. Fix the brief. Don't blame the junior. Don't blame the AI.

Context is king

Models are interchangeable. Context is yours.

I'll say that again because it's the line that should change how you think about this.

Models are interchangeable. Context is yours.

GPT, Claude, Gemini, whatever ships next. They're commoditising. Six months from now you'll have a faster one, cheaper one, smarter one. None of that is your moat.

What's yours is everything you bring to the brief. Your references. Your voice rules. Your past work. Your taste, encoded into examples. Your industry. The decade you've spent learning what works and what doesn't.

That's the moat. That's what makes the same model produce generic output for someone else and your output for you.

[Stack diagram: lightest layer at the top, heaviest at the bottom]
MODEL: interchangeable
PROMPT: cheap
WORKSPACE & TOOLING
VOICE RULES, STYLE GUIDE
REFERENCE LIBRARY, PAST WORK
12 YEARS / INDUSTRY KNOWLEDGE
Everything below the prompt is YOUR MOAT.
Models are interchangeable. Context is yours.

A bad prompt with the right context beats a great prompt with no context, every single time. I've tested this. So has anyone who's serious.

Stop trying to write the perfect prompt. Build the context that makes any prompt land.
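One way to see what "context beats prompt" means in practice: the prompt is the last thing in the payload, not the only thing. A minimal sketch, with placeholder layer contents standing in for your real voice rules and references:

```python
# Placeholder context layers -- stand-ins for your actual voice rules
# and reference notes, which live in your workspace, not in the prompt.
VOICE_RULES = "Voice: short sentences. No hedging. British spelling."
REFERENCES = "References: campaign frames 2019-2024, overcast skin tones."


def with_context(prompt: str, layers: list[str]) -> str:
    """Prepend the stable context layers so even a lazy prompt lands."""
    return "\n\n".join([*layers, prompt])


packed = with_context("Draft the carousel copy.", [VOICE_RULES, REFERENCES])
```

The prompt changes per task; the layers barely change at all. That asymmetry is why a bad prompt with the right context beats a great prompt with none.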

The 10% trim pass

Here's what AI does to your timeline.

Old craft: 90% grind to get to 90%, 10% polish for the last bit. The grind dominates everything. Most of your day, most of your week, most of the work.

New craft: AI hands you 90% in 10% of the time. The grind compresses to almost nothing. Then you spend the other 90% of your time on the trim pass. The polish. The taste call.

[Timeline diagram]
Old craft: GRIND TO 90% fills almost the whole bar, then a short POLISH at the end.
New craft: AI HANDS YOU 90% in a sliver, then a long TRIM PASS / TASTE / POLISH. This is where your craft lives now.
The grind compresses. The polish expands. Your taste is the bottleneck now.

This sounds like a win. It is. But it has a sharp edge.

Your taste is now the bottleneck.

When the grind was the bottleneck, taste was a luxury. You'd ship okay work because you ran out of time. Now you have time. The only thing standing between your output and great is whether you can see what's wrong with the 90% AI handed you.

So train your taste. Look at more work. Cut more. Critique your own output the way you'd critique a junior's. The trim pass is where the craft lives now. Spend the time you got back on getting better at the part that's still yours.

Make it the tool you need

The default AI is built for everyone. Which means it's built for no one in particular. Definitely not you.

Your AI should bend toward your craft. For me, that's video and stills. So my workspace knows what a brief looks like for a campaign. It knows my reference library. It knows the difference between how I write for LinkedIn and how I write for Skool. It knows my voice rules cold.

None of that came pre-installed. I built it. Piece by piece. Every time it got something wrong, I noted what it missed and added the context that would have prevented it.
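That correction habit can be made mechanical. A sketch of the "note what it missed" loop, where every miss becomes a standing rule in the workspace context; the file name and format here are illustrative, not a real tool's convention:

```python
from pathlib import Path


def log_miss(context_file: Path, what_it_missed: str, rule: str) -> None:
    """Append a correction as a permanent rule in the workspace context.

    The next brief loads this file, so the same mistake can't happen twice.
    """
    with context_file.open("a", encoding="utf-8") as f:
        f.write(f"# Missed: {what_it_missed}\nRULE: {rule}\n\n")
```

Run it every time the output is wrong, and the context file becomes the checklist you'd build for a new hire.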

Your AI plus your workspace should be an extension of your existing abilities, not a replacement for them. If you're a photographer, it should know light. If you're a writer, it should know your sentence rhythm. If you're a director, it should know your visual language.

This is the part most people skip. They use the default and complain it's generic. The default IS generic. It has to be. You're the one who has to make it specific.

Pushing Frames as proof

Pushing Frames is an image-gen plugin built at Pushing Squares. It produces on-brief frames, repeatable, that look like they came out of a campaign and not out of an AI demo.

It works because of what loads behind it.

[Stack diagram: what makes this not generic]
AI MODEL: replaceable
PUSHING FRAMES PLUGIN
PS VOICE SPEC
CxN REFERENCES (decades, stills + film)
12 YEARS ON SET, CAMPAIGNS, VISUAL LANGUAGE
The model is the lightest layer. The context underneath is the heavy bit.

Twelve years on set. Campaign work for clients you've heard of. CxN's reference library, built across stills and film, decades deep. Visual language that came from being a creative director, not from reading prompt engineering posts. A voice spec locked into the workspace. Tooling shaped specifically for stills and video output, not for general-purpose images.

That's the stack. The model is the lightest layer. The context underneath is the heavy bit.

What does twelve years load into a brief? Things that don't write themselves up well in a single prompt. The way light reads on skin under a specific kind of overcast. Why a campaign frame with too much negative space looks expensive and one with not enough looks cheap. Which references are honest references and which are mood board theatre. The difference between a frame that's clean and a frame that's empty. None of that lives in any model's training data with your name on it. It lives in your eye, and the only way it gets into the AI is if you put it there, on purpose, in the brief.

And here's the meta point. Pushing Frames itself was built using this method. Outcome-defined. Briefed against industry context. Iterated on the tool while building the tool. The first version was rough. The brief for the rough version included reference frames I wanted, voice rules for how the plugin should describe itself, and a strict definition of what "on-brief" meant for the output. Each release tightened the brief and the output got closer to the campaign work that inspired it. The plugin grew up the same way a junior does. Trained on real briefs. Corrected when wrong. Trusted more over time.

[Flow loop diagram]
DEFINE OUTCOME: what does a good frame look like?
TIGHT BRIEF: industry context + style guide + voice rules
ITERATE WITH TOOL: build the next version of the tool
SHIP FRAMES: outputs feed back into the brief
The tool that helps you make tools was built using the method it teaches.

The tool that helps you make tools is itself a product of this method. That's not a marketing line. That's the actual recursion. You can pull the repo and read the briefs.

The users should be the makers

Here's the next move once you've got this far.

If you're using a tool you didn't build, you're inheriting someone else's assumptions. Their interface choices. Their defaults. Their idea of what your job is.

Sometimes that's fine. Most of the time it isn't.

The best tools are made by the people who need them most. Pushing Frames was made by a creative director, for creative directors. Aris Space was made by a maker who wanted a knowledge-sharing site that didn't feel like a Substack. The plugins coming out of Pushing Squares are made by people who shoot, edit, and ship daily.

This is the difference between AI users and AI-native makers. The users wait for someone to ship the tool they need. The makers ship it themselves, badly at first, then better.

You don't need permission. You don't need to be a software engineer. You need an outcome, a brief, and the willingness to iterate when version one is rough.

Stop waiting for someone to ship the tool you need. You're the one who needs it. You're the one who should be making it.

Grow with it like an employee

You don't fire a junior for getting it wrong twice. You teach them.

Treat your AI the same way. Encourage what it does well. Highlight what it's missing. Build a brief that compensates for its weaknesses, the same way you'd build a checklist for a new hire.

The relationship compounds. Day one it's a stranger. Month three it knows your voice. Month twelve it's drafting briefs you'd write yourself, which means you can spend your time on the calls that still need a human.

The mistake is treating it as a finished employee on day one and getting frustrated when it can't read your mind. It can't. Nobody can on day one. You wouldn't expect it from a person. Don't expect it from your AI.

What you should expect is improvement. If your output isn't getting better month over month, your context isn't getting better. Fix that.

Summary

Most of what's wrong with AI output is wrong with how people are using it. The slot machine pull. The thin prompt. The default workspace. The wait for someone else to ship the right tool.

None of that is the model's fault.

The makers I respect aren't waiting. They're defining outcomes, building context, training their taste, and shipping tools they need. The output looks like them because the work is them. The AI's the lightest layer.

If you want output that doesn't look like everyone else's, stop prompting. Start defining outcomes. Build the context that's yours. Train the trim pass. Make the tools you need.

I don't gatekeep. But you have to ask the right questions.
