The Perfect Prompt Comes Last
March 22, 2026
A lot of AI advice focuses on writing the perfect prompt. Add context. Define the role. Set constraints. Structure the request. And sometimes that helps. But most of the time, my best results with AI don't come from a perfect prompt. They come from a conversation. I usually start with a rough thought, talk through the idea in natural language, and only later turn that into something structured if it needs to run on its own. For me, it usually goes like this: Conversation first. Clarity second. Structured prompt last. That's how most of my work with AI actually happens.
Transcript
00:00 — Opening
Hey, welcome to Slow Builds.
Something I see a lot when people talk about AI is advice about writing the perfect prompt.
And for a while I thought there must be a key to using it well. That perfect prompt: before you even type anything in there, there's this fear that you have to get it exactly right, that you have to structure the request the way you want the response to come out.
And honestly, that can be good advice.
I do write structured prompts myself. But the interesting thing is that it's not really how most of my AI work begins.
Most of the time, the prompt box isn't where I carefully craft instructions. It's where I start the conversation.
Because what I was finding was that I was afraid to start typing until I thought I knew exactly what I needed, what I wanted, and how I wanted it to respond to me.
01:21 — The Idea of the Perfect Prompt
But if you watch videos about AI, there's a lot of focus on that.
People show these very carefully scripted prompts. They make it feel like a prompt is a little program and you have to script it the right way.
They define the roles, the format, the tone, the constraints. And the idea is that if you write the prompt well enough, the AI will give you the exact answer you want.
And that's the scary part. You're trying to coach it into giving you what you expect.
But in my experience, that's not where the real value shows up.
02:05 — When Structured Prompts Actually Help
There are definitely situations where structure helps.
If I'm working on code that I already know, I always come back to a developer mindset, because that's what I do mostly. I won't claim AI is where I find the most value in every case, but it helps, and code is a great place for me to use it.
And so when I am working on code and I already know what I'm trying to build, I might say: act like a senior developer, review this. Or in other cases, like when I'm going through my taxes (I just did that this morning for myself and my kids) and my accountant wants things a certain way, I'll say: here are the documents, here's an Apple note with all the information for this person, this is what my accountant wants, please review and prepare it for them. Act as a CPA, act as a CFO.
And that works. When the task is very clear, those instructions help. They reduce token usage, they speed up the process, and they help avoid those endless loops of retrying and getting things wrong.
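That kind of role-plus-constraints prompt can be sketched as a tiny template. Everything here is illustrative and hypothetical: the role names, the constraint wording, and the helper `build_prompt` are made up for this example, not any particular tool's API.

```python
def build_prompt(role: str, task: str, constraints: list[str]) -> str:
    """Wrap a clear, well-understood task in an explicit role and constraints.

    This is the 'structured prompt' pattern for tasks where you already
    know what you want: state the role, the task, and hard constraints.
    """
    lines = [f"Act as a {role}.", task, "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)


prompt = build_prompt(
    role="senior developer",
    task="Review this function for correctness and readability.",
    constraints=["Keep feedback under 10 bullet points", "Cite line numbers"],
)
print(prompt)
```

The point isn't the helper itself; it's that for well-defined tasks, spelling out the role and constraints up front saves the back-and-forth.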
So yes, in certain cases you have to use structured prompts. But most of the time when I open a conversation, I have no idea what answer I want. I'm not trying to coax or coach it into telling me something; I'm trying to get to an answer.
03:54 — Starting With a Thought
So usually what I have is just a thought. A rough idea, questions that aren't formed yet.
So I'll open the chat and type something like: I've been thinking about this idea, or, does this approach make sense? Like something I did today. My kid goes to a university in the States. It's a real degree, not an online course; they just offer it online, so they're here. And while we're doing taxes, the question comes up: okay, they go to this university in the US, do they offer the T1, I think it is? It wasn't sure. It told me that in the US they'll get a different form, and gave me a structured email and a link to a document I could send to find out.
So I had no idea what I was doing.
04:53 — Natural Language
But like I said, I was just typing; I wanted to know one answer. Other times, like I mentioned in another episode, I'm on a walk with my wife and she's talking about chlorine levels in the pool. So I open the app, turn on the microphone, and we just start talking. I'm not trying to format anything in a certain way; I'm basically talking to it like another person. And I know that person is going to understand what I'm saying, and that person is going to be pretty much an expert in any field I bring up. Either way, it's just natural language. It's not structured, it's not engineered, it's just me talking through something.
05:55 — Natural Language Works Surprisingly Well
And one thing I've noticed is that the system is very good at understanding that normal conversation.
You don't have to speak like you're programming it. You can just explain what's in your head, half sentences, messy thoughts, changing direction halfway through. And it usually understands the context extremely well.
So instead of spending time trying to write the perfect prompt, I just start talking, explaining, going through a situation, interrupting it, letting it interrupt me. I'm steering it as we go.
06:32 — The Back and Forth
And from there, it becomes a dialogue that starts messy. Sometimes it's helpful, sometimes it misses my point completely, and sometimes it suggests something I hadn't even thought about, which moves me in a different direction. It helps me reframe my thought process, and I push back on it sometimes. Together (I hate saying "we," because it is just a computer, but to me it's a vast bucket of knowledge covering basically every field you can imagine) we adjust and slowly figure things out.
And you can guide it and together you can slowly make your ideas become clear.
07:46 — The Role Changes Naturally
And another thing I've noticed is that the role naturally shifts depending on the topic.
What I'm getting at is this: if we're talking about code, I didn't tell it to act like a senior developer, but it starts responding as if it's a product manager, a product owner, a senior engineer. Then I'll jump over to a finance question, ask about a stock, or review an assessment I got today, and all of a sudden it switches. Now it's my CFO. If I'm thinking about life decisions, it becomes more of a neutral sounding board. It acts like a therapist, really, whether it's family, relationships, or just personal stuff.
And I don't always have to define these roles ahead of time. It picks it up through the context of the conversation and it figures out what I'm trying to get at.
08:51 — A Place to Think Out Loud
The best way I found to describe this is that AI has become a place to think out loud.
Sometimes it feels like a coworker helping through technical problems. Sometimes it feels like an advisor. And sometimes it's just somewhere to talk through an idea until the messy parts start making sense.
In my mind, the value isn't in that first answer. The value is in the entire conversation.
09:20 — Where the Gold Nugget Appears
And most of the time, the interesting part shows up later in the thread.
So as you start that conversation, you're going deep, you start with the rough idea, you're going back and forth, you explore different angles, and eventually there's a small moment where something clicks — a clear way to frame the problem, a better way to structure the idea, that little gold nugget — and that almost never comes from that first prompt. It comes from everything that happened after you started.
You sat down, you got the fear out, and you just started typing and having that back and forth.
I always say there are no dumb questions, no stupid questions or stupid answers. That's how I treat the AI and the prompt interface. It's my safe place to ask anything I want and let my mind go crazy.
10:21 — When the Perfect Prompt Actually Matters
But there are places where that perfect prompt has to be right.
And that's when I'm building something automated. For example, when I'm creating AI agents that are going to run automatically. Agents are different from chats. A chat can be messy; you can correct it, you can steer it. But an agent has to run on its own and work the same way every single time. You need efficiency, you need accuracy, you're depending on it.
11:00 — Using Conversation to Build the Prompt
So when I'm building an agent, I'll actually use my chat conversation to figure it out.
I'll open ChatGPT if it's personal, I'll open Claude if it's work, and I'll just start talking through the process. Especially at work, with Claude, it has access to the code. Even now with ChatGPT, you have Codex and it has access to my personal code.
So we go through it, looking at lines of code, figuring out what needs to happen. We explore different ideas, we reframe the instructions, we set up guardrails and context, very precise inputs and outputs. And eventually, through testing and everything else, we arrive at that perfect prompt. It expects the same input every time, and I should get the same structured output every time. There may be deviations here and there, but it's always structured the same way.
12:00 — Why That Pattern Matters
So that pattern matters. It really matters.
It makes a big difference, and that's a place where you have to do it, because it's repeatable. You're not expecting the same answer every time, but you are expecting the answer to be formatted the same way every time, with the same fields. I'm thinking like a coder again. That's why those forms, those checklists, those constraints are good. They make a difference when the agent is running on its own without human input. It's all code-based, so it has to know what to do.
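That "same fields every time" requirement can be enforced in code. A minimal sketch, assuming the agent replies in JSON: the field names (`summary`, `status`, `items`) and the helper `validate_agent_output` are hypothetical, invented for this example; the point is failing loudly the moment the structure drifts.

```python
import json

# Hypothetical required schema for an agent's reply; any real agent
# would define its own fields.
REQUIRED_FIELDS = {"summary", "status", "items"}


def validate_agent_output(raw: str) -> dict:
    """Parse an agent's JSON reply and reject it if any expected field is missing."""
    data = json.loads(raw)
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"Agent output missing fields: {sorted(missing)}")
    return data


reply = '{"summary": "ok", "status": "done", "items": []}'
result = validate_agent_output(reply)  # all required fields present
```

The answers inside the fields can vary run to run; what this check pins down is the shape, which is what downstream code depends on.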
12:42 — The Pattern
But outside of that, my normal pattern with AI usage really is conversation first, clarity second, structured prompt last.
The prompt isn't where the thinking starts. It's where the thinking ends.
13:00 — Closing Thoughts
So when people talk about writing the perfect prompt, I sometimes think they might be focusing on the wrong step because the most useful thing AI gives me — and this is my personal opinion — isn't the first answer. It's the conversation that helps me arrive at that answer. It's the ability to think through an idea in conversation.
Those conversations, like any conversation, are rarely perfect. They're always messy. I love the back and forth. I tell my AI not to just reinforce my ideas but to push back on me, and I've been noticing it's been doing a lot more of that lately, which is perfect for me. That's where the interesting ideas show up.
Anyway, these are some of the thoughts I've been having about how I actually use this stuff, and I wanted to put them out there and see. I know I'm doing plenty of it wrong. I burn through tokens, I try different models, I've got messy agents set up. I'm spending a lot of time, a lot of effort, and at this point a lot of money. It's like a playground for me at the moment, and I'd rather be playing in it than ignoring it. That's where I'm at.
And I'd love to see what other people feel about prompts. I really want to know how other people are using AI and talking to it, interacting with it, projects, code, free chats, anything — whatever people are doing, I'm very interested to know what's going on. So thanks for watching and see you next time.