Most “prompt engineering” advice is about writing the perfect incantation that gets you exactly what you want in one shot. When you’re using AI at work to get things done (rather than writing a prompt to be built into an AI product), that’s entirely the wrong goal. What you should be thinking about is Conversational Prompting.
If you watch an AI-assistant power user work through a complex creation task (like outlining a presentation) over their shoulder, it should be crystal clear that there’s much more to effective AI collaboration than a perfect prompt. Instead, it’s much more about workflow, orchestration, and conversation. And doing all that is actually much simpler and more approachable than “prompt engineering” anyway!
To write this article, I screen-recorded myself for an hour as I turned a mass of unfiltered (but reasonably well-structured) context from multiple sources (colleagues) into an outline for a presentation. I then reviewed the recording and transcript (both on my own and with the help of multiple AI assistants – so meta, right?) to capture my actual workflow, not some boring best-practices regurgitation of the AI assistant docs you can (and should) read yourself.
Setting the scene, this is my workflow for situations where I’m working on a complex and important written deliverable. This full process is overkill for many situations, so in those cases, pick and choose the techniques to apply rather than following the whole workflow. A vital ingredient here is that the majority of the raw value needs to exist already. If you’re trying to craft a deliverable where the AI does all the work, and the only context you’re providing is requirements… you’re not adding a whole lot of value, and the quality of your result is capped.
Conversational Prompting in Action
Beyond my 8 essential techniques for conversational prompting, here’s – in a nutshell – how I approach conversational prompting with AI at work:
- Write the prompt rather than engineering it – think carefully about the prompt, be thorough, and provide loads of context, but don’t get caught up in over-engineering it, since you’ll be able to chat your way through it. Be sure to tell the model to ask you follow-up questions to elicit additional context you might have forgotten to include.
- Consider meta-prompting – Using either Anthropic’s prompt improver or simply asking a separate chat to improve your prompt can be a great source of inspiration for improvement and iteration. Often, this will add a bunch of new constraints and rules, some of which you might actually like. It can also help disambiguate your language (especially useful if writing isn’t your strong suit).
- Context engineering – Curate and structure the information you have so the AI can make the best use of it; don’t just dump it all in using whatever format it’s already in. In this case, the context lived in Google Sheets, spread across two spreadsheets. I tried leveraging the Google Drive integration to pull it in natively, but that didn’t work. These assistants don’t have the best luck with Excel files (it’s a very complicated file format), so I exported each sheet as its own CSV and gave each a descriptive name.
- Take multiple shots on goal – try multiple models to get a diversity of “opinions”, and if you’ve got ideas for different prompts, try them, too. This is one of the highest-leverage aspects of this workflow. Simply asking multiple models is beyond what most people will think of doing, let alone actually bother to do.
- Have the model elicit context, then propagate it – when you provide additional context to a model by answering its follow-up questions, answer once and then copy and paste the valuable context you’re providing (question & answer style) into your other chats, too.
- Conversational refinement – use the conversation to shape your result rather than going back to the drawing board and starting from scratch over and over.
- Look for patterns & strokes of brilliance – look for recurring patterns across your conversations and consider whether each one makes for a good takeaway or a boring one. You’re also looking for the strokes of brilliance that show up like a needle in a haystack.
- Think about sources – As you see what the models came up with, connecting the dots back to your source material will help you both fact-check the interpretation and bring bigger ideas together. It also helps with context debugging when a model misinterprets the context you’re providing. So think twice before blindly dumping context into a prompt without at least reviewing it (and ideally grokking it).
- Spend your time on synthesis – In this hour-long session, I only scratched the surface of synthesis, even though I probably spent half my time on it. I could easily have spent another 2 hours editing, refining, curating, and pulling together the outputs into a single polished outline. It’s okay to leverage an LLM here, but resist the temptation to blindly delegate the thinking to AI. At the very least, go through each result and cut anything that isn’t up to scratch before providing it to a new thread for synthesis.
- Look for both nuggets & goldmines – You’re looking for value at every level of granularity. It could be the overall positioning and hook, a single section, a paragraph, or a two-to-five-word phrase that captures an idea elegantly. At each of these levels, you’ll find some things you can take wholesale, others that need a little tweak, others you can extend/merge/enhance, and others you need to totally rewrite but that gave you a rush of inspiration.
- To get really meta, coming up with this point was an example in itself. The model proposed “look for nuggets, not novels”. I liked the nuggets, but not at the expense of the bigger picture, so I dropped that and reframed the point as looking for both small and big. And because the model had suggested nuggets, I noticed the opportunity to extend the metaphor and use goldmines to convey that there are big-picture things to look for, too!
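To make the context-engineering step concrete: here’s a minimal sketch of exporting each sheet to its own descriptively named CSV before handing it to an assistant. This isn’t the exact script from my session – the `export_sheets_to_csv` helper, sheet names, and sample rows are all hypothetical stand-ins – but it shows the shape of the prep work: one clean, plainly named CSV per sheet instead of one opaque Excel blob.

```python
import csv
from pathlib import Path


def export_sheets_to_csv(sheets: dict[str, list[list[str]]], out_dir: str) -> list[Path]:
    """Write each sheet as its own CSV with a descriptive, filesystem-safe name."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    paths = []
    for name, rows in sheets.items():
        # "Survey Responses" -> "survey-responses", so the filename itself
        # tells the assistant what the file contains
        safe = "-".join(name.lower().split())
        path = out / f"{safe}.csv"
        with path.open("w", newline="") as f:
            csv.writer(f).writerows(rows)
        paths.append(path)
    return paths


# Hypothetical example data standing in for the real spreadsheets
sheets = {
    "Survey Responses": [["respondent", "answer"], ["alice", "yes"]],
    "Interview Notes": [["colleague", "note"], ["bob", "prefers demos"]],
}
paths = export_sheets_to_csv(sheets, "context")
print([p.name for p in paths])  # → ['survey-responses.csv', 'interview-notes.csv']
```

The point of the descriptive filename is that it becomes free context for the model: “survey-responses.csv” needs no explanation, whereas “Sheet1 (2).csv” invites misinterpretation.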
This isn’t to say this is the perfect or optimal way to get things done with AI, but it’s how I would have done it because it’s literally how I just did it for a real work task.
The actual best approach to conversational prompting is the one that gets you the results you need, refined through practice and adapted to your specific context. It’s not about following a rigid formula. It’s about developing a toolbox of techniques and an intuition for what works, and then being willing to iterate until you get what you need.
Most of using AI assistants at work isn’t about crafting the perfect prompt. It’s about orchestrating productive conversations with AI systems, recognizing patterns in their outputs, and skillfully combining multiple perspectives into something greater than the sum of its parts.