The most common thing I hear before a workshop starts, usually while people are still finding their seats, is some variation of: “I tried ChatGPT and it gave me garbage.”

I heard it again a few months ago at HKU SPACE. A participant — a communications manager at a mid-sized firm — had asked ChatGPT to draft an email to a client. What came back was so generic it could have been written for anyone, about anything. She’d dismissed the whole tool as useless after that one attempt and hadn’t opened it since.

I asked her to read me the prompt she’d used. It was four words: “Write a client email.”

That’s not a prompt. That’s a wish. And the problem isn’t ChatGPT — it’s that we’ve all been trained to think that because AI sounds smart, it must also be psychic. It isn’t. It needs to be briefed, the same way a person does. Once I showed her how to use what I call the CPR Framework, she rewrote the prompt in about two minutes and walked out of the session with an email she actually sent that afternoon. That’s the moment I knew CPR was worth teaching properly.

What CPR Stands For

CPR is three things: Context, Purpose, Request. It sounds simple because it is. Most frameworks that promise to improve your prompting are either too complicated to remember or too vague to actually apply. CPR is short enough to recall mid-meeting and specific enough to change what you get back.

Context is everything ChatGPT needs to know about you and your situation before it can do good work. Think of it as the briefing you’d give a talented new hire on their first day. Who are you? What’s your role? What’s the relationship or situation that this task exists within?

Instead of saying “write an email,” you might say: “I’m a marketing manager at a retail chain in Hong Kong. I’m writing to a long-term client — we’ve worked together for three years — who just missed a payment by 30 days. The relationship is good but I need to address this professionally without damaging it.” That context completely changes what the AI will produce. It knows the stakes, the tone, and the relationship dynamic. You haven’t given it more words — you’ve given it the right words.

Purpose is the outcome you want, not just the task you’re assigning. This is the piece most people skip, and it’s probably the most important of the three. “Write a report” tells the AI what to produce. “Write a report that will convince my CEO to approve budget for a new CRM system” tells the AI what the report needs to accomplish. That single shift changes the tone, the structure, where the emphasis lands, what evidence gets foregrounded. Purpose is the difference between a document and an argument.

Request is the specific deliverable — format, length, style, constraints. Don’t leave this to interpretation. If you need a professional but warm email under 200 words, say that. If you want three options with different tones so you can pick the best one, say that too. The more concrete the request, the less the AI has to guess.

What This Looks Like in Practice

Here’s the difference between a typical prompt and a CPR prompt, using a scenario common in Hong Kong financial services. Imagine you’re a senior analyst who needs to update stakeholders on a delayed project.

Without CPR:

Write a stakeholder update email about a project delay.

What you get back will be a bland, five-paragraph template with placeholder text. It’ll technically be an email. You won’t send it.

With CPR:

I’m a senior analyst at an asset management firm in Hong Kong. I’m writing to a group of eight internal stakeholders — a mix of senior management and cross-functional team leads — about a two-week delay in our client onboarding system upgrade. The delay is due to a third-party vendor issue, not internal error, but stakeholders are already anxious about the timeline. The purpose of this email is to reassure them that we have a revised plan in place and that the final delivery date won’t shift further. Keep it under 250 words, professional in tone but not cold, and end with a clear next step and date.

That prompt takes sixty seconds to write. What comes back is something you’ll likely edit for five minutes and then send. That’s the trade-off CPR is asking you to make — a little more thought upfront, dramatically less rework on the back end.

Try it now. Open ChatGPT or Claude and paste this template. Fill in the brackets with your own situation — any real task you have this week:

Context: I am a [your job title] at a [industry] company in Hong Kong. [One or two sentences about the specific situation — who's involved, what the relationship is, what's at stake.]

Purpose: The goal of this output is to [specific outcome — what should the reader think, feel, or do after seeing it?].

Request: Please write a [document type — email / summary / proposal / report section]. It should be [length], in a [formal / direct / conversational] tone, and should not [anything to avoid]. [Any other formatting requirements.]

Compare what comes back against what you’d have gotten from a two-sentence prompt. That difference is CPR.
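If you script your AI workflows rather than pasting into a chat window, the same template can be assembled programmatically. This is a minimal illustrative sketch, not part of the CPR material itself; the function and field names are hypothetical:

```python
# Hypothetical helper: joins the three CPR components into one prompt string.
# The name build_cpr_prompt and its parameters are illustrative, not from any library.

def build_cpr_prompt(context: str, purpose: str, request: str) -> str:
    """Assemble a CPR-structured prompt: Context, Purpose, Request."""
    return (
        f"Context: {context}\n\n"
        f"Purpose: {purpose}\n\n"
        f"Request: {request}"
    )

# Example drawn from the stakeholder-update scenario above.
prompt = build_cpr_prompt(
    context=(
        "I'm a senior analyst at an asset management firm in Hong Kong, "
        "writing to eight internal stakeholders about a two-week delay "
        "in our client onboarding system upgrade."
    ),
    purpose=(
        "Reassure stakeholders that a revised plan is in place and the "
        "final delivery date won't shift further."
    ),
    request=(
        "A professional but warm email under 250 words, ending with a "
        "clear next step and date."
    ),
)
print(prompt)
```

The resulting string can be sent to whichever model you use; the point is only that Context, Purpose, and Request stay three distinct, deliberate inputs rather than one vague sentence.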

Why It Works

AI models like ChatGPT are probabilistic. They don’t look up answers; they predict, word by word, the most likely continuation of your prompt, based on patterns learned from vast amounts of text. The more specific and detailed your input, the narrower the range of plausible continuations the model is choosing among. Vague input opens the probability space to an enormous range of plausible responses, and most of them won’t be what you wanted.

But you don’t need to think about it in those terms. The simpler way to understand it: briefing an AI is exactly like briefing a new employee. A vague brief gets vague work. A detailed brief gets exactly what you need. The AI isn’t making decisions about what matters — you are, by what you include in the prompt. If you don’t tell it what matters, it guesses. And its guesses are average by design.

The Three Mistakes People Keep Making

The first is skipping Context because they assume the AI already knows their industry or role. It doesn’t carry assumptions between sessions. Every conversation starts from zero. You have to establish who you are and what world you’re operating in, every time.

The second is stating a vague Purpose. “Make it professional” or “make it good” aren’t purposes — they’re hopes. A real purpose is directional: what should the reader think, feel, or do after engaging with this output? If you can’t answer that, neither can the AI.

The third is making a Request without constraints. “Write an email” leaves format, length, tone, and audience entirely open. That’s too much ambiguity. Close it down. Even rough constraints — “around 150 words, formal but approachable, ending with a clear ask” — will dramatically tighten what comes back.

Why This Is the Starting Point, Not the Whole Game

At my AI In Action Bootcamp, students don’t leave a session unless they’ve built something they’ll actually use Monday morning. That’s a real constraint I hold myself to, because it’s too easy to run a workshop full of impressive demos that nobody applies when they’re back at their desk.

CPR is what makes that possible. Without a solid prompting foundation, every exercise becomes a frustrating guessing game. With it, people start producing real, usable outputs within the first hour — and then we can spend the rest of the time building on top of that. CPR is the starting point, not the ceiling. Once it becomes instinctive, you can start stacking techniques: role-assignment, chain-of-thought prompting, structured output formatting, and much more.

But none of that matters if you can’t write a good prompt first.

If you want to learn CPR and 50+ other prompting techniques in person, I run a 4-week AI In Action Bootcamp in Hong Kong. It’s designed for working professionals who want practical skills they can apply immediately — not a survey of AI trends, but actual tools for your actual job.

The communications manager who sent that client email? She came back for the full bootcamp two weeks later.