Every week in my bootcamp, someone shares their screen and shows me a ChatGPT response that made them want to throw their laptop out the window.

Last month it was a senior manager from a logistics firm in Kwun Tong. She’d asked ChatGPT to “write a report on customer trends.” Back came four paragraphs of polished, confident, utterly useless MBA-speak. Phrases like “evolving consumer behaviours in the digital landscape” and “strategic alignment with market dynamics.” Nothing about her industry. Nothing about her customers. Nothing she could actually use.

She looked at me and said what I hear almost every week: “This AI is useless.”

I asked her one question: “If you gave this exact same brief to a new hire on their first day — someone brilliant, well-read, but who has never worked in your company — what would you expect back?”

She paused. Then she laughed. Because she already knew the answer.


Here’s the reframe I teach in week one of every programme I run: AI is not a magic oracle. It is a brilliant junior employee who knows absolutely nothing about your company, your clients, your industry context, or your goals — unless you tell them.

That junior employee might have read every business book ever written. They might speak five languages and have a first-class degree. But on day one, if you hand them a Post-it note that says “write a report on customer trends,” you will get exactly what you deserve: something generic, something safe, something that technically answers the question while being completely useless to you.

The tool is not the problem. The brief is.

Think about what it actually means to manage a junior well. A good manager doesn’t say “write a report.” A good manager says: “I need a two-page summary of our Q3 customer retention trends for the CEO presentation on Friday. Focus on the three customer segments with the highest churn. Use plain language — she doesn’t want jargon. Flag any segment where the trend reversed month-on-month and tell me why you think that happened.”

That brief takes about ninety seconds to write. The output it produces is categorically different. The same principle applies, word for word, to how you use AI.


So what does good AI management actually look like in practice?

It starts with context. Before you make any request, give the AI a clear picture of who you are and what the situation is. Are you a marketing manager at a mid-size retail bank in Hong Kong? Are you preparing a pitch for a client who has been loyal for eight years but is now shopping around? Say that. AI cannot infer what it has not been told, and it will never ask for clarification unless you specifically invite it to.

Then give it purpose. What is this output actually for? Who will read it, and what decision will it help them make? A summary written for a CFO looks different from the same summary written for a front-line team leader. A summary that needs to prompt action is different from one that needs to justify a decision already made. These are not subtle differences — they change everything about tone, structure, and emphasis.

Then give it constraints. How long? What format? What tone — formal, direct, conversational? What should it leave out? Constraints are not limitations. They are gifts. They give the AI something to aim at. When you say “keep it under three hundred words and avoid technical jargon,” you are not restricting the AI — you are freeing it from having to guess.

Finally — and this is the one most people skip — review and redirect. Do not accept the first draft as the final output. No one expects a junior’s first draft to be perfect. Read it like a manager. Ask yourself what is missing, what is wrong in tone, what needs to be cut. Then go back and tell the AI exactly that. Iteration is not a sign that the AI failed. It is the normal process of getting to something good.


Let me make this concrete. Here is a prompt I see constantly in workshops:

“Write a marketing email for our new product.”

Here is what that looks like when someone applies the framework above:

“You are writing on behalf of a Hong Kong-based private bank. We are launching a new FX hedging service aimed at SME clients who export to Europe. The email is going to existing clients who have been with us for at least two years. Tone should be warm but professional — these are long relationships. The goal of the email is to get them to book a 20-minute call with their relationship manager, not to explain the full product. Keep it under 200 words. Do not use the words ‘innovative’ or ‘cutting-edge.’ End with a single, clear call to action.”

The difference in what comes back is not marginal. The first prompt produces a template you could find on any marketing blog. The second produces something that sounds like it was written by someone who actually knows the client relationship, the product category, and the goal of the communication. It still needs editing — it always does — but it gives you something real to work with.

The work you do in the brief is the work you save in revisions.

Try this yourself. Take any task you’d normally give AI in one vague sentence. Rewrite it using this template, then compare the outputs:

You are helping a [your role] at a [industry] company in Hong Kong.

Situation: [Describe what's happening — the relationship, the context, what's at stake]

Goal: [What should the reader think, feel, or do after seeing this output?]

Deliverable: [Format, length, tone, and anything to avoid]

Here is what I need: [Your actual request]

Paste the vague version into ChatGPT first. Screenshot the result. Then paste the detailed version. The difference will be obvious inside ninety seconds.
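If your team drives AI through an API or internal tooling rather than the chat window, the same template can be captured as a small helper function so every request goes out with a proper brief attached. This is a minimal sketch — the function name and the example values are illustrative, not real client data, and the fields simply mirror the template above:

```python
def build_brief(role, industry, situation, goal, deliverable, request):
    """Assemble a structured prompt from the five template fields above."""
    return (
        f"You are helping a {role} at a {industry} company in Hong Kong.\n\n"
        f"Situation: {situation}\n\n"
        f"Goal: {goal}\n\n"
        f"Deliverable: {deliverable}\n\n"
        f"Here is what I need: {request}"
    )

# Illustrative values only — swap in your own context.
prompt = build_brief(
    role="marketing manager",
    industry="retail banking",
    situation="A client of eight years is being courted by a competitor.",
    goal="Get them to book a 20-minute call with their relationship manager.",
    deliverable="An email under 200 words, warm but professional, "
                "with one clear call to action.",
    request="Draft the email.",
)
print(prompt)
```

The point is not the code itself — it is that the brief becomes a reusable structure instead of something you improvise from scratch each time.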


Most people come to AI looking for a button they can press. They want to type something vague and receive something perfect. When that does not happen, they conclude that AI is overhyped, or that it works for other industries but not theirs, or that they are just not “tech people.”

None of that is true. What is happening is a mismatch in expectations.

The professionals who extract genuine value from AI are not the ones with the most technical knowledge. They are the ones who treat AI as a capable but inexperienced collaborator — and who invest the time upfront that makes the collaboration actually work. Two minutes on a proper brief routinely saves two hours of back-and-forth, revision, and frustration.

I watch this shift happen in almost every cohort I run. It usually lands in week two. Something clicks. People stop fighting the tool and start managing it. Once they make that switch, they do not go back.


In my experience working with corporate teams across Hong Kong, the biggest barrier to AI adoption is not technical literacy. It is expectation management. There is a widespread assumption that AI should somehow already know what you want — that context should be inferred, that intent should be read, that the tool should compensate for a vague brief.

The professionals who break through that barrier are, almost without exception, good managers and good communicators. They already know how to brief people clearly. They already understand that the quality of output depends on the quality of input. When I show them that the same instincts apply to AI, it is not a leap. It is a recognition.

If you are good at telling a junior what you need, you can be good at AI. That is the whole skill. Everything else is practice.


The AI In Action Bootcamp teaches this as a foundational skill in week one — not as a technical lesson, but as a management one. Teams of five to ten are welcome, and the programme is designed specifically for Hong Kong professionals who want to use AI to do real work, not just experiment with it. If your team has started using AI tools but has not seen the results you expected, the brief is usually where we start.