AI training is having a moment in Hong Kong. Every consulting firm, tech vendor, and independent freelancer seems to be offering an “AI workshop” right now. The pitch is always compelling: get your team up to speed on AI, fast.

The problem is that most of what’s being sold isn’t training. It’s a demo with a certificate at the end.

I’ve seen this pattern play out repeatedly. A company runs a full-day AI session. Employees sit through slides about ChatGPT features, maybe try a few prompts on their laptops. Everyone leaves feeling energized. Four weeks later, nobody has changed how they work. The tools that were supposed to save hours a week are sitting unused.

That’s not a failure of AI. That’s a failure of training design.

If you’re an L&D manager or an executive evaluating AI training for your team in Hong Kong, here’s what I’d actually look for — and what I’d walk away from.


5 things a good AI training program gets right

1. Participants produce something real, not just a certificate

The most reliable signal of good training is what participants walk away holding at the end. Is it a certificate, or is it something they’ll actually use?

The best programs end with each participant producing a tangible deliverable — a prompt library tailored to their job function, an automated workflow they built during the session, a report that took them four hours last week and now takes twenty minutes. If the final output is a printed certificate and a feedback form, ask harder questions about what the training actually produced.

2. Non-technical focus — no coding required

The vast majority of Hong Kong corporate teams don’t need to understand transformer architecture or fine-tuning. They need to know what to open on Monday morning and exactly what to type.

Look for programs that explicitly say “no coding required” and mean it. The best AI training treats the tools as professional instruments: you don’t need to understand how Excel calculates a formula to use a spreadsheet effectively, and the same holds here. If a trainer spends significant time on how large language models work under the hood, they’re probably filling time rather than building skill.

3. English-language instruction

This is a practical detail that matters more than people acknowledge. Hong Kong’s business environment is bilingual at best and Cantonese-dominant in most corporate settings. For multinational companies, regional teams, and executives whose work lives in English, English-language AI fluency (knowing how to prompt effectively, review outputs critically, and communicate results) is itself a competitive skill. Make sure both the instruction and the materials actually support it.

4. Small cohort sizes

AI training that works requires participants to work on their own real tasks, not follow along with generic examples. That’s very hard to do in a room of forty people.

The most effective formats keep cohorts small — ideally five to ten people — so there’s room for participants to bring their actual work into the session. A participant who works in compliance at a financial institution should be building prompts for compliance tasks, not practicing how to write a vacation itinerary. If the session size doesn’t allow for that level of specificity, the learning won’t transfer.

5. Post-training evidence

Ask any trainer you’re considering a simple question: “What do your graduates do differently six weeks after the program?”

If they can’t answer that with specifics — real examples, time savings, workflow changes — they haven’t been measuring it. Trainers who run effective programs are obsessive about this question, because it’s the only thing that proves the training worked. Vague answers about “participants feeling more confident” should put you on alert.


Red flags worth walking away from

“We cover 15 AI tools in one day.” Breadth without depth doesn’t stick. A team that gets a fifteen-minute tour of fifteen tools will use none of them. Depth on two or three tools that fit your actual workflows is worth far more.

Generic examples throughout. If the demo tasks in a workshop are writing a cover letter or planning a vacation, the trainer hasn’t thought about your industry. Corporate professionals need to see their specific pain points addressed — drafting client reports, summarizing meeting notes, reviewing contracts, building data summaries. Generic examples signal a generic program.

No before/after workflow comparison. Good training can show you, concretely, how a specific task changed. If a trainer can’t walk you through a real example — this task took 45 minutes, it now takes 8, here’s exactly why — they haven’t measured what their training produces.

Feature-focused curriculum. “How to use ChatGPT” is not the same as “how to integrate AI into your weekly workflows.” Features change constantly; habits and frameworks persist. A curriculum organized around tool features rather than work outcomes is training for the demo, not for Monday morning.


What I do differently

My AI bootcamp is designed around a simple rule I call the 10/10 rule: every participant should leave each session with something they’ll use within the next ten days and that saves them at least ten minutes per use.

The program runs over four weeks in small cohorts of five to ten people. Every session revolves around real tasks: participants bring their actual work, and we build prompts, workflows, and habits around it. The curriculum rests on a framework I call CPR (Context, Prompt, Refine), which gives participants a repeatable method they can apply to any task, not just the ones we cover in class.
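To make CPR concrete, here is a hypothetical pass at one common task. The scenario and wording below are illustrative, not taken from a real client session:

Context: “You are helping a relationship manager at a Hong Kong bank. Below are my raw notes from a 45-minute client meeting about portfolio rebalancing.”

Prompt: “Turn these notes into a follow-up email covering three things: decisions made, open questions, and next steps with owners and deadlines.”

Refine: “The open questions are too vague. Name the specific funds we discussed, and keep the whole email under 150 words.”

The same three steps apply whether the task is meeting notes, a client report, or a contract summary, which is what makes the method transfer beyond the examples covered in class.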

I’ve run this for teams at Allianz and through HKU SPACE, among others. The metric I care about isn’t participant satisfaction scores. It’s whether their workflows actually changed.


Questions to ask any trainer before you book

Before signing off on any AI training program, I’d ask these directly:

  • “Can you show me a before/after comparison of a real participant’s workflow?”
  • “What do you measure to determine whether the training worked?”
  • “What’s your plan if my team isn’t using the tools two weeks after training?”
  • “Is this content tailored to our industry, or is it the same program you run for everyone?”

The answers will tell you quickly whether you’re talking to someone who has thought seriously about behavior change, or someone who has built a good slide deck.

If you need to make the case internally first, here’s a prompt to help you draft the business case for your own manager or leadership team:

Context: I am a [your role — e.g., L&D Manager, Head of Operations] at a [industry] company in Hong Kong with approximately [team size] employees. I want to propose a structured AI training program for a team of [number] people. The team's main work involves [briefly describe their day-to-day — e.g., "client reporting, stakeholder communication, and proposal writing"].

Purpose: I need to convince [your decision-maker — e.g., "my CFO," "the regional HR director"] that investing in AI training will produce measurable productivity gains, not just awareness.

Request: Write a concise internal proposal (under 400 words) that:
1. States the business case in terms of time saved per person per week
2. Outlines what a good program looks like (format, duration, cohort size)
3. Describes how success would be measured
4. Ends with a clear recommended next step

Tone should be professional and direct. Avoid jargon. Lead with ROI.

Adjust the numbers to your situation and you’ll have a first draft to work from in under three minutes.


AI training in Hong Kong doesn’t have to be expensive theatre. But the default — a one-day session, a guest speaker, a certificate — usually is. Hold the programs you evaluate to a higher standard, and you’ll get meaningfully different results.

If you’d like to discuss whether my bootcamp is a fit for your team, here’s more information.