
Why Most AI Training Is Useless

Nerdsmith Team

The RM15,000 Workshop That Changed Nothing

A manufacturing company in Penang spent RM15,000 on a two-day AI workshop for their management team last year. Fourteen people attended. They learned about large language models, neural networks, transformer architecture, and the history of artificial intelligence from ELIZA to GPT-4. Three months later, not a single person in that company was using AI in their daily work. Zero.

The workshop facilitator had been excellent — engaging, knowledgeable, well-reviewed on the training platform. The slides were polished. The venue was nice. Lunch was good. But the training was useless. Not because of the facilitator or the content quality. Because of what the training did not do: teach anyone how to apply AI to the specific work sitting on their desk the next Monday morning.

This is not an isolated story. I hear versions of it every month from business owners across Southeast Asia. They invest in AI training, their staff sit through it politely, and then everyone goes back to doing things exactly the way they did before. The money is spent. The impact is zero.

Problem 1 — Too Much Theory, Not Enough Practice

Most AI workshops spend 70 to 80 percent of the time on concepts. What is AI? How does it work? What are the ethical implications? What is prompt engineering, in the abstract? These are interesting topics for a university lecture. They are almost worthless for a marketing manager who needs to write 12 product descriptions this week or an operations lead who spends three hours a day on email.

The ratio should be flipped. Spend 20 percent of the time on concepts — enough for people to understand what the tool can and cannot do — and 80 percent on hands-on practice with tasks from their actual job. When someone types a real email they need to send into Claude and watches it generate a solid first draft in 10 seconds, they do not need a lecture on transformer architecture to understand the value. The understanding comes from doing, not from slides.

Problem 2 — Generic Content for Specific Roles

A typical AI workshop puts the finance team, marketing team, operations team, and HR team in the same room and teaches them the same material. This is efficient for the training provider but terrible for the learner. The finance manager does not care about generating social media captions. The marketing lead does not care about reconciling expense reports. When the examples are generic, everyone mentally checks out during the parts that are not relevant to them — which is most of the workshop.

Effective AI training is role-specific. A session for the finance team should use financial documents, accounting terminology, and reporting workflows. A session for marketing should use campaign briefs, customer personas, and content calendars. Same underlying tool, completely different application.

This takes more preparation from the trainer. It means understanding each team's actual workflow before the session, not just showing up with a standard deck. Most training providers do not do this work because it does not scale. But scaling is the trainer's problem, not the learner's.

Problem 3 — No Follow-Up After the Workshop

Here is a statistic that should bother every training buyer: research on corporate training retention consistently shows that people forget 70 percent of what they learned within a week if there is no reinforcement. A two-day workshop without follow-up is essentially paying for a temporary experience. People leave energized. They intend to use AI. Then Monday hits, deadlines pile up, and the new skill they barely practiced gets pushed aside by the old habits that are faster and more familiar.

Follow-up does not have to be expensive or complex. A weekly 15-minute check-in for the first month. A shared channel where people post their AI wins and ask questions. A simple tracking sheet where each person logs one task they used AI for this week. The workshop is the ignition. Follow-up is the fuel. Without fuel, the engine dies within days.

Problem 4 — Training the Wrong People First

Many companies start AI training with senior management. The logic seems sound — get leadership buy-in first, then cascade down. In practice, senior managers are often the worst first cohort. They have the least repetitive work. They have assistants handling their email. They attend meetings all day. The tasks where AI adds the most immediate value — drafting, summarizing, processing, researching — are concentrated in mid-level and operational staff.

Start with the people who will see immediate time savings. When the operations coordinator saves two hours a day on report formatting, that story spreads faster than any top-down mandate. When the sales admin uses AI to prepare meeting briefs in five minutes instead of forty-five, other teams start asking how they can get the same training. Peer-driven adoption beats executive mandates every time. Train the doers first. The executives will follow when they see the results.

What Actually Works

After running training sessions for companies ranging from 8-person agencies to 200-person manufacturers, here is what I have seen work consistently:

Hands-on from minute one. Participants open ChatGPT or Claude within the first 10 minutes of the session. They type a real task from their own job. They see results immediately. Everything else — concepts, best practices, limitations — gets taught in context, not in the abstract.

Role-specific exercises. The trainer studies each team's actual workflow before the session. Every exercise uses real documents, real email threads, real reports from the participants' own work. When someone sees AI drafting a version of the report they spent three hours on last Tuesday, the lightbulb goes on permanently.

Accountability after the session. Participants commit to using AI for one specific task per week. They report results — what worked, what did not, what questions came up. This can be a simple group chat or a quick weekly standup. The format matters less than the consistency.

Measurement. Before training, log how long key tasks take. After training, log them again. When you can say "our team saved 47 hours last month on report drafting," the training stops being a cost center and starts being an investment with visible returns.

Small groups. Eight to twelve people maximum per session. Large workshops mean passive learning. Small groups mean everyone participates, everyone gets their questions answered, and nobody hides in the back row checking their phone.

The Hard Truth for Training Buyers

If you are shopping for AI training for your company, here are the questions that separate useful programs from expensive experiences:

Will participants use AI on their actual work tasks during the session? If the answer is no — if it is all slides and demos — skip it.

Is the content tailored to our specific roles and workflows, or is it a generic program? If the trainer has not asked about your team's daily work before the session, they are not going to address it during the session.

What happens after the workshop? If there is no follow-up plan — no check-ins, no practice assignments, no measurement — you are paying for a one-time event with a short shelf life.

Can we measure results? If the training provider cannot help you set up a before-and-after comparison, they are not confident their training will produce measurable change. That tells you something.

Good AI training is not cheap. But it pays for itself within weeks through measurable time savings. Bad AI training is not cheap either — it just costs you money and changes nothing. The difference is not the AI. It is the training design.
