
What Happened When I Gave My Team AI for a Month

Nerdsmith Team
10 min read

The Setup

Our company is a professional services firm with 12 full-time staff: three client managers, two analysts, a finance person, two project coordinators, a marketing coordinator, an office admin, and two junior consultants. We are based in KL and serve clients across Malaysia and Singapore.

In January, I decided to run a structured experiment. Every staff member got a paid ChatGPT Plus account and a paid Claude Pro account, at a total cost of RM3,360 for the month. I gave them a simple brief: use these tools for any work task you think they might help with. Log what you try. Log what works and what does not. We would review together at the end of each week.

I did not mandate usage. I did not set targets. I wanted to see what would happen organically when a regular team had access to AI tools with no pressure and no restrictions. Here is what happened, week by week.

Week 1 — Confusion and Curiosity

The first week was messy. Eight of the twelve people had never used ChatGPT or Claude before. They did not know what to ask. They did not know what was possible. The most common first interaction was some version of "Hello, what can you do?", which is about as useful as sitting a new hire down and asking them the same question. The AI answers with a long, generic list, and the user walks away thinking "okay, but what does that mean for me?"

Three people, both junior consultants and the marketing coordinator, took to it immediately. They were already curious about AI and started experimenting within the first hour. By Friday, the marketing coordinator had used Claude to draft three client newsletter editions and was genuinely excited about the time savings.

The finance person tried it once on Tuesday, got a response she felt was too vague, and did not open it again until I asked about it on Friday. The office admin was openly skeptical. She told me, and I quote, "I have been doing this job for nine years without a robot and I am not about to start now." Five people were in the middle: mildly interested, slightly confused, waiting to see if this was a real initiative or something that would quietly fade away like most new tools management introduces.

End of Week 1: 3 active users, 4 occasional users, 5 barely touched it.

Week 2 — The Early Adopters Show Results

Week 2 is when the early adopters started producing visible results that the rest of the team could not ignore.

One of the junior consultants used AI to prepare client meeting briefs. She normally spent about 45 minutes per brief: reading past correspondence, summarizing key issues, listing open action items. With AI, she got a first draft in about 8 minutes, then spent another 10 editing for accuracy. Her briefs were noticeably more thorough than before, because the AI caught details from old emails that she would have skimmed past.

The marketing coordinator built a system for repurposing content. She would feed a long client case study into Claude, then ask it to produce a LinkedIn post, an email summary, three key takeaways, and a one-paragraph client-facing version. What used to take her a full morning was done by 10 AM.

These wins were visible. People noticed. During our Friday review, the five middle-ground staff asked more questions than the previous week. Two of them tried specific tasks over the weekend. The finance person remained inactive. The office admin tried it once more, for drafting a vendor email, and grudgingly admitted the draft "was not terrible."

End of Week 2: 5 active users, 4 occasional users, 3 inactive.
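For the curious, here is roughly what that repurposing workflow looks like written down. The coordinator worked in the Claude web app, not the API, so the script below is my reconstruction rather than her actual setup; the prompt wording and the model name are assumptions.

```python
# Sketch of the content-repurposing workflow, scripted against the
# Anthropic Python SDK (pip install anthropic). Our coordinator used
# the Claude web app; this is an illustrative equivalent, not her setup.
import anthropic

REPURPOSE_PROMPT = """Here is a client case study:

{case_study}

From this one source, produce:
1. A LinkedIn post (under 150 words)
2. A three-sentence email summary
3. Three key takeaways as bullet points
4. A one-paragraph client-facing version

Keep the client's name and every figure exactly as written."""


def repurpose(case_study: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative; use whatever model is current
        max_tokens=1500,
        messages=[{"role": "user", "content": REPURPOSE_PROMPT.format(case_study=case_study)}],
    )
    return message.content[0].text
```

The fixed output list is the important part: asking for all four formats in a single pass is what turned a full morning of rewriting into one request plus a quick edit.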

Week 3 — Real Wins and Real Failures

Week 3 was when things got interesting. Both the wins and the failures became significant.

The biggest win came from a client manager who used AI to analyze feedback from a 40-person stakeholder survey. He pasted the raw responses into Claude and asked it to identify themes, rank them by frequency, and draft a summary report. A task that normally took him a day and a half was done in three hours. The quality was solid; I reviewed the report myself and the analysis was accurate.

Another client manager started using AI to draft scope-of-work documents. She built a prompt template that included our standard structure, and the AI filled in the specifics based on the client requirements she provided. First drafts were about 80 percent there, and she saved roughly two hours per document.

Now the failures. The junior consultant who was excelling at meeting briefs tried to use AI to write the technical analysis section of a client deliverable. The output was confident but contained errors in the methodology. She submitted it without thorough review, and the client caught the mistake. Not a catastrophic error, but embarrassing. It was a clear lesson: AI handles factual summarization well but struggles with specialized technical analysis.

The project coordinators tried using AI for scheduling and resource allocation. The results were poor. AI does not have visibility into real calendar conflicts, team preferences, or the unwritten rules about which staff work well together. They abandoned that use case.

End of Week 3: 8 active users, 2 occasional users, 2 inactive.
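In the same spirit, here is a reconstruction of the survey-analysis request. The wording is mine, not the manager's verbatim prompt, and the helper function is just a convenience for assembling it; he pasted the text into the Claude web app directly.

```python
# Reconstruction of the survey-analysis prompt (not the manager's
# verbatim wording). Responses were pasted into the Claude web app;
# very large surveys may need to be split into chunks.
SURVEY_PROMPT = """Below are {n} raw responses from a stakeholder survey,
one per line:

{responses}

1. Identify the recurring themes.
2. Rank the themes by how many responses mention each one, with counts.
3. Draft a one-page summary report: key findings first, then a short
   paragraph per theme with one representative quote.

Use only quotes that appear verbatim in the responses."""


def build_survey_prompt(responses: list[str]) -> str:
    return SURVEY_PROMPT.format(n=len(responses), responses="\n".join(responses))
```

That last instruction is worth noticing: an explicit "use only what appears in the source" guardrail is exactly the kind of check the technical-analysis failure in the same week was missing.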

Week 4 — Measurable Results

By Week 4, the team had settled into patterns. Everyone knew what AI was good at and what it was not. Here are the hard numbers from the final week:

- Client meeting briefs: average preparation time dropped from 45 minutes to 18 minutes across the three people who adopted this workflow. At 3 to 5 briefs per week per person, that works out to roughly 2 hours saved per person per week.
- Email drafting: the team collectively estimated saving about 6 hours per week on client correspondence. Not because AI wrote perfect emails, but because starting from a draft is significantly faster than starting from nothing.
- Report first drafts: two client managers reported saving 2 to 3 hours per report. We produce about 6 to 8 reports per month.
- Content repurposing: the marketing coordinator cut her content production time by roughly 40 percent.
- Survey and feedback analysis: when applicable, about 4 to 6 hours saved per analysis versus manual processing.

Total estimated time saved across the team in Week 4: approximately 35 to 40 hours. For a 12-person company, that is meaningful.

The finance person never became an active user. She tried it a few more times but said her work was too numbers-heavy and precise for AI to add value. She was probably right; her tasks scored low on AI-readiness. The office admin started using it for drafting vendor communications and meeting agendas but remained vocally unimpressed. "It is fine," she said. Coming from her, that was practically a glowing endorsement.
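As a quick sanity check on the meeting-brief figure, the saving works out as below. The four-briefs-per-week midpoint is my assumption; everything else comes from the numbers above.

```python
# Back-of-envelope check on the meeting-brief saving, using the
# figures above. Four briefs per week is an assumed midpoint of 3 to 5.
before_min, after_min = 45, 18
briefs_per_week = 4
adopters = 3

saved_h = (before_min - after_min) * briefs_per_week / 60
print(f"~{saved_h:.1f} hours saved per person per week")             # ~1.8
print(f"~{saved_h * adopters:.1f} hours across the three adopters")  # ~5.4
```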

What I Got Wrong

My biggest mistake was not providing role-specific guidance from day one. The five people who stalled in Week 1 would have started faster if I had given them three concrete use cases tailored to their specific jobs. "Try AI for X, Y, and Z this week" is a much better starting point than "try it for anything."

My second mistake was underestimating the quality control risk. The technical analysis error in Week 3 was preventable. We should have established a clear rule from the start: AI-generated content in client deliverables requires the same review process as any other draft. It is not output you can trust without checking.

Third, I should have paired the skeptics with the early adopters. The office admin might have engaged earlier if she had seen the marketing coordinator's workflow in action instead of hearing about it secondhand in a Friday meeting.

Would I Do It Again?

Absolutely. The RM3,360 cost paid for itself within the first two weeks through time savings alone. We are now three months in, and AI usage is a normal part of how the team works. Ten of twelve staff use AI tools at least a few times per week. The two who do not, the finance person and one project coordinator, have roles where the current AI tools genuinely do not add much value.

The biggest lasting change is not the time savings, though those are real and ongoing. It is the shift in how people approach their work. Before the experiment, tasks like report drafting and email writing felt like fixed-time obligations: they take however long they take. Now the team treats those tasks as processes they can optimize. That mindset shift is worth more than the hours saved.

If you are thinking about doing something similar, here is my advice: do not overthink it. Set a budget, give people access, provide some role-specific starting points, establish quality review rules, and check in weekly. The team will figure out the rest. Some people will take to it immediately. Others will take weeks. A few might never engage. All of that is normal. The worst outcome is not that some people do not use it. The worst outcome is never trying and watching your competitors figure it out first.
