Curated AI Intelligence

The Radar

We track everything in AI so you don't have to. Video breakdowns, tool reviews, and hype checks — curated by people who actually build with this stuff.

Video Breakdown · March 2026

What Mo Gawdat Actually Said About AI (And What He Got Wrong)

The Diary of a CEO episode that scared half the internet. We watched it so you don't have to. Here's what matters.

Tool Review · March 2026

Claude vs ChatGPT for Real Business Tasks — An Honest Comparison

We tested both on 10 actual business scenarios. The results might surprise you.

Hype Check · March 2026

AGI by 2027? What 50 AI Researchers Actually Say

Everyone has a timeline. We mapped 50 expert predictions. Here's the honest picture.

Video Breakdown · Coming soon

Mustafa Suleyman on Why AI Changes Everything

The Microsoft AI CEO's DOAC appearance, dissected. Bold claims, real implications.

Weekly Roundup · Coming soon

What Actually Mattered in AI This Week

Model releases, tool launches, and industry shifts — filtered through what matters for your business.

Tool Review · Coming soon

We Tested 5 AI Presentation Tools. Here's the Verdict.

Gamma, Beautiful.ai, Tome, Canva AI, and SlidesAI walk into a bar...

Video Breakdown · 6 min read · March 2026

What Mo Gawdat Actually Said About AI (And What He Got Wrong)

Mo Gawdat's appearance on The Diary of a CEO is one of the most-watched AI interviews on YouTube, with tens of millions of combined views across clips and the full episode. Gawdat is a former Chief Business Officer at Google X, the author of Scary Smart, and a man who speaks about artificial intelligence with the urgency of someone reporting a house fire.

The interview scared a lot of people. It was designed to. And buried under the fear, there are some legitimate points worth pulling out — alongside a few claims that don't hold up when you look at the evidence.

Here's what Gawdat actually said, what he got right, and where the argument falls apart.

The Core Claims

Gawdat's thesis rests on a few key pillars. First, that AI will surpass human intelligence by 2029 — not in one narrow domain, but broadly. Second, that when this happens, most human jobs become redundant. Third, that there's a meaningful chance this doesn't end well for our species, particularly if we fail to instil “good values” into these systems before they outpace us. And fourth, that the window to act is closing fast, possibly already closed.

He delivers these claims with conviction and genuine emotion. He talks about his late son, about purpose, about the responsibility we carry. It's compelling television. The question is whether it's accurate prediction or well-intentioned catastrophising.

What He Got Right

AI is advancing faster than most people expected. This is inarguable. In 2020, GPT-3 could barely write a coherent paragraph. By 2024, AI systems were passing bar exams, writing production code, and generating photorealistic video from text prompts. The pace of improvement has surprised even researchers inside the labs building these systems. Gawdat's general point — that people are underestimating the speed — is correct.

Job displacement is real and already happening. Not in the dramatic “robots replace everyone” way, but in specific, measurable ways. Translation agencies have lost revenue. Junior copywriting roles have shrunk. Customer service teams are being restructured around AI-first workflows. Coding assistants are changing what it means to be a junior developer. Gawdat is right that this isn't hypothetical — it's underway.

Safety concerns are legitimate. The alignment problem — ensuring AI systems do what we actually want them to do — is a real technical challenge. It's not science fiction. Researchers at every major lab acknowledge this. The question isn't whether safety matters; it's how urgent the timeline is and what approach actually works.

Where It Falls Apart

The 2029 timeline is a guess, not a forecast. Gawdat presents a specific year with the confidence of someone reading a train schedule. But no one in the field — not the optimists, not the pessimists — can predict when artificial general intelligence arrives with that precision. The honest answer is: we don't know. A specific year prediction sounds authoritative, but it's speculation dressed up as analysis.

The “all jobs” framing misrepresents how technology works. AI doesn't replace jobs wholesale. It replaces tasks. A financial analyst who used to spend 20 hours building spreadsheet models now spends 4 hours doing it with AI assistance — but they still need to interpret the results, understand the client's situation, and make judgment calls. The “all jobs disappear” narrative makes for gripping content but ignores how technology adoption actually plays out in organisations.

The doom framing lacks nuance. Gawdat presents a binary: either we solve alignment perfectly and everything is fine, or we fail and civilisation is at risk. This skips over the vast middle ground where AI systems are powerful but bounded, useful but imperfect, transformative but manageable. Most technology falls into this middle ground. AI almost certainly will too.

What He Missed Entirely

The biggest gap in Gawdat's argument is the concept of AI as a tool amplifier rather than a human replacement. The most successful AI deployments right now aren't replacing people — they're making people dramatically more effective. A marketing team of three that can now produce the output of a team of ten. A solo developer shipping what used to require a five-person squad. A consultant who can research, analyse, and present in hours instead of weeks.

This “augmentation” path doesn't make for scary podcast clips, but it's what's actually happening in the majority of businesses adopting AI. And it leads to a very different set of implications than the replacement narrative suggests.

Gawdat also misses something more fundamental: most businesses are still struggling with basic AI adoption. While he's talking about superintelligence and existential risk, the average company is trying to figure out how to get their sales team to use ChatGPT consistently. The gap between frontier AI capabilities and actual business adoption is enormous, and it's not closing as fast as the technology is advancing.

The NerdSmith Take

Gawdat raises real concerns but wraps them in fear. The practical reality? AI is changing work, but it's changing it task by task, not job by job. The displacement is real but it's gradual and uneven, not the overnight extinction event the interview implies.

Your job isn't to panic — it's to learn which tasks AI handles well and which still need you. That's not a terrifying proposition. It's a practical one. And it's exactly the kind of work that makes you more valuable, not less.

Watch the interview if you want to. It's genuinely interesting. Just don't let it paralyse you. The best response to “AI is coming” has never been fear. It's competence.

Tool Review · 7 min read · March 2026

Claude vs ChatGPT for Real Business Tasks — An Honest Comparison

Every AI tool comparison you've read follows the same formula: list features, compare pricing, declare a winner. None of them tell you what actually matters — which tool performs better when you sit down to get real work done.

So we tested both. Not on benchmarks or party tricks, but on 10 tasks that business owners, consultants, and managers actually do every week. We used Claude (Anthropic) and ChatGPT (OpenAI), both on their latest available models via paid plans, with identical prompts for each task.
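The "identical prompts" setup can be sketched in a few lines. This is an illustrative harness, not the review's actual test code: the model names and stub sender functions here are placeholders, and in a real run each sender would wrap the vendor's API client (Anthropic's and OpenAI's Python SDKs) behind the same interface.

```python
# Minimal sketch of a same-prompt comparison harness (illustrative only).
# Each "sender" is any callable that takes a prompt and returns text; in
# a real run it would call the vendor's API. Stubs keep this runnable.

def run_comparison(prompt: str, senders: dict) -> dict:
    """Send one prompt to every model and collect raw outputs by name."""
    return {name: send(prompt) for name, send in senders.items()}

# Placeholder senders standing in for real API clients:
senders = {
    "claude": lambda p: f"[claude draft for: {p}]",
    "chatgpt": lambda p: f"[chatgpt draft for: {p}]",
}

results = run_comparison(
    "Draft a consulting proposal for a mid-size company.", senders
)
for model, output in results.items():
    print(model, "->", output)
```

Keeping the prompt fixed and varying only the sender is what makes the side-by-side scoring fair: any difference in output comes from the model, not the wording.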

Here's what happened.

The 10 Tasks — Results at a Glance

Task | Winner
1. Drafting a client proposal | Claude
2. Summarising a 20-page report | Tie
3. Writing marketing copy | ChatGPT
4. Analysing financial data | Claude
5. Customer email response | Tie
6. Creating a meeting agenda | Tie
7. Strategic brainstorming | Claude
8. Social media content | ChatGPT
9. Code assistance | Claude
10. Research synthesis | Claude

Final tally: Claude 5, ChatGPT 2, Tie 3.

Task-by-Task Breakdown

1. Client Proposal — Claude wins. We asked both to draft a consulting proposal for a mid-size company implementing AI workflows. Claude produced a structured, professional document with clear scope boundaries, assumptions, and pricing rationale. ChatGPT's version was competent but more generic — it read like a template. Claude's read like something you'd actually send.

2. Report Summarisation — Tie. Both handled a 20-page market research report well. Claude was slightly more concise. ChatGPT included a few more specific data points. Neither made errors. For this task, both tools are genuinely good enough.

3. Marketing Copy — ChatGPT wins. We asked for landing page copy for a B2B SaaS product. ChatGPT's output was more energetic, punchier, and had better hooks. Claude's version was polished but played it safer. For copy that needs to grab attention, ChatGPT has an edge — though you'll sometimes need to dial back the enthusiasm.

4. Financial Analysis — Claude wins. Given a profit-and-loss statement with some unusual line items, Claude flagged assumptions, noted anomalies, and was upfront about what the numbers might or might not mean. ChatGPT gave a competent analysis but didn't flag the same caveats. When accuracy matters more than speed, Claude's caution is a feature.

5. Customer Email — Tie. Both wrote professional, empathetic responses to a complaint email. Marginal differences in tone. Either would work.

6. Meeting Agenda — Tie. Both produced clean, logical agendas for a quarterly strategy review. This is a task where AI tools have been good for over a year. No meaningful difference.

7. Strategic Brainstorming — Claude wins. We asked both to brainstorm go-to-market strategies for an AI training company entering the Malaysian market. Claude explored second-order effects, challenged assumptions in the prompt, and offered a few non-obvious angles. ChatGPT produced a solid list but stayed surface-level. For thinking that goes deeper than “here are 10 ideas,” Claude is consistently better.

8. Social Media Content — ChatGPT wins. We asked for a week of LinkedIn posts about AI adoption. ChatGPT's posts were snappier, used better hooks, and had stronger calls to action. Claude's were more thoughtful but too long for the format. Short-form content that needs to stop the scroll? ChatGPT.

9. Code Assistance — Claude wins. Both wrote functional Python scripts for a data processing task. The difference was in explanation. Claude explained why it chose certain approaches, flagged edge cases, and offered alternatives. ChatGPT gave working code with less context. If you're learning or debugging, the explanation matters.
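The review doesn't publish the exact script, but a representative data-processing task of the kind described might look like the sketch below. The field names and sample rows are hypothetical; the point is the edge cases (missing or non-numeric values) that a good assistant should flag rather than silently mishandle.

```python
# Representative data-processing task (hypothetical example, not the
# review's actual test script): sum amounts per category, tracking rows
# that can't be parsed instead of letting them crash or skew the totals.
from collections import defaultdict

def totals_by_category(rows):
    """Sum 'amount' per 'category'; count rows with missing/bad data."""
    totals = defaultdict(float)
    skipped = 0
    for row in rows:
        try:
            totals[row["category"]] += float(row["amount"])
        except (KeyError, TypeError, ValueError):
            skipped += 1  # edge case: missing key or non-numeric amount
    return dict(totals), skipped

rows = [
    {"category": "travel", "amount": "120.50"},
    {"category": "meals", "amount": "30"},
    {"category": "travel", "amount": "n/a"},  # non-numeric: skipped
]
print(totals_by_category(rows))  # → ({'travel': 120.5, 'meals': 30.0}, 1)
```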

10. Research Synthesis — Claude wins. Given five conflicting articles about AI regulation, Claude produced a synthesis that acknowledged nuance, identified where the disagreements actually were, and avoided false balance. ChatGPT's summary was accurate but flatter — it listed points without analysing the tensions between them.

The Verdict

Claude for thinking. ChatGPT for doing. When the task requires depth, nuance, careful reasoning, or working through complexity, Claude consistently performs better. When the task requires energy, speed, punchy output, or high-volume content, ChatGPT has an edge.

Both are good. Both are worth the $20/month if you use them regularly. The best tool is the one you actually use — and the real power move is using both, each for what it does best.

Price Comparison

Plan | Claude | ChatGPT
Free | Limited daily messages | Limited GPT-4o access
Pro / Plus | $20/mo | $20/mo
Team / Business | $30/mo per seat | $25/mo per seat

The NerdSmith Recommendation

Start with the free tiers of both. Use them for a week on your actual work, not toy examples. You'll feel the difference quickly.

Use Claude when you need depth — proposals, analysis, strategy, anything where getting it right matters more than getting it fast. Use ChatGPT when you need speed and energy — social content, marketing copy, quick drafts you'll heavily edit anyway.

If you can only pay for one? It depends on your work. If you spend more time thinking and analysing, Claude. If you spend more time creating and publishing, ChatGPT. But honestly, $40/month for both is the best ROI in business software right now.

Hype Check · 7 min read · March 2026

AGI by 2027? What 50 AI Researchers Actually Say

Depending on who you listen to, artificial general intelligence — AI that can do anything a human can do, at the same level or better — is either arriving next year or never. The predictions span decades and the confidence levels are wildly inconsistent.

So we mapped what 50 of the most prominent voices in AI research and industry actually believe, based on their public statements, papers, and interviews. Not what headlines say they said. What they actually said.

The picture is more complicated, more honest, and more useful than any single prediction.

Camp 1: Before 2030

Sam Altman, Dario Amodei, Demis Hassabis, Jensen Huang, Elon Musk

This camp is dominated by people building AI systems, running AI companies, or selling AI hardware. That doesn't automatically disqualify their views, but it's important context.

Sam Altman has repeatedly suggested AGI could arrive by the late 2020s and that it might be “less of a big deal than people think.” Dario Amodei, CEO of Anthropic, has talked about “powerful AI” arriving within a few years that could transform science and medicine. Demis Hassabis of Google DeepMind has suggested we could see AGI-level capabilities by 2030. Jensen Huang, CEO of NVIDIA, has placed it around 2028-2029.

The common thread: these predictions tend to come with caveats that get stripped out by headlines. Amodei talks about “powerful AI” rather than AGI specifically. Hassabis qualifies what capabilities he means. Altman has shifted his timeline multiple times. They're saying “something transformative is close,” not necessarily “full human-level AGI in three years.”

Camp 2: 2030 to 2050

Andrew Ng, Sundar Pichai, Fei-Fei Li, Rodney Brooks

This group believes AI will be profoundly significant but sees the timeline as longer and the progress as more gradual than Camp 1 suggests.

Andrew Ng, co-founder of Google Brain and Coursera, has consistently argued that while AI is powerful, the path to AGI is longer than hype cycles suggest. He focuses on practical AI deployment rather than AGI speculation. Sundar Pichai speaks about AI as “the most profound technology humanity will work on” but has been careful about timelines. Fei-Fei Li emphasises the gap between narrow AI excellence and general intelligence. Rodney Brooks, robotics pioneer and former director of MIT CSAIL, has been publicly tracking failed AI predictions for years and consistently argues the field overpromises on timelines.

Their position: the technology is real, the impact will be enormous, but the jump from “very good at specific tasks” to “generally intelligent” is harder than scaling up current approaches.

Camp 3: Much Later, or Never (As Currently Defined)

Yann LeCun, Gary Marcus, François Chollet, Melanie Mitchell

This is the camp that gets the least media attention because “we don't know and current approaches probably aren't enough” doesn't make a good headline.

Yann LeCun, chief AI scientist at Meta and a Turing Award winner, has argued publicly and repeatedly that large language models are a dead end for AGI. He believes we need fundamental new architectures — specifically what he calls “world models” — and that we're missing key pieces of the puzzle. Gary Marcus, cognitive scientist and persistent AI critic, has placed substantial bets that AGI won't arrive by various proposed deadlines. François Chollet, creator of the Keras deep learning framework, has argued that current AI systems are not showing signs of general intelligence and that benchmarks are being gamed. Melanie Mitchell, complexity researcher at the Santa Fe Institute, has written extensively about how AI systems appear more intelligent than they are.

Their argument isn't that AI is unimpressive. It's that the jump from pattern matching at scale to genuine understanding and reasoning may require breakthroughs we haven't had yet — and that calling current systems “almost AGI” misunderstands what intelligence actually is.

Camp 4: The Timeline Doesn't Matter

Geoffrey Hinton, Eliezer Yudkowsky, Stuart Russell, Max Tegmark

This group sidesteps the “when” question entirely and focuses on “what happens if.” Geoffrey Hinton, the “Godfather of AI,” left Google specifically to speak freely about risks. His concern isn't about a specific date — it's that we don't have reliable methods to control systems that are smarter than us, and we should figure that out before we need to.

Eliezer Yudkowsky, who has been writing about AI risk since the early 2000s, argues that the specific timeline matters less than the fact that we have no proven alignment solution. Stuart Russell, professor at Berkeley and author of the standard AI textbook, has argued for fundamentally rethinking how we build AI systems to be inherently safe. Max Tegmark focuses on existential risk governance.

Their position: whether AGI arrives in 2028 or 2058, the safety work needs to happen now, and the current approach of “build first, align later” is reckless.

The Honest Picture

Nobody knows. That's the only honest answer. The people building AI say “soon” — but they have financial incentives to maintain urgency and excitement. The people studying intelligence say “maybe never with current methods” — but paradigm shifts can happen fast. The people worried about safety say “the timeline doesn't matter” — and they may have the strongest point of all.

What is clear: expert opinion is genuinely divided, and anyone presenting a single confident timeline is selling something — whether that's a product, a book, a worldview, or attention.

What This Means for You

If you're a business owner, a manager, a professional trying to figure out how AI affects your career — the AGI timeline is irrelevant to your next 12 months.

Here's what is relevant: AI tools available today can save you 5 to 15 hours per week if you know how to use them. They can handle first drafts, data analysis, research synthesis, customer communications, and routine coding tasks. They cannot replace your judgment, your relationships, your domain expertise, or your ability to navigate ambiguity.

Whether a superintelligence arrives in 2027 or 2047, the smart move right now is the same: learn to work with the AI tools that exist, understand their limitations, and focus on the skills that remain distinctly human — critical thinking, leadership, creativity under constraints, and the ability to ask the right questions.

The NerdSmith Take

We're agnostic on AGI timelines. Not because the question doesn't matter — it does, deeply, for policy and safety research. But because for the people we work with, the practical question is simpler: how do I use AI effectively right now?

The researchers will keep debating. The CEOs will keep predicting. The safety people will keep warning. All of that is important work.

Your work is different. Your work is to learn the tools, understand their limits, and get better at the things they can't do. That's the bet that pays off regardless of which camp turns out to be right.

Get the Radar in your inbox

What actually mattered in AI, every week.

Subscribe to the Radar