Curated AI Intelligence

The Radar

We track everything in AI so you don't have to. Video breakdowns, tool reviews, and hype checks — curated by people who actually build with this stuff.

Video Breakdown · March 2026

What Mo Gawdat Actually Said About AI (And What He Got Wrong)

The Diary of a CEO episode that scared half the internet. We watched it so you don't have to. Here's what matters.

Read article
Tool Review · March 2026

Claude vs ChatGPT for Real Business Tasks — An Honest Comparison

We tested both on 10 actual business scenarios. The results might surprise you.

Read article
Hype Check · March 2026

AGI by 2027? What 50 AI Researchers Actually Say

Everyone has a timeline. We mapped 50 expert predictions. Here's the honest picture.

Read article
Video Breakdown · Coming soon

Mustafa Suleyman on Why AI Changes Everything

The Microsoft AI CEO's DOAC appearance, dissected. Bold claims, real implications.

Weekly Roundup · Coming soon

What Actually Mattered in AI This Week

Model releases, tool launches, and industry shifts — filtered through what matters for your business.

Tool Review · Coming soon

We Tested 5 AI Presentation Tools. Here's the Verdict.

Gamma, Beautiful.ai, Tome, Canva AI, and SlidesAI walk into a bar...

Video Intelligence Briefs

50 deep-dive breakdowns — claims checked, hype filtered, action items extracted.

Video Breakdown · Satya Nadella

Satya Nadella on the AI Platform Shift, Copilot Strategy, and Why Microsoft Bet Everything on OpenAI

Microsoft's CEO lays out the most ambitious AI platform play since Windows — embedding Copilot into every product, betting $13B on OpenAI, and rewriting the enterprise software stack around AI agents.

Read brief →
Video Breakdown · Eric Schmidt

Eric Schmidt on AI Arms Races, China, and Why Silicon Valley Must Work With the Pentagon

Former Google CEO and Pentagon AI advisor Eric Schmidt argues that the US-China AI competition is a national security emergency — and that Silicon Valley's reluctance to work with the military is a strategic vulnerability.

Read brief →
Video Breakdown · Yuval Noah Harari

Yuval Noah Harari on AI, the End of Human-Dominated History, and the Stories We Tell Ourselves

The historian who wrote Sapiens argues that AI's real danger isn't superintelligence — it's that machines can now create the stories, ideologies, and religions that hold human civilization together.

Read brief →
Video Breakdown · Emad Mostaque

Emad Mostaque on Open-Source AI, Stable Diffusion, and Why the Future of AI Must Be Decentralized

The founder of Stability AI makes the case that open-source AI is the only path to preventing a dystopian concentration of power — then watches his own company implode, stress-testing every claim he made.

Read brief →
Video Breakdown · François Chollet

François Chollet on Measuring Intelligence, the ARC Prize, and Why LLMs Are Not the Path to AGI

The creator of Keras and the ARC benchmark makes the most technically precise case for why LLMs are memorization engines, not intelligence — and puts $1 million on the line to prove it.

Read brief →
Video Breakdown · Eliezer Yudkowsky

Eliezer Yudkowsky on AI Doom, Alignment, and Why He Thinks We're Not Going to Make It

The intellectual godfather of AI alignment lays out his case that humanity is on a default path to extinction from AI — not because AI is evil, but because we don't know how to specify what we want.

Read brief →
Video Breakdown · Max Tegmark

Max Tegmark on AI Existential Risk, the Pause Letter, and Building a Future We Actually Want

The MIT physicist who co-authored the famous AI pause letter makes the case that rushing to build superintelligence without safety guarantees is like launching a rocket without knowing how to steer — with everyone on Earth on board.

Read brief →
Video Breakdown · Ethan Mollick

Ethan Mollick on AI Reshaping Work, Education, and Why You Should Use AI for Everything Right Now

Wharton professor and the most practical AI thinker in academia argues that the biggest risk with AI isn't using it wrong — it's not using it at all, and the window to build personal AI fluency is closing fast.

Read brief →
Video Breakdown · Tristan Harris

Tristan Harris on AI Supercharging the Attention Economy and Why We're Losing the Race to Protect Human Agency

The Social Dilemma filmmaker argues that AI isn't just making the attention economy more powerful — it's making it personal, persuasive, and virtually impossible to resist without structural intervention.

Read brief →
Video Breakdown · Noam Shazeer

Noam Shazeer on Building Language Models, the Transformer Origin Story, and Why Character.AI Is the Future of Human-AI Interaction

One of the eight co-authors of "Attention Is All You Need" — the paper that created the Transformer architecture behind every modern AI model — explains how it happened, what he learned, and why he built Character.AI.

Read brief →
Video Breakdown · Marc Andreessen

Marc Andreessen on AI, Techno-Optimism, and Why the Doomers Are Wrong

a16z's co-founder lays out the most aggressive pro-AI case in venture capital — total optimism, zero patience for safety concerns, and a worldview where regulation is the real danger.

Read brief →
Video Breakdown · Vinod Khosla

Vinod Khosla on AI Replacing 80% of Jobs — And Why He Thinks That's Good

Sun Microsystems co-founder turned VC kingmaker argues that AI will be able to do 80% of the work in 80% of all jobs — and that this is a feature, not a bug.

Read brief →
Video Breakdown · Sal Khan

Sal Khan on AI in Education: Khanmigo and the Future of Personalised Learning

Khan Academy's founder makes the most credible case for AI in education — not hype about replacing teachers, but a working prototype of an AI tutor that guides instead of giving answers.

Read brief →
Video Breakdown · Brian Chesky

Brian Chesky on How AI Will Reinvent Airbnb — And Why Most Companies Are Using AI Wrong

Airbnb's CEO argues that most companies are bolting AI onto existing products when they should be redesigning products from scratch around AI — and he's rebuilding Airbnb to prove it.

Read brief →
Video Breakdown · Tobi Lütke

Tobi Lütke on AI at Shopify: Why Every Employee Now Has to Prove AI Can't Do Their Job

Shopify's CEO makes AI fluency a job requirement for every employee and tells them to prove AI can't do their task before asking for more headcount — the most concrete AI-first management mandate from any public company CEO.

Read brief →
Video Breakdown · Timnit Gebru

Timnit Gebru on AI Bias: The Harms Are Already Here, Not Hypothetical

The researcher Google fired for a paper on AI harms makes the case that current AI systems are already causing measurable damage to marginalised communities — and the industry's focus on hypothetical AGI risk is a deliberate distraction.

Read brief →
Video Breakdown · Arvind Narayanan

Arvind Narayanan on AI Snake Oil: How to Tell What's Real From What's Fake

Princeton computer scientist Arvind Narayanan offers the most useful framework for separating real AI capabilities from snake oil — and it turns out most of what's being sold as 'AI' in enterprise software is the latter.

Read brief →
Video Breakdown · Marques Brownlee

MKBHD on AI Hardware: The Humane AI Pin, Rabbit R1, and Why AI Gadgets Keep Failing

The internet's most trusted tech reviewer delivers the most devastating consumer verdict on AI hardware — the Humane AI Pin is the 'worst product I've ever reviewed,' and it exposes why standalone AI gadgets keep failing.

Read brief →
Video Breakdown · Lex Fridman (solo / meta-commentary)

Lex Fridman on the AI Conversation: What 100+ Interviews With AI Leaders Taught Him

The host who's interviewed Altman, Musk, Zuckerberg, Hinton, LeCun, and nearly every major voice in AI shares the meta-patterns he's observed — and the contradictions none of his guests acknowledge.

Read brief →
Video Breakdown · Kevin Scott

Kevin Scott on Microsoft Copilot, AI Developer Tools, and the Platform Shift

Microsoft's CTO lays out the most ambitious AI platform play in tech — Copilot everywhere, from code editors to spreadsheets to enterprise workflows — and the economic bet that AI becomes the next Windows-scale platform shift.

Read brief →
Video Breakdown · Aravind Srinivas

Aravind Srinivas on Building Perplexity and the Future of AI-Powered Search

Perplexity's CEO lays out why the search engine of the future looks nothing like ten blue links — and why Google's response proves he's right.

Read brief →
Video Breakdown · Clem Delangue

Clem Delangue on Hugging Face, Open-Source AI, and Why the Community Wins

The CEO of Hugging Face explains why open-source AI isn't just an ideology — it's a structural economic advantage that closed-model companies can't outrun.

Read brief →
Video Breakdown · Harrison Chase

Harrison Chase on LangChain, AI Application Building, and the Agent Stack

LangChain's creator breaks down why the AI application layer is where all the value will accrue — and admits the framework's early chaos taught him more about developer experience than any computer science degree.

Read brief →
Video Breakdown · George Hotz

George Hotz on tinygrad, comma.ai, and Building AI from Scratch

The hacker who jailbroke the iPhone and built a self-driving car in his garage argues that AI infrastructure is too bloated, too expensive, and too corporate — and he's building the alternative with 7,000 lines of code.

Read brief →
Video Breakdown · Mira Murati

Mira Murati on AI Development Philosophy and the Road to GPT-5

OpenAI's CTO (before her departure) gives the most technically candid insider view of how the world's most influential AI lab actually builds and ships models — and where the cracks in the foundation are.

Read brief →
Video Breakdown · Kai-Fu Lee

Kai-Fu Lee on AI 2.0, Large Models, and Why ASEAN Is the Next AI Battleground

Kai-Fu Lee returns with a sharper thesis — the AI Superpowers era is over, the AI 2.0 era of large models has begun, and Southeast Asia might be where the real deployment story plays out.

Read brief →
Video Breakdown · Aaron Levie

Aaron Levie on AI for Enterprise, Box AI, and Why This Time Is Different

Box's CEO argues that enterprise AI isn't about chatbots — it's about restructuring how companies process, analyse, and act on their own documents and data, and most organisations are doing it backwards.

Read brief →
Video Breakdown · Clara Shih

Clara Shih on Enterprise AI Adoption and Why Most Companies Are Stuck at the Starting Line

Salesforce's AI CEO delivers the most honest assessment of enterprise AI adoption from the inside — most companies aren't failing at AI because the technology isn't ready; they're failing because their organisations aren't.

Read brief →
Video Breakdown · Sinead Bovell

Sinead Bovell on the Future of Work, AI, and Preparing for Jobs That Don't Exist Yet

A futurist and former model makes the case that we're preparing an entire generation for a job market that won't exist — and the educational system's response to AI is dangerously slow.

Read brief →
Video Breakdown · Vlad Tenev

Vlad Tenev on Robinhood AI, Fintech Democratization, and Personalized Financial Advice

Robinhood's CEO argues that AI will finally deliver on fintech's original promise — making sophisticated financial advice accessible to everyone, not just the wealthy — and he's betting the company's next chapter on it.

Read brief →
Video Breakdown · Elon Musk

Elon Musk on AI, Neuralink, and Why AGI Timelines Keep Shrinking

Musk maps his entire AI thesis — Tesla bots, Neuralink, xAI, and why he thinks AGI arrives before most people have updated their iPhone.

Read brief →
Video Breakdown · Geoffrey Hinton

Geoffrey Hinton Left Google to Warn Us — Here's What He Actually Said

The man who built the foundations of deep learning says he regrets his life's work — and his reasons are more specific than the headlines suggest.

Read brief →
Video Breakdown · Dario Amodei

Dario Amodei on Building Claude, Responsible Scaling, and Why Anthropic Exists

The Anthropic CEO explains why he left OpenAI, how constitutional AI works in practice, and what 'responsible scaling' actually commits you to.

Read brief →
Video Breakdown · Jensen Huang

Jensen Huang's GTC 2024 Keynote: The GPU King Declares a New Industrial Revolution

Jensen Huang spent two hours making the case that GPU computing isn't just the backbone of AI — it's the foundation of a new industrial revolution. Here's what holds up and what's a sales pitch in a leather jacket.

Read brief →
Video Breakdown · Mustafa Suleyman

Mustafa Suleyman's Coming Wave: AI and Bio Convergence, Containment, and Why Regulation Is Already Too Late

DeepMind co-founder turned Microsoft AI CEO argues that AI and biotech are converging into a wave that can't be contained — and that we have maybe a decade to figure out governance before it's moot.

Read brief →
Video Breakdown · Yann LeCun

Yann LeCun on Lex Fridman: Why LLMs Won't Reach AGI and the Case for Open Source AI

Meta's chief AI scientist makes the technical case that autoregressive LLMs are a dead end for AGI, proposes a radically different architecture, and argues that open-source AI is the only path that doesn't end in corporate monopoly.

Read brief →
Video Breakdown · Mark Zuckerberg

Mark Zuckerberg on Open Source AI, Meta Strategy, and the AR/VR Convergence

Zuckerberg makes the case that open-sourcing Llama isn't charity — it's Meta's competitive strategy against a world where OpenAI and Google control the API layer.

Read brief →
Video Breakdown · Andrew Ng

Andrew Ng on AI as the New Electricity and Why Most Companies Are Doing It Wrong

Andrew Ng's 'AI is the new electricity' thesis is three years old now — here's what held up, what didn't, and what he's still getting right about adoption that most executives ignore.

Read brief →
Video Breakdown · Bill Gates

Bill Gates on Why AI Is the Most Important Tech Advance in Decades

Bill Gates calls AI the most important tech advance since the GUI — and backs it up with specific bets on healthcare, education, and agents. Here's what holds up two years later.

Read brief →
Video Breakdown · Tristan Harris & Aza Raskin

The AI Dilemma: Social Media's Architects Warn History Is Repeating

The pair who warned Congress about social media's harms now argue AI is repeating the same pattern — faster, with higher stakes and fewer guardrails.

Read brief →
Video Breakdown · Ilya Sutskever

Ilya Sutskever on Scaling, Superintelligence, and Why He Left OpenAI

OpenAI's co-founder and former chief scientist lays out the scaling laws that built GPT, then walks away from the company to start a safety-first lab — the gap between those two facts is the story.

Read brief →
Video Breakdown · Demis Hassabis

Demis Hassabis: From AlphaFold to Nobel Prize — AI as a Tool for Science

The first AI researcher to win a Nobel Prize makes the case that AI's highest-value application is not chatbots or image generation — it is accelerating scientific discovery.

Read brief →
Video Breakdown · Kai-Fu Lee

AI Superpowers: China, Silicon Valley, and the New World Order

Former Google China head Kai-Fu Lee breaks down why AI implementation — not research — decides who wins the US-China AI race, and what that means for the 800 million jobs in the crosshairs.

Read brief →
Video Breakdown · Gary Marcus

AI Hype vs Reality: Why Deep Learning Alone Won't Get Us to AGI

NYU cognitive scientist Gary Marcus makes the most technically grounded case for why LLMs are stuck — and why the 'scale is all you need' crowd is building on sand.

Read brief →
Video Breakdown · Sundar Pichai

Google I/O 2024: The AI-Powered Future of Google

Google bets the entire company on AI at I/O 2024 — Gemini everywhere, AI Overviews in Search, and Project Astra as the prototype for a universal AI assistant.

Read brief →
Video Breakdown · Mo Gawdat

AI Utopia or Dystopia? Ex-Google Exec Mo Gawdat Warns of a Short-Term Hell

Mo Gawdat, former Google X chief business officer, warns of a short-term dystopia before humanity can achieve a utopia with AI.

Read brief →
Video Breakdown · Sam Altman

Sam Altman on OpenAI, AGI, Power Struggles, and the Future of Humanity

Altman opens up about OpenAI's trajectory, the race to AGI, the power struggles behind the scenes, and what he believes is at stake for humanity.

Read brief →
Video Breakdown · Andrej Karpathy

Demystifying Large Language Models

An introductory talk on large language models — what they are, how they're trained, and their real security vulnerabilities.

Read brief →
Video Breakdown · Fei-Fei Li

AI's Next Frontier: Spatial Intelligence

AI pioneer Fei-Fei Li argues that the next frontier for AI is spatial intelligence — understanding and acting in the 3D physical world.

Read brief →
Video Breakdown · Chamath, Jason, Sacks & Friedberg

2025 Predictions: Tech, Business, Media, Politics!

The All-In Podcast hosts share 2025 predictions for tech, business, media, and politics — with one strong signal tool and one section to ignore entirely.

Read brief →
Video Breakdown · 6 min read · March 2026

What Mo Gawdat Actually Said About AI (And What He Got Wrong)

Mo Gawdat's appearance on The Diary of a CEO is one of the most-watched AI interviews on YouTube, with tens of millions of combined views across clips and the full episode. Gawdat is the former Chief Business Officer at Google X, the author of Scary Smart, and a man who speaks about artificial intelligence with the urgency of someone reporting a house fire.

The interview scared a lot of people. It was designed to. And buried under the fear, there are some legitimate points worth pulling out — alongside a few claims that don't hold up when you look at the evidence.

Here's what Gawdat actually said, what he got right, and where the argument falls apart.

The Core Claims

Gawdat's thesis rests on a few key pillars. First, that AI will surpass human intelligence by 2029 — not in one narrow domain, but broadly. Second, that when this happens, most human jobs become redundant. Third, that there's a meaningful chance this doesn't end well for our species, particularly if we fail to instil “good values” into these systems before they outpace us. And fourth, that the window to act is closing fast, possibly already closed.

He delivers these claims with conviction and genuine emotion. He talks about his late son, about purpose, about the responsibility we carry. It's compelling television. The question is whether it's accurate prediction or well-intentioned catastrophising.

What He Got Right

AI is advancing faster than most people expected. This is inarguable. In 2020, GPT-3 could barely write a coherent paragraph. By 2024, AI systems were passing bar exams, writing production code, and generating photorealistic video from text prompts. The pace of improvement has surprised even researchers inside the labs building these systems. Gawdat's general point — that people are underestimating the speed — is correct.

Job displacement is real and already happening. Not in the dramatic “robots replace everyone” way, but in specific, measurable ways. Translation agencies have lost revenue. Junior copywriting roles have shrunk. Customer service teams are being restructured around AI-first workflows. Coding assistants are changing what it means to be a junior developer. Gawdat is right that this isn't hypothetical — it's underway.

Safety concerns are legitimate. The alignment problem — ensuring AI systems do what we actually want them to do — is a real technical challenge. It's not science fiction. Researchers at every major lab acknowledge this. The question isn't whether safety matters; it's how urgent the timeline is and what approach actually works.

Where It Falls Apart

The 2029 timeline is a guess, not a forecast. Gawdat presents a specific year with the confidence of someone reading a train schedule. But no one in the field — not the optimists, not the pessimists — can predict when artificial general intelligence arrives with that precision. The honest answer is: we don't know. Naming a specific year sounds authoritative, but it's speculation dressed up as analysis.

The “all jobs” framing misrepresents how technology works. AI doesn't replace jobs wholesale. It replaces tasks. A financial analyst who used to spend 20 hours building spreadsheet models now spends 4 hours doing it with AI assistance — but they still need to interpret the results, understand the client's situation, and make judgment calls. The “all jobs disappear” narrative makes for gripping content but ignores how technology adoption actually plays out in organisations.

The doom framing lacks nuance. Gawdat presents a binary: either we solve alignment perfectly and everything is fine, or we fail and civilisation is at risk. This skips over the vast middle ground where AI systems are powerful but bounded, useful but imperfect, transformative but manageable. Most technology falls into this middle ground. AI almost certainly will too.

What He Missed Entirely

The biggest gap in Gawdat's argument is the concept of AI as a tool amplifier rather than a human replacement. The most successful AI deployments right now aren't replacing people — they're making people dramatically more effective. A marketing team of three that can now produce the output of a team of ten. A solo developer shipping what used to require a five-person squad. A consultant who can research, analyse, and present in hours instead of weeks.

This “augmentation” path doesn't make for scary podcast clips, but it's what's actually happening in the majority of businesses adopting AI. And it leads to a very different set of implications than the replacement narrative suggests.

Gawdat also misses something more fundamental: most businesses are still struggling with basic AI adoption. While he's talking about superintelligence and existential risk, the average company is trying to figure out how to get their sales team to use ChatGPT consistently. The gap between frontier AI capabilities and actual business adoption is enormous, and it's not closing as fast as the technology is advancing.

The NerdSmith Take

Gawdat raises real concerns but wraps them in fear. The practical reality? AI is changing work, but it's changing it task by task, not job by job. The displacement is real but it's gradual and uneven, not the overnight extinction event the interview implies.

Your job isn't to panic — it's to learn which tasks AI handles well and which still need you. That's not a terrifying proposition. It's a practical one. And it's exactly the kind of work that makes you more valuable, not less.

Watch the interview if you want to. It's genuinely interesting. Just don't let it paralyse you. The best response to “AI is coming” has never been fear. It's competence.

Tool Review · 7 min read · March 2026

Claude vs ChatGPT for Real Business Tasks — An Honest Comparison

Every AI tool comparison you've read follows the same formula: list features, compare pricing, declare a winner. None of them tell you what actually matters — which tool performs better when you sit down to get real work done.

So we tested both. Not on benchmarks or party tricks, but on 10 tasks that business owners, consultants, and managers actually do every week. We used Claude (Anthropic) and ChatGPT (OpenAI), both on their latest available models via paid plans, with identical prompts for each task.
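For readers who want to replicate the setup, the method is simple to script: send each task's prompt, verbatim, to both models, save the outputs, and tally winners after review. Here's a minimal sketch of that harness — the task prompts, model stubs, and scoring scheme below are illustrative assumptions, not our exact setup (in a real run, the callables would wrap the Anthropic and OpenAI SDKs, e.g. `client.messages.create(...)` and `client.chat.completions.create(...)`).

```python
# Illustrative side-by-side test harness (assumed setup, not the exact one used).
TASKS = {
    "client_proposal": "Draft a consulting proposal for a mid-size "
                       "company implementing AI workflows.",
    "report_summary": "Summarise this 20-page market research report "
                      "in 300 words.",
}

def run_side_by_side(tasks, models):
    """Send each task's prompt, unchanged, to every model.

    models: dict mapping model name -> callable(prompt) -> response text.
    Returns {task_name: {model_name: response}}.
    """
    results = {}
    for task_name, prompt in tasks.items():
        results[task_name] = {name: ask(prompt) for name, ask in models.items()}
    return results

def tally(verdicts):
    """verdicts: {task: 'claude' | 'chatgpt' | 'tie'} -> winner counts."""
    counts = {}
    for winner in verdicts.values():
        counts[winner] = counts.get(winner, 0) + 1
    return counts

# Stubs stand in for real API calls so the sketch runs offline.
models = {
    "claude": lambda p: f"[claude draft for: {p[:30]}...]",
    "chatgpt": lambda p: f"[chatgpt draft for: {p[:30]}...]",
}
outputs = run_side_by_side(TASKS, models)
```

The point of the harness is discipline: identical prompts, no per-model tweaking, and verdicts recorded only after both outputs exist.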

Here's what happened.

The 10 Tasks — Results at a Glance

Task | Winner
1. Drafting a client proposal | Claude
2. Summarising a 20-page report | Tie
3. Writing marketing copy | ChatGPT
4. Analysing financial data | Claude
5. Customer email response | Tie
6. Creating a meeting agenda | Tie
7. Strategic brainstorming | Claude
8. Social media content | ChatGPT
9. Code assistance | Claude
10. Research synthesis | Claude

Final tally: Claude 5, ChatGPT 2, Tie 3.

Task-by-Task Breakdown

1. Client Proposal — Claude wins. We asked both to draft a consulting proposal for a mid-size company implementing AI workflows. Claude produced a structured, professional document with clear scope boundaries, assumptions, and pricing rationale. ChatGPT's version was competent but more generic — it read like a template. Claude's read like something you'd actually send.

2. Report Summarisation — Tie. Both handled a 20-page market research report well. Claude was slightly more concise. ChatGPT included a few more specific data points. Neither made errors. For this task, both tools are genuinely good enough.

3. Marketing Copy — ChatGPT wins. We asked for landing page copy for a B2B SaaS product. ChatGPT's output was more energetic, punchier, and had better hooks. Claude's version was polished but played it safer. For copy that needs to grab attention, ChatGPT has an edge — though you'll sometimes need to dial back the enthusiasm.

4. Financial Analysis — Claude wins. Given a profit-and-loss statement with some unusual line items, Claude flagged assumptions, noted anomalies, and was upfront about what the numbers might or might not mean. ChatGPT gave a competent analysis but didn't flag the same caveats. When accuracy matters more than speed, Claude's caution is a feature.

5. Customer Email — Tie. Both wrote professional, empathetic responses to a complaint email. Marginal differences in tone. Either would work.

6. Meeting Agenda — Tie. Both produced clean, logical agendas for a quarterly strategy review. This is a task where AI tools have been good for over a year. No meaningful difference.

7. Strategic Brainstorming — Claude wins. We asked both to brainstorm go-to-market strategies for an AI training company entering the Malaysian market. Claude explored second-order effects, challenged assumptions in the prompt, and offered a few non-obvious angles. ChatGPT produced a solid list but stayed surface-level. For thinking that goes deeper than “here are 10 ideas,” Claude is consistently better.

8. Social Media Content — ChatGPT wins. We asked for a week of LinkedIn posts about AI adoption. ChatGPT's posts were snappier, used better hooks, and had stronger calls to action. Claude's were more thoughtful but too long for the format. Short-form content that needs to stop the scroll? ChatGPT.

9. Code Assistance — Claude wins. Both wrote functional Python scripts for a data processing task. The difference was in explanation. Claude explained why it chose certain approaches, flagged edge cases, and offered alternatives. ChatGPT gave working code with less context. If you're learning or debugging, the explanation matters.

10. Research Synthesis — Claude wins. Given five conflicting articles about AI regulation, Claude produced a synthesis that acknowledged nuance, identified where the disagreements actually were, and avoided false balance. ChatGPT's summary was accurate but flatter — it listed points without analysing the tensions between them.

The Verdict

Claude for thinking. ChatGPT for doing. When the task requires depth, nuance, careful reasoning, or working through complexity, Claude consistently performs better. When the task requires energy, speed, punchy output, or high-volume content, ChatGPT has an edge.

Both are good. Both are worth the $20/month if you use them regularly. The best tool is the one you actually use — and the real power move is using both, each for what it does best.

Price Comparison

Plan | Claude | ChatGPT
Free | Limited daily messages | Limited GPT-4o access
Pro / Plus | $20/mo | $20/mo
Team / Business | $30/mo per seat | $25/mo per seat

The NerdSmith Recommendation

Start with the free tiers of both. Use them for a week on your actual work, not toy examples. You'll feel the difference quickly.

Use Claude when you need depth — proposals, analysis, strategy, anything where getting it right matters more than getting it fast. Use ChatGPT when you need speed and energy — social content, marketing copy, quick drafts you'll heavily edit anyway.

If you can only pay for one? It depends on your work. If you spend more time thinking and analysing, Claude. If you spend more time creating and publishing, ChatGPT. But honestly, $40/month for both is the best ROI in business software right now.

Hype Check · 7 min read · March 2026

AGI by 2027? What 50 AI Researchers Actually Say

Depending on who you listen to, artificial general intelligence — AI that can do anything a human can do, at the same level or better — is either arriving next year or never. The predictions span decades and the confidence levels are wildly inconsistent.

So we mapped what 50 of the most prominent voices in AI research and industry actually believe, based on their public statements, papers, and interviews. Not what headlines say they said. What they actually said.

The picture is more complicated, more honest, and more useful than any single prediction.

Camp 1: Before 2030

Sam Altman, Dario Amodei, Demis Hassabis, Jensen Huang, Elon Musk

This camp is dominated by people building AI systems, running AI companies, or selling AI hardware. That doesn't automatically disqualify their views, but it's important context.

Sam Altman has repeatedly suggested AGI could arrive by the late 2020s and that it might be “less of a big deal than people think.” Dario Amodei, CEO of Anthropic, has talked about “powerful AI” arriving within a few years that could transform science and medicine. Demis Hassabis of Google DeepMind has suggested we could see AGI-level capabilities by 2030. Jensen Huang, CEO of NVIDIA, has placed it around 2028-2029.

The common thread: these predictions tend to come with caveats that get stripped out by headlines. Amodei talks about “powerful AI” rather than AGI specifically. Hassabis qualifies what capabilities he means. Altman has shifted his timeline multiple times. They're saying “something transformative is close,” not necessarily “full human-level AGI in three years.”

Camp 2: 2030 to 2050

Andrew Ng, Sundar Pichai, Fei-Fei Li, Rodney Brooks

This group believes AI will be profoundly significant but sees the timeline as longer and the progress as more gradual than Camp 1 suggests.

Andrew Ng, co-founder of Google Brain and Coursera, has consistently argued that while AI is powerful, the path to AGI is longer than hype cycles suggest. He focuses on practical AI deployment rather than AGI speculation. Sundar Pichai speaks about AI as “the most profound technology humanity will work on” but has been careful about timelines. Fei-Fei Li emphasises the gap between narrow AI excellence and general intelligence. Rodney Brooks, robotics pioneer and former director of MIT CSAIL, has been publicly tracking failed AI predictions for years and consistently argues the field overpromises on timelines.

Their position: the technology is real, the impact will be enormous, but the jump from “very good at specific tasks” to “generally intelligent” is harder than scaling up current approaches.

Camp 3: Much Later, or Never (As Currently Defined)

Yann LeCun, Gary Marcus, François Chollet, Melanie Mitchell

This is the camp that gets the least media attention because “we don't know and current approaches probably aren't enough” doesn't make a good headline.

Yann LeCun, chief AI scientist at Meta and a Turing Award winner, has argued publicly and repeatedly that large language models are a dead end for AGI. He believes we need fundamental new architectures — specifically what he calls “world models” — and that we're missing key pieces of the puzzle. Gary Marcus, cognitive scientist and persistent AI critic, has placed substantial bets that AGI won't arrive by various proposed deadlines. François Chollet, creator of the Keras deep learning framework, has argued that current AI systems are not showing signs of general intelligence and that benchmarks are being gamed. Melanie Mitchell, complexity researcher at the Santa Fe Institute, has written extensively about how AI systems appear more intelligent than they are.

Their argument isn't that AI is unimpressive. It's that the jump from pattern matching at scale to genuine understanding and reasoning may require breakthroughs we haven't had yet — and that calling current systems “almost AGI” misunderstands what intelligence actually is.

Camp 4: The Timeline Doesn't Matter

Geoffrey Hinton, Eliezer Yudkowsky, Stuart Russell, Max Tegmark

This group sidesteps the “when” question entirely and focuses on “what happens if.” Geoffrey Hinton, the “Godfather of AI,” left Google specifically to speak freely about risks. His concern isn't about a specific date — it's that we don't have reliable methods to control systems that are smarter than us, and we should figure that out before we need to.

Eliezer Yudkowsky, who has been writing about AI risk since the early 2000s, argues that the specific timeline matters less than the fact that we have no proven alignment solution. Stuart Russell, professor at Berkeley and co-author of the standard AI textbook, has argued for fundamentally rethinking how we build AI systems to be inherently safe. Max Tegmark focuses on existential risk governance.

Their position: whether AGI arrives in 2028 or 2058, the safety work needs to happen now, and the current approach of “build first, align later” is reckless.

The Honest Picture

Nobody knows. That's the only honest answer. The people building AI say “soon” — but they have financial incentives to maintain urgency and excitement. The people studying intelligence say “maybe never with current methods” — but paradigm shifts can happen fast. The people worried about safety say “the timeline doesn't matter” — and they may have the strongest point of all.

What is clear: expert opinion is genuinely divided, and anyone presenting a single confident timeline is selling something — whether that's a product, a book, a worldview, or attention.

What This Means for You

If you're a business owner, a manager, or a professional trying to figure out how AI affects your career, the AGI timeline is irrelevant to your next 12 months.

Here's what is relevant: AI tools available today can save you 5 to 15 hours per week if you know how to use them. They can handle first drafts, data analysis, research synthesis, customer communications, and routine coding tasks. They cannot replace your judgment, your relationships, your domain expertise, or your ability to navigate ambiguity.

Whether a superintelligence arrives in 2027 or 2047, the smart move right now is the same: learn to work with the AI tools that exist, understand their limitations, and focus on the skills that remain distinctly human — critical thinking, leadership, creativity under constraints, and the ability to ask the right questions.

The NerdSmith Take

We're agnostic on AGI timelines. Not because the question doesn't matter — it does, deeply, for policy and safety research. But because for the people we work with, the practical question is simpler: how do I use AI effectively right now?

The researchers will keep debating. The CEOs will keep predicting. The safety people will keep warning. All of that is important work.

Your work is different. Your work is to learn the tools, understand their limits, and get better at the things they can't do. That's the bet that pays off regardless of which camp turns out to be right.

Stop watching. Start building.

Live workshops where you work on your own business problems — not hypothetical case studies.

Join the Founding Cohort

Get the Radar in your inbox

What actually mattered in AI, every week.

Subscribe to the Radar