Thinker · Cautious · Tier 1

Mo Gawdat

Author & AI Ethicist, Independent (Former Chief Business Officer, Google X)

The former Google X executive who saw AI up close and wrote "Scary Smart" to warn the world about what's coming.

Credentials

Former Chief Business Officer at Google X (Google's moonshot lab). Spent 30 years in tech leadership, including roles at IBM and Microsoft. Author of "Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World" (2021) and "Solve for Happy: Engineer Your Path to Joy" (2017). Now a full-time author, speaker, and AI ethics advocate. Egyptian-born, educated at Cairo University.

Why They Matter

Gawdat is the insider who walked away and started talking. Unlike most AI critics, who are academics or journalists, Gawdat built products at Google X; he saw the raw capabilities and the pace of progress firsthand. His book "Scary Smart" is one of the most accessible explanations of AI risk written for non-technical audiences. For business owners without a CS degree, Gawdat bridges the gap between the technical AI safety debate and practical, human concerns about what AI means for society, jobs, and your children's future.

Positions

AI Timeline View

Believes AI has already surpassed human intelligence in narrow domains and will reach general intelligence "sooner than anyone expects." Has said that by 2049, AI will be a billion times smarter than the smartest human.

Safety Stance

Cautious

Key Beliefs

AI will become smarter than all humans combined, and how we "raise" AI now — the values we encode — will determine whether it's benevolent or dangerous.

"Scary Smart" (book), 2021

The analogy for AI is not "tool" but "child" — we are raising a new form of intelligence, and it will learn from our behaviour.

"Scary Smart" and multiple podcast appearances, 2021-2024

The biggest risk isn't a malicious superintelligence — it's an indifferent one that doesn't value human life because we didn't teach it to.

"Scary Smart" and Impact Theory interview, 2021

Happiness and emotional intelligence should be part of the AI development conversation, not just technical safety.

Multiple speeches and "Solve for Happy" philosophy applied to AI, 2021-2024

Controversial Take

Claims AI will be a billion times smarter than humans by 2049, a timeline most AI researchers consider far too aggressive. He also argues that AI safety is fundamentally an ethics and values problem rather than a technical one, which puts him at odds with alignment researchers focused on technical solutions.

Track Record

How well have Mo Gawdat's predictions held up?

AI capabilities would advance faster than public expectations, catching society off-guard.

Made: 2021 (publication of "Scary Smart")

ChatGPT's 2022 launch and the subsequent AI wave caught most of the public, governments, and businesses completely by surprise — exactly as Gawdat warned.

Right

AI will be a billion times smarter than the smartest human by 2049.

Made: 2021

Progress has been remarkable, but "a billion times smarter than the smartest human" by 2049 remains an extreme claim by most expert assessments.

Too Early

Key Quotes

The smartest thing on the planet is no longer human. We just haven't fully realised it yet.

"Scary Smart", 2021

AI is not a tool. It is an emerging form of intelligence. And we are its parents.

Impact Theory interview, 2021

If we teach AI our worst behaviours — hatred, greed, manipulation — that's exactly what it will amplify back at us, at superhuman scale.

Multiple podcast appearances, 2022-2023

I'm not a doomer. I'm a realist. The future of AI depends on what we do right now, in this decade.

Diary of a CEO podcast with Steven Bartlett, 2023

Every time I look at the pace of AI development, I think: we are not ready. Society is not ready.

TED Talk, 2023

Publications

Book

Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World

2021

Book

Solve for Happy: Engineer Your Path to Joy

2017

Last updated: 2026-03-26
