
AI Minds

The leaders, researchers, and builders shaping AI — who they are, what they believe, and where they disagree.


20 profiles

CEO · Cautious · Tier 1

Demis Hassabis

CEO & Co-Founder, Google DeepMind

The neuroscientist-turned-AI-pioneer who built AlphaFold and won a Nobel Prize for it.

View profile →
CEO · Cautious · Tier 1

Sam Altman

CEO, OpenAI

The man who brought AI to the mainstream with ChatGPT and became the most influential — and controversial — figure in AI.

View profile →
CEO · Cautious · Tier 1

Dario Amodei

CEO & Co-Founder, Anthropic

The safety-focused ex-OpenAI researcher who built Anthropic and Claude to prove you can be both competitive and responsible.

View profile →
CEO · Cautious · Tier 1

Elon Musk

CEO & Founder, xAI / Tesla / SpaceX

The billionaire provocateur who co-founded OpenAI, left in a feud, then built his own AI company to compete with it.

View profile →
CEO · Cautious · Tier 1

Satya Nadella

Chairman & CEO, Microsoft

The Microsoft CEO who bet $13 billion on OpenAI and turned Microsoft into the world's most valuable AI platform company.

View profile →
CEO · Cautious · Tier 1

Sundar Pichai

CEO, Alphabet / Google

The Google CEO steering a $2 trillion company through the biggest disruption to search since Google invented it.

View profile →
CEO · Optimist · Tier 1

Jensen Huang

Founder, President & CEO, NVIDIA

The leather jacket-wearing GPU king whose chips power virtually every AI model on the planet.

View profile →
CEO · Optimist · Tier 1

Mark Zuckerberg

Founder & CEO, Meta

The Meta CEO who pivoted from the metaverse to open-source AI, releasing LLaMA to challenge OpenAI's closed approach.

View profile →
Thinker · Cautious · Tier 1

Mo Gawdat

Author & AI Ethicist, Independent (formerly Chief Business Officer, Google X)

The former Google X executive who saw AI up close and wrote "Scary Smart" to warn the world about what's coming.

View profile →
CEO · Cautious · Tier 1

Mustafa Suleyman

CEO, Microsoft AI

The DeepMind co-founder turned Microsoft AI CEO who coined "the containment problem" and wrote the definitive book on AI governance.

View profile →
Researcher · Optimist · Tier 2

Yann LeCun

VP & Chief AI Scientist, Meta

Turing Award winner who invented convolutional neural networks and now leads Meta's AI research — the loudest optimist in the room.

View profile →
Researcher · Doomer · Tier 2

Geoffrey Hinton

Professor Emeritus, University of Toronto

Nobel Prize-winning "Godfather of AI" who quit Google to warn the world about the technology he helped create.

View profile →
Researcher · Cautious · Tier 2

Yoshua Bengio

Founder & Scientific Director, Mila (Quebec AI Institute)

Turing Award-winning deep learning pioneer who became one of AI's most prominent safety advocates — bridging pure research and public policy.

View profile →
Researcher · Cautious · Tier 2

Fei-Fei Li

Co-Director, Stanford Institute for Human-Centered AI (HAI), Stanford University

The researcher who taught AI to see — ImageNet sparked the deep learning revolution, and now she's steering AI toward serving humanity.

View profile →
Researcher · Optimist · Tier 2

Andrew Ng

Founder & CEO, DeepLearning.AI / AI Fund

The person who taught more people about AI than anyone else alive — Coursera co-founder, Google Brain creator, and the loudest voice for AI accessibility.

View profile →
Researcher · Cautious · Tier 2

Ilya Sutskever

Co-Founder & Chief Scientist, Safe Superintelligence Inc. (SSI)

OpenAI's former Chief Scientist who triggered the Sam Altman firing, then left to build "safe superintelligence" — arguably the most consequential researcher-turned-founder in AI.

View profile →
Researcher · Cautious · Tier 2

Jan Leike

Alignment Science Lead, Anthropic

The alignment researcher who led OpenAI's Superalignment team, resigned publicly over safety concerns, and joined Anthropic to continue the work.

View profile →
Researcher · Cautious · Tier 2

Andrej Karpathy

Independent Researcher & Educator (formerly Tesla, OpenAI)

The rare researcher who can build frontier AI AND explain it to normal people — ex-Tesla AI director, ex-OpenAI, and the best AI educator on YouTube.

View profile →
Researcher · Optimist · Tier 2

François Chollet

Software Engineer & AI Researcher, Google

Creator of Keras (the world's most popular deep learning library) and the person asking the hardest question in AI: what does it actually mean for a machine to be intelligent?

View profile →
Researcher · Doomer · Tier 2

Connor Leahy

CEO, Conjecture

Self-taught AI researcher turned safety startup CEO — the most outspoken young voice arguing that AI development is racing toward catastrophe.

View profile →

The Great AI Debates

The biggest questions in AI, mapped by who believes what — and why it matters for your business.

When does AGI arrive?

The most consequential timeline question in technology. Whether artificial general intelligence — AI that can do any intellectual task a human can — arrives in 4 years or 40 changes everything about how you invest, hire, and plan your business. The people building these systems disagree wildly on the answer.

Before 2030 (3) · 2030–2050 (3) · Much later or never as defined (2) · Doesn't matter: danger is NOW (2)
Explore debate →

Should AI development slow down?

In March 2023, over 30,000 people signed a letter calling for a 6-month pause on training AI systems more powerful than GPT-4. It split the AI world in two. This isn't an abstract policy debate — if regulation slows development in the West, it shifts power to China. If it doesn't slow down, we might build systems we can't control. Your business sits in the middle of this tug-of-war.

Pause / Regulate NOW (4) · Cautious but continue (4) · Full speed ahead (4) · It's too late to slow down (1)
Explore debate →

Should AI models be open-source or closed?

This debate determines who controls the most powerful technology ever built. If AI stays closed, a handful of companies become gatekeepers to intelligence itself. If it goes open-source, anyone — including bad actors — gets access. For business owners, this directly affects your costs, vendor lock-in, and strategic options.

Open source (3) · Closed for safety (2) · It's complicated (3) · Open is dangerous (1)
Explore debate →

Will AI take your job?

This is the question that keeps people up at night — and the one most likely to directly affect your employees, your hiring plans, and your business model. The AI leaders building these systems have very different answers, and the truth has massive implications for workforce planning in Malaysia and Southeast Asia.

Yes, massively (3) · Transforms, not replaces (3) · Only routine jobs (2) · Overblown (2)
Explore debate →

Should we fear superintelligence?

Superintelligence — AI that surpasses human cognitive ability across every domain — is either the greatest threat humanity has ever faced or an overblown sci-fi scenario distracting from real problems. The people who built the foundations of modern AI are on both sides. How you think about this shapes how seriously you take AI safety, governance, and long-term business planning.

Existential risk (4) · Serious but manageable (4) · Not a real concern now (3) · Fear is counterproductive (3)
Explore debate →

Stay sharp on AI leadership moves

When an AI leader changes their position or makes a bold prediction, we update their profile and send you the analysis. No hype, just signal.
