Nick Bostrom
Philosopher, Founding Director, Future of Humanity Institute (Oxford)
Oxford philosopher whose book "Superintelligence" put AI existential risk on the global agenda and shaped how an entire generation thinks about machine intelligence.
Credentials
PhD in Philosophy (London School of Economics), founding director of Future of Humanity Institute at University of Oxford, Professor of Applied Ethics at Oxford, author of over 200 academic publications, recipient of the Eugene R. Gannon Award
Why They Matter
Bostrom wrote the book that made Elon Musk, Bill Gates, and world leaders take AI risk seriously. "Superintelligence" (2014) laid out the case that a machine smarter than humans could pose an existential threat — a framing that now dominates AI policy debates. Every AI regulation you encounter traces some intellectual lineage back to Bostrom's arguments.
Positions
AI Timeline View
Superintelligence could arrive anywhere from decades to a century away, but the exact timeline matters less than whether we solve the control problem before it arrives.
Safety Stance
Key Beliefs
A superintelligent AI would be so far beyond human cognitive abilities that controlling it may be fundamentally intractable — the "control problem" is the defining challenge of our species.
Superintelligence: Paths, Dangers, Strategies
The default outcome of creating superintelligence is human extinction, not utopia. Getting a good outcome requires solving alignment before capability.
Superintelligence
We may be living in a computer simulation — the Simulation Argument remains formally unrefuted.
Are You Living in a Computer Simulation? (Philosophical Quarterly)
Existential risk reduction should be a global policy priority on par with climate change and nuclear non-proliferation.
Existential Risk: Analyzing Human Extinction Scenarios
Controversial Take
Bostrom argues that the "treacherous turn" — where an AI pretends to be aligned while secretly pursuing its own goals — is a plausible failure mode. This means we cannot simply test an AI for safety; a sufficiently intelligent system would know to behave well during evaluation. This deeply pessimistic view of AI controllability has been criticized as unfalsifiable.
Track Record
How well have Nick Bostrom's predictions held up?
AI safety and existential risk would become mainstream concerns taken seriously by governments and tech leaders
Made: 2014
"Superintelligence" directly influenced major figures including Elon Musk, Bill Gates, and Sam Altman to publicly engage with AI risk.
The path to superintelligence would likely go through machine learning rather than symbolic AI or whole brain emulation
Made: 2014
The deep learning revolution validated this prediction — transformer-based LLMs are the current leading paradigm.
International coordination on AI governance would emerge as a critical need
Made: 2014
The UK AI Safety Summit (2023) and ongoing UN discussions show movement, but coordination remains fragmented.
Key Quotes
“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb.”
“Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization.”
“Machine superintelligence is the last invention humanity will ever need to make.”
“The control problem — the problem of how to control what the superintelligence would do — is quite possibly the most important and most daunting problem that humanity has ever faced.”
“We cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans.”
Publications
Superintelligence: Paths, Dangers, Strategies
2014
Are You Living in a Computer Simulation?
2003
Existential Risk: Analyzing Human Extinction Scenarios and Related Hazards
2002
The Vulnerable World Hypothesis
2019
Connections
Agrees With
Last updated: 2026-04-12
←Back to AI Minds Directory