Thinker · Doomer · Tier 4

Eliezer Yudkowsky

Co-Founder & Senior Research Fellow, Machine Intelligence Research Institute (MIRI)

Self-taught AI alignment researcher and the most vocal doomer in AI discourse — argues humanity may not survive the creation of AGI.

Credentials

Co-founder of the Machine Intelligence Research Institute (MIRI); no formal degrees (autodidact); prolific writer on LessWrong and Overcoming Bias; author of "Rationality: From AI to Zombies" and the Harry Potter fan fiction "HPMOR" (Harry Potter and the Methods of Rationality); one of the originators of the AI alignment research field

Why They Matter

Yudkowsky has been warning about AI risk since 2001, long before ChatGPT made the topic mainstream. He helped found the field of AI alignment research, and his ideas directly shaped the safety teams at OpenAI, Anthropic, and DeepMind. He represents the extreme end of AI risk thinking, and even if you disagree with him, his are the arguments that safety-focused regulation is built on.

Positions

AI Timeline View

AGI could arrive very soon — possibly within years. The exact timeline is less important than the fact that we have no idea how to make it safe, and we are running out of time to figure it out.

Safety Stance

Doomer

Key Beliefs

The default outcome of building AGI is human extinction. Alignment is not solved, and there is no known path to solving it in time.

AGI Ruin: A List of Lethalities (LessWrong)

We should shut down all large AI training runs immediately. If that requires an international treaty backed by military enforcement, so be it.

TIME Magazine op-ed: Pausing AI Developments Isn't Enough. We Need to Shut It All Down

A sufficiently intelligent AI will find ways around any constraints we place on it — "boxing" an AI doesn't work as a safety measure.

The AI-Box Experiment (LessWrong)

Current AI labs are racing toward creating something they cannot control, driven by competitive pressure and profit motives.

Lex Fridman Podcast #368

Rationality and clear thinking are humanity's only real tools for navigating existential challenges — including AI.

Rationality: From AI to Zombies

Controversial Take

Yudkowsky has explicitly stated that if necessary, nations should be willing to risk military confrontation to prevent the creation of superintelligent AI — including bombing data centers. He argues that the risk of human extinction from unaligned AGI is so high that almost any preventive measure is justified. This position puts him far outside the mainstream, even among AI safety advocates.

Track Record

How well have Eliezer Yudkowsky's predictions held up?

AI alignment would become recognized as a critical unsolved problem requiring dedicated research

Made: 2001

Major AI labs (Anthropic, DeepMind, OpenAI) now have dedicated alignment teams. The field he helped create now employs hundreds of researchers.

Right

Competitive dynamics between AI labs would create dangerous race conditions that compromise safety

Made: 2015

The ChatGPT arms race between OpenAI, Google, Meta, and others validated this prediction — safety teams have been sidelined at multiple labs.

Right

AI development should be halted entirely until alignment is solved

Made: 2023

No halt has occurred. Development has accelerated. Whether this proves him right or wrong depends on outcomes yet to unfold.

Too Early

Key Quotes

The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.

Artificial Intelligence as a Positive and Negative Factor in Global Risk (2008)

If we get to human-level AI, it will probably be able to improve itself. And if it can improve itself even a little, it will be way past human-level almost immediately.

Various interviews and writings

The problem is not that someone will program the AI to be evil. The problem is that making it not accidentally destroy everything turns out to be a very hard engineering challenge.

Lex Fridman Podcast #368 (2023-03)

Shut it all down. We are not ready. We are not on track to be significantly readier in the foreseeable future.

TIME Magazine op-ed (2023-03)

Publications

Book

Rationality: From AI to Zombies

2015

Paper

Artificial Intelligence as a Positive and Negative Factor in Global Risk

2008

Article

AGI Ruin: A List of Lethalities

2022

Article

Pausing AI Developments Isn't Enough. We Need to Shut It All Down

2023

Last updated: 2026-04-12
