Connor Leahy
CEO, Conjecture
Self-taught AI researcher turned safety startup CEO, and one of the most outspoken young voices arguing that AI development is racing toward catastrophe.
Credentials
CEO & Co-founder of Conjecture (AI safety startup), Co-founder of EleutherAI (open-source AI research group that built GPT-NeoX and GPT-J), self-taught ML researcher, prominent AI safety advocate and public speaker
Why They Matter
Leahy represents a new generation of AI insiders who are genuinely alarmed. He co-founded EleutherAI, which proved that open-source groups could build GPT-class models — then pivoted to safety when he saw what those models could become. For business leaders, his perspective is a useful counterweight to Silicon Valley hype: he argues that AI is powerful AND dangerous, and that businesses should prepare for a world where AI regulation tightens significantly.
Positions
AI Timeline View
We may have only a few years before AI systems become dangerously capable. The window for getting safety right is closing fast.
Safety Stance
Openly identifies as a "doomer": frontier AI development should slow down or pause until alignment is solved, because he believes the default outcome of building superintelligence is catastrophic.
Key Beliefs
AI development is a race to the edge of a cliff. We need to slow down or stop before we fall off.
Multiple public talks and interviews, 2023-2024
Current AI labs are engaged in a dangerous arms race where competitive pressure overrides safety concerns.
AI alignment is not just a technical problem — it's a governance and coordination problem. We need international treaties.
UK Parliament testimony and media appearances
Open-sourcing powerful AI models without safety guarantees is irresponsible, even though he helped build some of them (EleutherAI).
Interviews discussing his evolution from open-source AI to safety-first approach
Controversial Take
Openly calls himself a "doomer" and argues that there is a meaningful probability of human extinction from AI within our lifetimes. Claims the AI safety community is not alarmed enough, and that even other safety advocates underestimate the risk. His shift from building open-source AI (EleutherAI) to warning against it gives his arguments particular weight.
Track Record
How well have Connor Leahy's predictions held up?
Open-source groups could replicate large language models without big tech budgets
Made: 2020
EleutherAI built GPT-J (6B) and GPT-NeoX (20B), showing that capable large language models were no longer exclusive to deep-pocketed labs. Open-weight models such as LLaMA and Mistral followed.
AI capabilities will advance faster than safety measures, creating a growing gap
Made: 2022
Capabilities have surged (GPT-4, Claude 3, Gemini) while alignment remains largely unsolved. The safety gap is widely acknowledged.
Governments will begin seriously regulating AI within 2-3 years
Made: 2023
The EU AI Act passed. The UK held the AI Safety Summit. But regulation remains fragmented and enforcement is weak.
Key Quotes
“We are building a god. We don't know how it works, we can't control it, and we're doing it as fast as possible. This is insane.”
“I helped build open-source AI models. Now I spend my time trying to make sure they don't destroy the world. That should tell you something.”
“The default outcome of building superintelligence is not a good one. The default outcome is extinction.”
“Saying ‘we'll figure out safety later’ is like saying ‘we'll figure out the brakes after we've already driven off the cliff.’”
Connections
Agrees With
Disagrees With
Yann LeCun
on whether AI existential risk is a real concern; LeCun dismisses it, Leahy considers it the defining issue
Andrew Ng
on whether AI safety regulation is regulatory capture or genuine necessity
Mark Zuckerberg
on whether open-sourcing powerful AI models is responsible
Sam Altman
on whether the pace of AI development at frontier labs is reckless
Last updated: 2026-03-26