Ilya Sutskever
Co-Founder & Chief Scientist, Safe Superintelligence Inc. (SSI)
OpenAI's former Chief Scientist, who helped trigger the attempted firing of Sam Altman and then left to build "safe superintelligence" — arguably the most consequential researcher-turned-founder in AI.
Credentials
Co-founder of Safe Superintelligence Inc. (SSI), co-founder & former Chief Scientist of OpenAI, co-creator of AlexNet (with Krizhevsky and Hinton), key contributor to the GPT series, former researcher at Google Brain
Why They Matter
Sutskever is the technical mind behind OpenAI's biggest breakthroughs — he co-invented AlexNet, helped build GPT, and shaped the scaling laws strategy. Then he tried to fire Sam Altman over safety concerns, failed, and left to build SSI, a company focused solely on safe superintelligence. His moves signal that even the people building frontier AI think it's dangerous enough to warrant a completely new approach.
Positions
AI Timeline View
Superintelligence is the next step after AGI, and it may arrive sooner than most expect. Building it safely is the defining challenge of our time.
Key Beliefs
Superintelligence is inevitable and will arrive relatively soon. The only question is whether we build it safely.
Scaling is the key insight — larger models with more data and compute reliably produce smarter systems.
OpenAI scaling papers and internal strategy (widely reported)
Safety and capability research cannot coexist at one company under commercial pressure — they need to be separated.
SSI founding thesis (implicit in leaving OpenAI)
Current AI systems may have a deeper understanding of the world than we give them credit for.
NeurIPS talk, 2023
Controversial Take
Was a key figure in the November 2023 attempt to remove Sam Altman as OpenAI CEO, reportedly over disagreements about safety and the pace of commercialization. The boardroom drama became the biggest story in tech that year and exposed deep tensions between safety and commercial interests at AI labs.
Track Record
How well have Ilya Sutskever's predictions held up?
Deep convolutional neural networks will dramatically outperform traditional computer vision (AlexNet)
Made: 2012
AlexNet's ImageNet win in 2012 is widely considered the moment that launched the modern AI era.
Scaling up language models with more data and compute will lead to emergent capabilities
Made: 2018
The GPT series bore this out comprehensively: GPT-3 and GPT-4 showed capabilities that surprised even their creators.
AI safety concerns at OpenAI would require structural intervention (the Altman firing)
Made: 2023
The board coup failed within days. Altman returned, safety-focused board members left, and OpenAI accelerated its commercial trajectory.
Key Quotes
“If you really believe in the possibility of superintelligence, then you should also believe that it might not be controllable.”
“The thing that I'm most excited about is also the thing that I'm most worried about.”
“Sequence to sequence learning will be a game-changer for natural language processing.”
Publications
ImageNet Classification with Deep Convolutional Neural Networks (AlexNet, with Krizhevsky & Hinton)
2012
Language Models are Unsupervised Multitask Learners (GPT-2)
2019
Connections
Disagrees With
Sam Altman
On whether OpenAI's commercial direction is compatible with its safety mission — the fundamental disagreement that led to the board crisis
Yann LeCun
On whether scaling current architectures can lead to superintelligence — LeCun says no
Andrew Ng
On whether AI safety concerns are overblown — Sutskever's entire new company is built on the premise that they are not
Last updated: 2026-03-26