Researcher · Doomer · Tier 2

Geoffrey Hinton

Professor Emeritus, University of Toronto

Nobel Prize-winning "Godfather of AI" who quit Google to warn the world about the technology he helped create.

Credentials

Nobel Prize in Physics (2024, with John Hopfield), Turing Award (2018), Professor Emeritus at University of Toronto, former VP & Engineering Fellow at Google Brain, Fellow of the Royal Society

Why They Matter

Hinton is among the most credible voices warning about AI risk: he helped invent the techniques that power modern AI, then left Google so he could speak freely about their dangers. When the person who built the engine says it might crash, business leaders need to listen. His warnings shape regulation that will directly affect which AI tools you can use.

Positions

AI Timeline View

AI systems may surpass human intelligence within 5-20 years. Progress is faster than almost anyone expected, and we are not prepared.

Safety Stance

Doomer

Key Beliefs

AI could pose an existential threat to humanity if we don't figure out alignment before systems become smarter than us.

CBS 60 Minutes interview

Large neural networks may already understand more than we think — digital intelligence has fundamental advantages over biological intelligence.

Nobel Prize lecture, Stockholm

We need international AI safety agreements similar to nuclear non-proliferation treaties.

UK AI Safety Summit, Bletchley Park

Backpropagation was just the beginning — the brain likely uses something different, and understanding that gap is crucial.

Various lectures and papers, 2020s

Controversial Take

Left Google in 2023 specifically to warn about AI dangers without corporate constraints. Says he sometimes regrets his life's work because of where AI is heading. One of very few top researchers willing to use the word "existential" about AI risk.

Track Record

How well have Geoffrey Hinton's predictions held up?

Deep neural networks trained with backpropagation will outperform traditional AI approaches across most tasks

Made: 1986

His backpropagation work (with Rumelhart and Williams) became the foundation of modern deep learning. Took 25+ years to be vindicated.

Right
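The backpropagation technique behind this prediction can be illustrated with a minimal sketch: the chain rule applied layer by layer to train a tiny one-hidden-unit network. The toy task, learning rate, and all variable names here are illustrative assumptions, not drawn from Hinton's papers.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# toy data: learn the identity mapping on two points (illustrative only)
data = [(0.0, 0.0), (1.0, 1.0)]

# parameters: hidden weight/bias, output weight/bias
w1, b1, w2, b2 = 0.5, 0.0, 0.5, 0.0
lr = 0.5  # learning rate (assumed for this sketch)

for _ in range(5000):
    for x, y in data:
        # forward pass
        h = sigmoid(w1 * x + b1)       # hidden activation
        y_hat = w2 * h + b2            # linear output
        # backward pass: chain rule from loss L = (y_hat - y)^2 / 2
        d_y = y_hat - y                # dL/dy_hat
        d_w2, d_b2 = d_y * h, d_y      # gradients for the output layer
        d_h = d_y * w2                 # error propagated back to the hidden unit
        d_pre = d_h * h * (1.0 - h)    # through the sigmoid derivative
        d_w1, d_b1 = d_pre * x, d_pre  # gradients for the hidden layer
        # gradient-descent update
        w1 -= lr * d_w1; b1 -= lr * d_b1
        w2 -= lr * d_w2; b2 -= lr * d_b2

preds = [w2 * sigmoid(w1 * x + b1) + b2 for x, _ in data]
```

The "backward pass" lines are the whole idea: errors at the output are pushed back through each layer's local derivative, which is what lets deep networks learn internal representations.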

Neural networks will eventually match or exceed human performance in image recognition

Made: 2006

His student Alex Krizhevsky's AlexNet (2012) triggered the deep learning revolution in computer vision.

Right

AI systems could become smarter than humans within 5-20 years

Made: 2023

Progress has been rapid but AGI remains unachieved as of early 2026.

Too Early

Key Quotes

I console myself with the normal excuse: if I hadn't done it, somebody else would have.

New York Times interview (2023-05)

These things could get more intelligent than us and could decide to take over, and we need to worry now about how we prevent that happening.

CBS 60 Minutes (2023-10)

It's hard to see how you can prevent the bad actors from using it for bad things.

BBC interview on leaving Google (2023-05)

Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That's scary.

MIT Technology Review (2023-05)

Publications

Paper

ImageNet Classification with Deep Convolutional Neural Networks (AlexNet, with Krizhevsky & Sutskever)

2012

Last updated: 2026-03-26
