Should we fear superintelligence?
Superintelligence — AI that surpasses human cognitive ability across every domain — is either the greatest threat humanity has ever faced or an overblown sci-fi scenario distracting from real problems. The people who built the foundations of modern AI are on both sides. How you think about this shapes how seriously you take AI safety, governance, and long-term business planning.
Where They Stand
Existential risk
Geoffrey Hinton has said he estimates a 10-20% probability that AI leads to human extinction, and he considers that probability terrifyingly high for a technology we're actively building. His reasoning: once an AI system is more intelligent than humans, we have no reliable mechanism to ensure it remains aligned with human values — and it will be better at manipulating us than we are at controlling it. Yoshua Bengio, the third member of the "Godfathers of Deep Learning" trio alongside Hinton and Yann LeCun, published a position paper arguing that advanced AI systems could pursue goals misaligned with humanity's, and that the scientific community has a moral obligation to treat this risk seriously. Connor Leahy at Conjecture frames it in stark terms: we are building a "second intelligent species" without understanding alignment, and the default outcome of creating something smarter than you is that it gets what it wants, not what you want. Jan Leike co-led OpenAI's superalignment team and resigned in May 2024, stating publicly that at OpenAI "safety culture and processes have taken a backseat to shiny products" — he then joined Anthropic, arguing that alignment is among the most important unsolved problems in the world. This camp believes the risk is not certain but is high enough (even at 5-10%) that it warrants Manhattan Project-scale investment in alignment research.
Serious but manageable
Dario Amodei has described superintelligence risk as a "genuinely serious concern" but believes it can be addressed through disciplined research — hence Anthropic's focus on Constitutional AI, interpretability, and responsible scaling policies that establish safety checkpoints at each capability level. Demis Hassabis calls for treating AI safety with the same rigour as other high-stakes engineering disciplines (aviation, nuclear), arguing that systematic testing and evaluation frameworks can keep pace with capabilities if properly resourced. Mustafa Suleyman in "The Coming Wave" argues containment of superintelligent AI is extremely difficult but not impossible — it requires unprecedented international cooperation, new institutions, and technical safeguards developed in parallel with capabilities. Ilya Sutskever, the former OpenAI Chief Scientist who co-led the superalignment initiative before departing to found Safe Superintelligence Inc. (SSI), believes superintelligence is coming and that building it safely is the defining technical challenge of our era — hence dedicating his entire new company to that single problem. This camp takes the risk very seriously but believes human ingenuity and proper institutional design can manage it.
Not a real concern now
Yann LeCun has been the most vocal critic of superintelligence fears among top-tier researchers. He argues that current AI architectures (including LLMs) are fundamentally incapable of the kind of general reasoning, planning, and world-modelling that superintelligence would require — the prerequisite technologies don't exist yet, and we have no idea how to build them, so debating how to control superintelligence today is premature. Andrew Ng, who famously likened worrying about evil superintelligence to worrying about overpopulation on Mars, has called the focus on existential risk a dangerous distraction from real, present-day AI harms like bias, surveillance, and misinformation. He's argued that the "AI extinction" narrative is being strategically promoted by large labs to justify regulatory frameworks that would entrench their monopoly positions. François Chollet's position is grounded in his work on measuring intelligence: he argues that current AI systems don't demonstrate anything close to general intelligence, that scaling LLMs won't get us there, and that the jump from "good at pattern matching" to "superintelligent" requires fundamental breakthroughs we haven't even conceptualised. Their collective message: focus on the real harms AI is causing today, not hypothetical scenarios from science fiction.
Fear is counterproductive
Sam Altman has acknowledged superintelligence risk but consistently frames fear as the wrong response. He argues that the best way to ensure AI benefits humanity is to build it ourselves — carefully, iteratively, with public input — rather than to cede the field to less responsible actors by slowing down out of fear. His framing: OpenAI exists specifically because the worst outcome is superintelligence built in secret by someone with bad values. Mark Zuckerberg dismisses what he calls the "doomer" narrative, arguing that concentrating AI development in a few closed labs (which fear-based regulation would cause) is itself the real danger. He advocates for open-source AI precisely because distributed development is harder to weaponise than centralised development. Jensen Huang takes a technologist's view: every transformative technology (electricity, nuclear, the internet) generated existential fears, and humanity navigated all of them. He argues that AI will create so much value — in healthcare, science, productivity — that fear-driven paralysis is the actual threat. This camp doesn't deny risks exist but argues that fear leads to worse outcomes than courage.
Patrick's Take
Here's the thing nobody on stage will tell you: the superintelligence debate is dominated by people in San Francisco talking to each other. It sounds completely alien when you're running a 20-person company in Shah Alam trying to figure out whether to use ChatGPT for your customer emails. And yet — it matters more than you think. Not because superintelligence is coming next Tuesday. It probably isn't. But because the FEAR of superintelligence is already shaping the regulations, corporate strategies, and investment flows that will determine which AI tools you have access to and at what price. When Hinton warns about extinction risk, governments listen. When governments listen, they regulate. When they regulate, some tools get restricted, some get more expensive, and the compliance burden falls hardest on small businesses that can't afford AI governance teams. My practical take for Malaysian business owners: you don't need to have an opinion on superintelligence. You need to have a plan for a world where AI regulation tightens significantly within 3-5 years. That means: adopt tools now while access is easy, build internal capability so you're not dependent on any single provider, and stay informed enough to see regulatory changes coming before they hit. The existential risk debate is for philosophers and policy makers. Your job is to make your business resilient regardless of which camp turns out to be right.
What This Means for Your Business
The superintelligence debate directly shapes AI regulation, which directly affects your business. The EU AI Act (already in force) categorises AI systems by risk level and imposes compliance requirements — and other countries, including ASEAN members, will follow with their own frameworks. If fear-driven regulation wins, expect licensing requirements, mandatory audits, and restrictions on which AI tools can be used in which industries. For Malaysian businesses, this means potential friction when using AI in healthcare, finance, education, and government contracting. Start documenting your AI usage now — which tools, which data, which decisions — because auditors will eventually ask; a simple register like the sketch below is enough to start. Also: the companies investing in AI safety (Anthropic, DeepMind) tend to produce more reliable, less hallucination-prone models. For business use cases where accuracy matters (legal, medical, financial), this safety focus actually benefits you as an end user.
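If you want to start that documentation today, it doesn't require enterprise software. Here is a minimal sketch in Python of a local usage register; the field names (tool, purpose, data categories, decision impact, human review) are my own guesses at what an auditor might ask about, not requirements drawn from the EU AI Act or any Malaysian guideline.

```python
# ai_usage_register.py — a minimal AI usage log, appended as JSON Lines.
# Field names are illustrative assumptions, not taken from any regulation.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    tool: str              # e.g. "ChatGPT", "Claude"
    purpose: str           # what business task the tool performed
    data_categories: str   # what kinds of data were sent (e.g. "customer emails")
    decision_impact: str   # whether/how the output influenced a decision
    human_reviewed: bool   # was a person in the loop before the output was used?
    timestamp: str = ""    # filled in automatically when logged

def log_usage(record: AIUsageRecord, path: str = "ai_usage_log.jsonl") -> None:
    """Append one usage record to a local JSONL file."""
    record.timestamp = datetime.now(timezone.utc).isoformat()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_usage(AIUsageRecord(
    tool="ChatGPT",
    purpose="Draft reply to customer complaint",
    data_categories="customer name, order details",
    decision_impact="refund suggested by the model, approved by manager",
    human_reviewed=True,
))
```

One file like this, filled in by whoever used the tool, is a perfectly respectable starting point for a 20-person company; you can graduate to proper governance tooling later.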
What to Actually Worry About
Don't lose sleep over robot apocalypse scenarios. Do pay attention to three things: First, AI regulation is coming to Malaysia — national AI governance guidelines have already been published, and binding rules will follow. The businesses that documented their AI usage from the start will have an easy time. The ones that didn't will scramble. Second, the concentration of AI power in a handful of US and Chinese companies is a real strategic risk for any business that depends on their APIs. Build provider diversity into your stack; the sketch below shows the basic pattern. Third, the near-term misuse risks (deepfakes, scam automation, misinformation) are already affecting Malaysian businesses — I've seen AI-generated scam emails targeting local companies that are nearly indistinguishable from real correspondence. Focus your safety budget on these concrete threats, not theoretical superintelligence.
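On the second point, provider diversity mostly means keeping a thin layer between your business logic and any single vendor's SDK. The sketch below is illustrative only: call_provider_a and call_provider_b are hypothetical stand-ins, not real API calls, but the fallback-chain pattern is what stops one provider's outage, price hike, or regulatory restriction from halting your operations.

```python
# provider_fallback.py — a sketch of provider diversity: try providers in order
# until one succeeds. Both provider functions are hypothetical stand-ins;
# replace their bodies with real SDK calls (OpenAI, Anthropic, a local model, etc.).
from typing import Callable

def call_provider_a(prompt: str) -> str:
    raise RuntimeError("provider A unavailable")   # stand-in for a real API call

def call_provider_b(prompt: str) -> str:
    return f"[provider B] reply to: {prompt}"      # stand-in for a real API call

def complete(prompt: str, providers: list[Callable[[str], str]]) -> str:
    """Return the first successful completion; fail only if every provider fails."""
    errors = []
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:
            errors.append(f"{provider.__name__}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))

# Provider A fails here, so the call transparently falls back to provider B.
print(complete("Summarise this customer email.", [call_provider_a, call_provider_b]))
```

Swapping or adding a vendor then means changing one function, not hunting down every place in your codebase that calls an AI service.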
Featured Minds in This Debate
Geoffrey Hinton
Professor Emeritus, University of Toronto
Yoshua Bengio
Founder & Scientific Director, Mila (Quebec AI Institute)
Connor Leahy
CEO, Conjecture
Jan Leike
Alignment Science Lead, Anthropic
Dario Amodei
CEO & Co-Founder, Anthropic
Demis Hassabis
CEO & Co-Founder, Google DeepMind
Mustafa Suleyman
CEO, Microsoft AI
Ilya Sutskever
Co-Founder & Chief Scientist, Safe Superintelligence Inc. (SSI)
Yann LeCun
VP & Chief AI Scientist, Meta
Andrew Ng
Founder & CEO, DeepLearning.AI / AI Fund
François Chollet
AI Researcher & Co-Founder, Ndea (creator of Keras)
Sam Altman
CEO, OpenAI
Mark Zuckerberg
Founder & CEO, Meta
Jensen Huang
Founder, President & CEO, NVIDIA
Last updated: 2026-03-26