AI Debate Map

Should AI development slow down?

In March 2023, an open letter calling for a six-month pause on training AI systems more powerful than GPT-4 was published, eventually gathering over 30,000 signatures. It split the AI world in two. This isn't an abstract policy debate: if regulation slows development in the West, it shifts power to China; if development doesn't slow, we may build systems we can't control. Your business sits in the middle of this tug-of-war.

Where They Stand

Pause / Regulate NOW

Two Turing Award winners — Hinton and Bengio — are in this camp, which gives it enormous credibility. Hinton left Google specifically so he could advocate for slowing down without corporate constraints. Bengio, who helped invent deep learning alongside Hinton and LeCun, has called for international governance frameworks similar to nuclear non-proliferation treaties. Elon Musk co-signed the famous 2023 pause letter (though critics note he simultaneously founded xAI and began building Grok, raising questions about sincerity). Connor Leahy of Conjecture argues that the current development pace makes alignment research impossible — we're building faster than we can understand. The core argument: the potential downside (existential risk, mass unemployment, authoritarian misuse) is so catastrophic that caution is the only rational response, even if it means slower economic gains.

Cautious but continue

This is the "responsible scaling" camp. Dario Amodei left OpenAI specifically because he felt safety wasn't being taken seriously enough and founded Anthropic around "Constitutional AI" and responsible scaling policies — but he's firmly against a full pause, arguing it would just hand the lead to less safety-conscious actors. Demis Hassabis at Google DeepMind advocates for continued research with robust safety testing at each capability threshold. Mustafa Suleyman (co-founder of DeepMind, now at Microsoft AI) wrote "The Coming Wave" arguing that containment is almost impossible but we must try — through licensing, auditing, and international cooperation. Jan Leike, who led OpenAI's superalignment team before resigning over safety disagreements and joining Anthropic, represents those who believe alignment work must happen inside the labs, not from the sidelines. Their position: stopping isn't realistic, but recklessness is inexcusable.

Full speed ahead

Sam Altman has consistently argued that the benefits of AI — curing diseases, solving climate change, democratising education — are so enormous that slowing down would itself be a moral failure. He frames OpenAI's mission as getting to AGI safely but quickly, and lobbies for light-touch regulation that doesn't impede innovation. Yann LeCun, despite being a pioneer, vocally opposes the pause movement — he argues current AI isn't dangerous enough to warrant it and that the "AI doom" narrative is driven by hype, not science. Mark Zuckerberg bets heavily on open-source AI (Llama) and argues that broad access makes AI safer, not more dangerous — and that Meta's open approach is better for the world than closed labs hoarding power. Jensen Huang, whose NVIDIA GPUs power virtually all AI training, sees AI as the next industrial revolution and argues that slowing down means falling behind economically. The core argument: AI progress saves lives, creates wealth, and the risks are manageable with good engineering.

It's too late to slow down

Mo Gawdat, former Chief Business Officer at Google X, occupies a uniquely pessimistic-yet-pragmatic position. He argues that the competitive dynamics between companies (OpenAI vs Google vs Meta vs China) and between nations (US vs China) make any meaningful slowdown impossible. His analogy: it's like asking every country to simultaneously stop developing nuclear weapons during the Cold War — game theory makes it irrational for any single player to pause. But unlike the pause advocates, Gawdat doesn't think regulation is the answer either. His conclusion is stark: AI will keep accelerating regardless of what we want, so the only productive response is to focus on shaping its values and ensuring the humans building it are thoughtful. In his view the horse has already left the barn, and the debate about whether to close the barn door is moot.

Patrick's Take

I'll be honest with you — I find myself sympathising with every camp here, which usually means the question itself is slightly wrong. "Should AI slow down?" treats AI development like a single lever someone can pull. In reality, there are thousands of labs, millions of developers, and dozens of governments all moving at different speeds with different incentives.

What I tell my training clients in KL and Penang: the slow-down debate is above your pay grade and mine. We're not going to influence whether OpenAI or DeepMind pauses research. But what IS in your control is how prepared your business is for whatever speed AI arrives at. The Malaysian SMEs I work with who are thriving aren't the ones debating regulation on LinkedIn — they're the ones who trained their team last quarter and are already seeing results.

The one thing I'll say is this: Musk's position deserves a raised eyebrow. He signed the pause letter in March 2023 and launched xAI in July 2023. That's not principled caution — that's wanting your competitors to slow down while you catch up. Watch what people do, not what they say. The people actually building safety infrastructure (Amodei, Leike, Hassabis) are more credible than the people writing open letters.

What This Means for Your Business

If a regulatory pause happens (unlikely but possible in the EU), expect a brief window where early adopters lock in competitive advantage while laggards celebrate the reprieve. If development continues at the current pace — the most likely scenario — AI tools will get significantly better every 6-12 months, meaning the cost of waiting compounds with every cycle you sit out. For Malaysian businesses specifically, MDEC and government AI initiatives tend to lag Silicon Valley by 18-24 months, so by the time local regulation catches up, the tools will already be embedded in your competitors' workflows. The practical move: don't wait for regulatory clarity. Adopt tools that exist today, build internal AI literacy, and treat the slow-down debate as background noise that doesn't change your immediate priorities.

What to Actually Worry About

Worry less about whether AI slows down globally and more about whether YOUR adoption is too slow relative to your industry. The real risk for a Malaysian SME isn't superintelligent AI — it's a competitor in Singapore or Jakarta who automated their customer service, content creation, or financial analysis six months before you did. If regulation does arrive (and some form eventually will), the businesses that already understand AI will navigate it easily. The ones who waited for regulatory clarity before starting will be doubly behind — no AI skills AND new compliance requirements to learn. Start now, start small, build the muscle.

Last updated: 2026-03-26
