Should governments, companies, or the open community regulate AI?
Regulation is coming — the only question is who writes the rules and who they benefit. Get this wrong and you either stifle innovation (Europe's risk) or allow unchecked harm (America's risk). For businesses in Malaysia and Southeast Asia, the regulatory patchwork that emerges will directly determine what AI tools you can use, what compliance costs you face, and whether you compete on a level playing field with Big Tech.
Where They Stand
Government regulation is essential
Mustafa Suleyman, in "The Coming Wave," argued that AI represents a technology so powerful that leaving it to market self-regulation is as reckless as letting pharmaceutical companies self-certify drug safety. He advocates for government-mandated licensing of frontier AI models above a capability threshold, mandatory safety testing before deployment, and international coordination modelled on nuclear non-proliferation frameworks.

Geoffrey Hinton, since leaving Google in 2023, has called for government regulation with increasing urgency, arguing that AI companies are locked in a race where competitive pressure overrides safety caution. He has specifically advocated for mandatory interpretability research and compute governance — regulating the hardware that trains frontier models as a choke point.

Yoshua Bengio has been the most active Turing Award winner in the policy space, co-authoring proposals for international AI governance and advising the Canadian government. He has called for mandatory pre-deployment safety evaluations for frontier models and public sector investment in safety research to counterbalance the private sector's profit motive. All three argue that industry self-regulation has a 100% historical failure rate when profits conflict with public safety.
Industry self-regulation works best
Sam Altman has navigated this issue carefully: he publicly calls for regulation (he testified before the US Senate in May 2023 asking for it) while simultaneously lobbying against specific proposals that would constrain OpenAI. His preferred framework is "light-touch" — licensing for frontier labs, voluntary safety commitments, and industry-led standards bodies. Critics note this conveniently raises barriers to entry for smaller competitors.

Dario Amodei has positioned Anthropic's Responsible Scaling Policy (RSP) as a model for industry self-governance: internal red lines that trigger safety protocols when models reach certain capability thresholds. He argues this is more technically informed than anything a legislature could produce.

Sundar Pichai has advocated for a "balanced approach" that avoids the EU's prescriptive model, supporting voluntary commitments (like the White House AI Safety Commitments Google signed) and sector-specific regulation rather than broad horizontal rules. The common thread: all three believe the people building AI understand the risks better than legislators, and that heavy-handed regulation will push innovation offshore without improving safety.
Open community and distributed governance
Andrew Ng has been the most forceful voice warning that AI regulation is being captured by incumbent big labs to crush open-source competition. He draws a direct line between OpenAI and Google lobbying for licensing requirements and the effect those requirements would have on startups and academics who can't afford compliance teams. His argument: the biggest risk to AI safety isn't too little regulation but too much of the wrong kind — regulation that entrenches monopolies.

Yann LeCun extends this to argue that open-source AI, with its global community of reviewers, is inherently safer than closed systems that only a handful of employees can audit. He has called proposals to restrict open-source model weights "insane" and compared them to banning the printing press because books might contain dangerous ideas.

Clément Delangue, CEO of Hugging Face, has built the largest open-source AI platform and advocates for community-driven standards, transparent model cards, and democratic governance of AI development — arguing that the best regulation comes from the research community itself, not from legislators who can't distinguish a transformer from a toaster.
Regulation is already too late
Eliezer Yudkowsky has argued that conventional regulation is a completely inadequate response to AI risk — it's like forming a committee to regulate an asteroid impact. In his view, the only effective "regulation" for frontier AI would be a global moratorium on training runs above a certain compute threshold, enforced with the same seriousness as nuclear weapons treaties, including military enforcement against rogue labs.

Connor Leahy, CEO of Conjecture, takes a less extreme but still urgent position: current regulatory proposals are theatre, designed to look like action while permitting the race to continue. He argues that by the time meaningful regulation passes through democratic legislatures, the models it would regulate will already be two generations ahead. Both believe that the speed of AI capability development has fundamentally outpaced the speed of democratic governance, and that voluntary industry commitments are worth less than the paper they're printed on when billions of dollars in market value are at stake.
Patrick's Take
Let me tell you what this debate looks like from Kuala Lumpur instead of San Francisco. Malaysia has no comprehensive AI regulation. Neither does most of ASEAN. The EU has the AI Act. The US is doing executive orders and voluntary commitments. China has its own framework. And Malaysian businesses are caught in the middle, often using American AI tools to serve regional customers under no clear legal framework.

What I tell the companies I train: don't wait for Malaysian regulation to tell you what's responsible. Build your own internal AI governance NOW — not because a law requires it, but because your customers and partners will increasingly demand it. The companies I've seen do this well create simple internal policies: what data can go into AI tools, what outputs need human review, how AI decisions are documented. It takes a day to draft, not a year.

The regulation debate matters most to you as a competitive signal. When the EU AI Act creates compliance costs for European companies, that's an opportunity for Malaysian businesses to serve those markets more nimbly. When American companies get no guardrails, watch for the backlash that creates demand for "responsible AI" branding. Position yourself ahead of the curve, not behind it.
What This Means for Your Business
Regulation is a when, not an if — even for Malaysia. The practical move is to build AI governance into your operations now while it's cheap and voluntary, rather than scrambling later when it's mandatory and expensive. Start with three things: (1) an AI usage policy for your team documenting which tools are approved and what data can be shared with them, (2) a human-in-the-loop requirement for any AI output that affects customers, finances, or legal matters, and (3) basic record-keeping of AI-assisted decisions. These three steps cost essentially nothing to implement but will put you ahead of 99% of Malaysian SMEs when regulation arrives. Companies that can demonstrate responsible AI practices will also find it easier to win contracts with MNCs and government agencies that are already requiring AI governance from their vendors.
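To make step (3) concrete, here is a minimal sketch of what record-keeping for AI-assisted decisions could look like in practice. Everything in it is a hypothetical illustration: the `AIDecisionRecord` fields, the `AIDecisionLog` class, and the one-JSON-object-per-line file format are assumptions chosen for simplicity, not a prescribed standard or anything mandated by any regulation.

```python
# Hypothetical sketch: an append-only log of AI-assisted decisions.
# Field names and the policy rule below are illustrative assumptions.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone


@dataclass
class AIDecisionRecord:
    tool: str              # which approved AI tool was used
    purpose: str           # what the AI output was used for
    reviewed_by: str       # the human who checked the output
    customer_facing: bool  # whether the output affects customers
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


class AIDecisionLog:
    """Appends one JSON object per decision to a plain-text file."""

    def __init__(self, path: str):
        self.path = path

    def record(self, rec: AIDecisionRecord) -> None:
        # Example policy rule from step (2): customer-facing AI output
        # must name a human reviewer before it is logged.
        if rec.customer_facing and not rec.reviewed_by:
            raise ValueError(
                "customer-facing AI output requires a human reviewer"
            )
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(rec)) + "\n")


if __name__ == "__main__":
    log = AIDecisionLog("decisions.jsonl")
    log.record(AIDecisionRecord(
        tool="ChatGPT",
        purpose="draft customer quotation email",
        reviewed_by="Aisha",
        customer_facing=True,
    ))
```

The point is not the code itself but the discipline: a spreadsheet would serve equally well. What matters is that every AI-assisted decision leaves a timestamped trail naming the tool, the purpose, and the human who reviewed it, so you can demonstrate responsible practice when a client or regulator asks.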
What to Actually Worry About
The real risk for Malaysian businesses isn't over-regulation — it's regulatory fragmentation. If the EU, US, China, and ASEAN all adopt different AI frameworks, companies operating across borders face a compliance nightmare. A Malaysian company using OpenAI's API to serve European customers could find itself subject to the EU AI Act without knowing it. The practical concern: start tracking which AI regulations apply to your markets, not just your country. And watch for Malaysia's own framework — MDEC and the Ministry of Science have been signalling that something is coming. The companies that participate in consultation processes now will shape rules that work for them, rather than having rules imposed on them.
Featured Minds in This Debate
Mustafa Suleyman
CEO, Microsoft AI, Microsoft
Geoffrey Hinton
Professor Emeritus, University of Toronto
Yoshua Bengio
Founder & Scientific Director, Mila (Quebec AI Institute)
Sam Altman
CEO, OpenAI
Dario Amodei
CEO & Co-Founder, Anthropic
Sundar Pichai
CEO, Alphabet / Google
Yann LeCun
VP & Chief AI Scientist, Meta
Andrew Ng
Founder & CEO, DeepLearning.AI / AI Fund
Clément Delangue
CEO & Co-Founder, Hugging Face
Connor Leahy
CEO, Conjecture
Eliezer Yudkowsky
Co-Founder & Senior Research Fellow, Machine Intelligence Research Institute (MIRI)
Last updated: 2026-04-13