AI Debate Map

Should AI models be open-source or closed?

This debate determines who controls the most powerful technology ever built. If AI stays closed, a handful of companies become gatekeepers to intelligence itself. If it goes open-source, anyone — including bad actors — gets access. For business owners, this directly affects your costs, vendor lock-in, and strategic options.

Where They Stand

Open source

Mark Zuckerberg has made Meta's Llama models the flagship of the open-source AI movement, releasing Llama 2 and Llama 3 with weights available for commercial use. His argument is partly philosophical (open ecosystems win long-term, as Linux proved) and partly strategic (Meta benefits from commoditising the AI layer since its business is social media, not selling API access). Yann LeCun, as Meta's Chief AI Scientist, provides the intellectual backbone — he argues that concentrating AI power in a few closed labs is far more dangerous than open access, and that open-source enables the global research community to find and fix safety issues faster. Andrej Karpathy, former OpenAI and Tesla AI lead, has become a vocal advocate for open models and AI education, arguing that transparency and accessibility are essential for both safety and innovation. He's demonstrated through his own educational content that open knowledge accelerates the entire field. The open camp's core claim: sunlight is the best disinfectant, and monopolies on intelligence are the real existential risk.

Closed for safety

OpenAI (despite the name) and Anthropic both keep their frontier model weights proprietary. Sam Altman argues that as models become more capable, releasing weights is irresponsible — you can't un-release a model, and fine-tuning can strip safety guardrails in hours. OpenAI transitioned from a non-profit to a capped-profit structure partly to fund the safety research they argue is only possible with closed development. Dario Amodei at Anthropic takes a more nuanced position: he supports transparency in research papers and safety techniques, but believes model weights for the most capable systems should be restricted until we have better understanding of misuse potential. Anthropic's Responsible Scaling Policy creates capability thresholds — if a model can do something genuinely dangerous (like help create bioweapons), it shouldn't be downloadable. Critics point out the obvious conflict of interest: keeping models closed also keeps the revenue stream locked in.

It's complicated

Demis Hassabis at Google DeepMind occupies a middle ground — Google publishes research prolifically but keeps its most capable model weights proprietary. Hassabis has argued for a "structured access" approach where researchers and vetted organisations can use frontier models through APIs without full weight release. Andrew Ng has been one of the strongest voices against what he calls "regulatory capture disguised as safety" — he argues that big labs lobbying to restrict open-source AI are really trying to protect their moats. But he also acknowledges that some capabilities warrant caution. Fei-Fei Li, the Stanford professor who created ImageNet and catalysed the deep learning revolution, advocates for a middle path: open research, open datasets, open benchmarks, but graduated access to the most capable models based on demonstrated safety practices. The nuanced camp recognises that "open vs closed" is a false binary — there's a spectrum from fully open weights to API-only access, and the right answer might be different for different capability levels.

Open is dangerous

Geoffrey Hinton stands largely alone among top-tier AI researchers in arguing that open-source AI models represent a genuine danger. His concern isn't about current models — it's about what happens when open-source models become capable enough to help bad actors create bioweapons, design cyberattacks, or generate sophisticated disinformation at scale. Once weights are released, there's no recall mechanism, no patch, no update you can push. Hinton argues that the analogy to open-source software breaks down because code can be audited line by line, while neural network weights are opaque — nobody fully understands what a 70-billion-parameter model has learned or what it can be fine-tuned to do. His position is unpopular in the research community but carries weight given his foundational contributions to the field.

Patrick's Take

This is the debate I get the most questions about in my training sessions, and it's the one where the business implications are most direct. So let me cut through the ideology. For 95% of Malaysian SMEs, the open-source vs closed debate is already settled: you should be using BOTH. Use closed APIs (OpenAI, Anthropic, Google) for your critical workflows where you need reliability, support, and the latest capabilities. Use open-source models (Llama, Mistral, Qwen) for cost-sensitive, high-volume tasks where you can run inference locally or on cheap cloud instances. The companies I train don't pick a side — they pick the right tool for each job.

Here's what actually matters for your business: vendor lock-in. If you build your entire operation on OpenAI's API and they raise prices 5x (which they can, because they're burning cash), can you switch? If you've built on open-source, you own your infrastructure.

The smartest thing I've seen Malaysian companies do is build abstraction layers — use the best model for each task but make it swappable. The philosophical debate about AI safety and open-source is fascinating, but your CFO cares about margins, not manifestos.
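To make the abstraction-layer idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the class names, the routing table, and the stub responses are illustrative stand-ins, not real SDK calls — in practice the closed backend would wrap a provider's official client and the local backend would call your own inference server.

```python
from abc import ABC, abstractmethod


class LLMClient(ABC):
    """Common interface so model backends stay swappable."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class ClosedAPIClient(LLMClient):
    """Stand-in for a hosted API backend (e.g. a provider SDK)."""

    def complete(self, prompt: str) -> str:
        return f"[closed-api] response to: {prompt}"


class LocalOpenModelClient(LLMClient):
    """Stand-in for a self-hosted open-weights model backend."""

    def complete(self, prompt: str) -> str:
        return f"[local-model] response to: {prompt}"


# Route each task type to the cheapest backend that handles it well.
ROUTES: dict[str, LLMClient] = {
    "classification": LocalOpenModelClient(),   # high-volume, cost-sensitive
    "summarisation": LocalOpenModelClient(),
    "complex_reasoning": ClosedAPIClient(),     # frontier capability needed
}


def run_task(task_type: str, prompt: str) -> str:
    # Unknown task types fall back to the closed API as a safe default.
    client = ROUTES.get(task_type, ClosedAPIClient())
    return client.complete(prompt)


print(run_task("classification", "Is this email spam?"))
```

Because callers only ever see `run_task`, swapping a backend (or changing which tasks run where) is a one-line change to the routing table rather than a rewrite of your workflows.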

What This Means for Your Business

Open-source models are already good enough for most business tasks — translation, summarisation, classification, customer service — at a fraction of the cost of API calls to closed providers. If you're spending more than RM 2,000/month on AI API costs, you should be evaluating whether open-source alternatives could handle 50-70% of that volume. Closed models maintain an edge for frontier capabilities (complex reasoning, code generation, creative tasks), so the practical strategy is a hybrid approach. Watch Meta's Llama releases closely — each generation closes the gap with GPT-4 class models. For data-sensitive industries (legal, medical, financial), open-source models you can run on your own infrastructure solve the data sovereignty problem that makes many Malaysian companies hesitant about cloud AI.
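The "evaluate open-source above RM 2,000/month" rule of thumb can be sanity-checked with back-of-envelope arithmetic. The figures below are illustrative assumptions (a RM 3,000/month spend, 60% of volume movable, self-hosted inference at roughly 30% of the equivalent API cost), not measured prices.

```python
def monthly_savings(api_spend_rm: float,
                    movable_share: float,
                    open_cost_ratio: float) -> float:
    """Estimated monthly savings if `movable_share` of API spend shifts to
    open-source inference costing `open_cost_ratio` of the API equivalent."""
    moved = api_spend_rm * movable_share
    return moved * (1 - open_cost_ratio)


# RM 3,000/month spend, 60% movable, self-hosting at ~30% of API cost
print(round(monthly_savings(3000, 0.60, 0.30)))  # 1260
```

Even with conservative assumptions the savings can be material, which is why the threshold is framed as a trigger for evaluation rather than an automatic migration.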

What to Actually Worry About

The real risk isn't philosophical — it's practical. If you go all-in on one closed provider and they change pricing, terms of service, or rate limits, your business is exposed. If you go all-in on open-source, you need in-house technical capacity to deploy and maintain models, which most Malaysian SMEs don't have yet. The sweet spot is using closed APIs through simple integrations now (low barrier, fast value) while building toward open-source capability over time. Also watch the regulatory landscape: the EU AI Act treats open-source models differently from closed ones, and whatever framework Malaysia eventually adopts will likely follow suit. Being fluent in both gives you optionality.

Last updated: 2026-03-26
