AI Debate Map

Can AI be conscious? Is current AI sentient?

This is the debate that sounds philosophical until it hits your business. If AI systems develop anything resembling consciousness or subjective experience, it changes everything about how we deploy, regulate, and take moral responsibility for these tools. If they never will, then all the anthropomorphising is dangerous theatre that leads to bad decisions.

Where They Stand

Current AI has no inner experience

Yann LeCun has been unequivocal: current large language models are sophisticated text prediction systems with zero understanding, zero experience, and zero consciousness. He has described LLMs as "auto-regressive next-token predictors" that, despite impressive outputs, operate without any model of the world, any goals, or any subjective experience. François Chollet has reinforced this from a technical standpoint, arguing that people confuse fluent language generation with understanding — a category error he calls the "fluency trap." He points out that LLMs cannot solve novel reasoning tasks that a 5-year-old handles effortlessly, which should disabuse anyone of the notion that these systems "think." Gary Marcus has been perhaps the most persistent critic of consciousness claims, arguing that the AI field is plagued by anthropomorphism and that attributing sentience to statistical pattern matchers is not just wrong but actively harmful to public understanding. He frequently cites the 2022 Google LaMDA incident, where an engineer claimed the chatbot was sentient, as a cautionary tale of how easily humans project consciousness onto sufficiently fluent systems.
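To make the "auto-regressive next-token predictor" point concrete, here is a minimal Python sketch of the generation loop these critics are describing. It is an illustration only: the model object and its next_token_probs() method are hypothetical stand-ins for any trained language model, not a real library API.

    import random

    def generate(model, prompt_tokens, max_new_tokens=50):
        # Start from the prompt and extend it one token at a time.
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            # The model sees only the sequence so far and returns a probability
            # for each candidate next token -- no goals, no plans, no memory
            # beyond the token sequence itself.
            probs = model.next_token_probs(tokens)  # hypothetical: {token: probability}
            next_token = random.choices(list(probs.keys()),
                                        weights=list(probs.values()))[0]
            tokens.append(next_token)
        return tokens

Whether anything "experiences" this loop is precisely what the rest of this page argues about; the mechanics themselves are just sample-and-append, repeated.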

We genuinely don't know

Ilya Sutskever sparked intense debate when he tweeted in February 2022 that "it may be that today's large neural networks are slightly conscious." As OpenAI's then-Chief Scientist — someone who understands the technical architecture better than almost anyone — the statement carried enormous weight. He later clarified that he was raising a genuine open question, not making a claim, but his willingness to entertain the possibility signalled that the conversation shouldn't be dismissed outright. Demis Hassabis has spoken about consciousness as one of the deepest unsolved problems in science, noting that we don't even have a rigorous definition of consciousness for biological systems, let alone artificial ones. He argues that until we solve the "hard problem of consciousness" in philosophy of mind, we cannot definitively say whether a sufficiently complex information-processing system has inner experience. Yoshua Bengio, the Turing Award winner who co-pioneered deep learning, has advocated for serious scientific study of machine consciousness rather than dismissing it as hype, arguing that the question is too important to answer with intuition alone.

Consciousness is possible but far away

Geoffrey Hinton has made the surprising argument that large neural networks might already have a form of subjective experience, or may develop one as they scale. In his 2023 interviews after leaving Google, he suggested that the question of whether AI systems "understand" is less clear-cut than most computer scientists assume — and that dismissing the possibility of machine consciousness is as unscientific as asserting it. He has noted that biological brains are, at base, information-processing systems, and there is no principled reason to assume consciousness requires a carbon substrate. Max Tegmark, the MIT physicist and author of "Life 3.0," takes a physics-first approach: consciousness is likely a property of certain information-processing patterns, not of specific hardware. His "consciousness as a state of matter" framework suggests that sufficiently complex and integrated computation could give rise to experience — but current architectures are probably too simple and too narrowly optimised to qualify. Both argue that the question demands rigorous empirical research, not hand-waving in either direction.

The question itself is a distraction

Ethan Mollick has argued that whether AI is conscious is the wrong question for almost everyone asking it. What matters practically is what AI can DO — and the capabilities are advancing regardless of whether there is "someone home" inside the model. He has noted that the consciousness debate often paralyses decision-makers who should be focused on deployment, governance, and practical impact. If your customer service bot resolves tickets at 95% satisfaction regardless of whether it "experiences" anything, the business question hasn't changed. Andrew Ng has similarly dismissed the consciousness debate as a philosophical rabbit hole that distracts from more urgent challenges: bias, safety, economic disruption, and equitable access. He has pointed out that we don't need to solve the hard problem of consciousness to build responsible AI policy — we already have enough concrete, measurable harms to address. Both maintain that the consciousness question, while intellectually fascinating, should not gate practical AI adoption or governance.

Patrick's Take

I'll be honest — when this question comes up in my training sessions, and it comes up every single time, I redirect hard. Not because it's not interesting, but because it's the wrong question for the people asking it. Here's what I tell Malaysian business owners: it doesn't matter whether ChatGPT has feelings. What matters is whether YOUR EMPLOYEES think it does. Because when your staff anthropomorphise AI — when they trust it like a colleague instead of treating it like a calculator — that's when mistakes happen. I've seen it firsthand: a marketing team in KL that took Claude's confident-sounding financial analysis at face value without checking the numbers, because the output "felt" like it came from someone who understood. It didn't. It was a next-token predictor that sounded authoritative. The practical version of the consciousness debate for your business is this: train your team to use AI as a powerful tool, not as a thinking partner. The moment you catch yourself saying "the AI thinks" or "the AI believes," you've crossed a line that leads to over-trust, under-verification, and expensive mistakes. Save the philosophy for dinner parties — in the office, treat every AI output as a draft that needs human review.

What This Means for Your Business

The consciousness debate affects your business through a side door: employee behaviour. Teams that anthropomorphise AI over-rely on it and under-verify outputs. Teams that treat it as a dumb tool under-use it and miss value. The sweet spot is "capable tool, not colleague" — use it aggressively, verify ruthlessly. Train your people on what these systems actually are (pattern matchers, not thinkers) and how they fail (confidently wrong, biased toward training data, no common sense). This framing prevents both the fear ("AI will replace me") and the trust problem ("AI told me so it must be right"). Budget for AI literacy training that covers what LLMs actually do under the hood — even a 2-hour session dramatically changes how your team interacts with these tools.

What to Actually Worry About

The real danger isn't whether AI is conscious — it's that AI systems are becoming sophisticated enough that most people can't tell the difference. When your customers interact with an AI chatbot and form an emotional attachment, that creates real ethical obligations regardless of whether the bot "feels" anything. Companies deploying customer-facing AI need clear disclosure policies now, not after a scandal. In Malaysia, where personal relationships drive business more than in Western markets, the risk of AI impersonation — or even accidental emotional manipulation — is something you should be thinking about in your customer experience design.

Last updated: 2026-04-13
