
Amanda Askell

Character Lead, Anthropic

The philosopher who designs Claude's personality — she decides how one of the world's most capable AI systems should think, speak, and behave.

Credentials

PhD in Philosophy from New York University (NYU); previously a researcher at OpenAI working on AI alignment and policy; now leads character design and personality for Claude at Anthropic.

Why They Matter

Askell sits at the intersection of philosophy and AI engineering — she's the person who turns abstract questions like "should an AI be honest?" into concrete system behavior. For business leaders deploying AI, her work determines the ethical guardrails and conversational style of one of the most widely used AI assistants. If you care about how AI interacts with your customers, her thinking matters.

Positions

AI Timeline View

AI systems are becoming increasingly capable, but getting their values and behavior right is as important as making them smarter. The alignment problem is not just technical — it's deeply philosophical.

Safety Stance

Cautious

Key Beliefs

AI character design is a philosophical discipline — how an AI should behave requires engaging with centuries of ethical theory, not just engineering intuition.

Anthropic blog and research

Constitutional AI (using a set of written principles to guide AI behavior) is a more scalable approach to alignment than reinforcement learning from human feedback (RLHF) alone.

Anthropic research on Constitutional AI

AI systems should be honest, helpful, and harmless — but the tensions between these goals are genuine and require careful philosophical reasoning to navigate.

Anthropic's Claude character documentation

The way we train AI to communicate shapes public trust in AI — getting character right is not a cosmetic exercise, it's foundational.

Public talks and Anthropic research
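The Constitutional AI approach referenced above can be pictured as a critique-and-revise loop: a model drafts a response, critiques it against a written principle, then revises it. The sketch below is a minimal illustration of that loop, not Anthropic's implementation; `mock_model` stands in for a real language-model call, and the principle wording is invented for the example.

```python
# Minimal sketch of a Constitutional AI critique-and-revise loop.
# Assumptions: `mock_model` is a placeholder for any LLM call, and
# PRINCIPLES is an illustrative constitution, not Anthropic's actual one.

PRINCIPLES = [
    "Choose the response that is most honest and least harmful.",
]

def mock_model(prompt: str) -> str:
    """Stand-in for a real text-generation call."""
    if prompt.startswith("Revise"):
        return "Revised response incorporating the critique."
    if prompt.startswith("Critique"):
        return "The response could be more transparent about uncertainty."
    return "Initial draft response."

def constitutional_revision(user_prompt: str, model=mock_model) -> str:
    """Draft a response, then critique and revise it once per principle."""
    draft = model(user_prompt)
    for principle in PRINCIPLES:
        critique = model(
            f"Critique this response against the principle: {principle}\n"
            f"Response: {draft}"
        )
        draft = model(
            f"Revise the response to address the critique.\n"
            f"Critique: {critique}\nResponse: {draft}"
        )
    # In the published method, revised drafts like this become
    # fine-tuning data, so the principles scale without per-example
    # human labels.
    return draft
```

The key contrast with pure RLHF is that the feedback signal here comes from the written principles rather than from a human rating each response.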

Controversial Take

Askell argues that AI personality design is a legitimate and important philosophical discipline, not mere prompt engineering or marketing. Critics say anthropomorphising AI systems with "character" is misleading; she counters that all AI systems have behavioral dispositions, and it's better to design them intentionally than let them emerge by accident.

Track Record

How well have Amanda Askell's predictions held up?

Carefully designed AI character and values will become a key competitive differentiator for AI companies

Made: 2022

Claude's reputation for being thoughtful, nuanced, and well-behaved has become Anthropic's strongest brand asset — widely seen as the "safest" major AI assistant.

Right

Philosophy PhDs will become essential hires at AI labs, not just ML engineers

Made: 2021

Anthropic, OpenAI, and DeepMind have all hired philosophers and ethicists, though the field is still heavily dominated by engineers.

Partially Right

Key Quotes

If you're building a system that millions of people will talk to every day, the question of what kind of entity it should be is not trivial.

[SOURCE NEEDED]

Character is not a veneer you apply at the end. It's a design decision that affects everything from safety to usefulness.

[SOURCE NEEDED]

Philosophy isn't a luxury in AI development — it's load-bearing infrastructure.

[SOURCE NEEDED]

We want Claude to be genuinely helpful, genuinely honest, and to avoid causing harm. The hard part is what to do when those goals conflict.

Anthropic research and public communications

Publications

Article

The Claude Character

2024

Paper

Moral uncertainty and its consequences

2019

Last updated: 2026-04-12
