Anthropic Interviewer
Anthropic Interviewer is Anthropic’s Claude-powered research tool for running adaptive interviews at scale, then helping human researchers analyze the resulting transcripts and themes.
Key facts
- Type: AI interview and qualitative research tool
- Maker: Anthropic
- Announced: 2025-12-04
- Initial test: 1,250 interviews with professionals (1,000 general-workforce participants, 125 creatives, and 125 scientists) [src-068]
- Workflow: planning, interviewing, and analysis [src-068]
- Availability: public pilot via Claude.ai for eligible Free, Pro, and Max users in the initial study window [src-068]
- Proposed next use: follow-up interviews after Claude personal-guidance conversations to understand real-world outcomes [src-073]
What it does
Anthropic Interviewer creates an interview plan from research goals, conducts real-time adaptive interviews through Claude.ai, and helps summarize transcripts against the original questions. Anthropic also used a separate AI analysis tool to cluster emergent themes and quantify prevalence across interviews [src-068].
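Anthropic has not published the analysis tool's internals. As a minimal illustration of the "quantify prevalence" step only, the sketch below takes per-interview theme tags (all theme names here are hypothetical, not from the study) and computes each theme's share of interviews:

```python
from collections import Counter

def theme_prevalence(interview_themes):
    """Return each theme's prevalence: the fraction of interviews it appears in."""
    total = len(interview_themes)
    # Count each theme at most once per interview via set().
    counts = Counter(theme for themes in interview_themes for theme in set(themes))
    return {theme: count / total for theme, count in counts.items()}

# Hypothetical tagged interviews (illustrative only).
interviews = [
    {"time-savings", "quality-concerns"},
    {"time-savings"},
    {"job-anxiety", "quality-concerns"},
    {"time-savings", "job-anxiety"},
]
prevalence = theme_prevalence(interviews)
```

In practice an AI pass would first assign the theme tags to each transcript; this sketch covers only the downstream counting.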
The tool extends Anthropic’s Economic Index work beyond what happens inside a Claude conversation. Instead of only analyzing chat logs, it asks people how they used AI outputs, how they felt about the work, and what future role they imagine for AI in their lives [src-068].
Anthropic frames it as both a research method and a feedback mechanism for model development, public policy, and partnerships with creatives, scientists, teachers, and other communities affected by AI [src-068].
In the personal-guidance study, Anthropic points to Interviewer as a way to go beyond transcript analysis. Chat logs show what people asked and how Claude answered, but interviews could reveal whether guidance changed decisions, who users would otherwise have asked, and what happened afterward [src-073].
Related
- See also: Anthropic
- Concepts: AI-Mediated Qualitative Research, Augmentation-Automation Perception Gap, AI-Use Stigma, Creative Control Boundaries, Scientific AI Trust Gap, AI Personal Guidance