Anthropic Economic Index
The Anthropic Economic Index is Anthropic’s privacy-preserving research program for measuring how Claude is used across tasks, geographies, platforms, and economic contexts.
Key facts
- Type: Economic research framework and data release
- Maker: Anthropic
- January 2026 report: “Economic primitives”, based on November 2025 Claude usage before Opus 4.5 [src-069, src-070]
- Data scope: 1M Claude.ai conversations plus 1M first-party API prompt-response records from November 13-20, 2025 [src-069]
- March 2026 report: “Learning curves”, based on February 5-12, 2026 Claude usage after Opus 4.5 and around Opus 4.6 [src-071]
- Core purpose: Measure how AI is used, not only how much it is used [src-069, src-070]
What it does
The Index maps anonymized Claude usage onto O*NET tasks, collaboration modes, geographies, and economic primitives. The January 2026 report extends prior work by measuring task complexity, human and AI skill levels, use cases, AI autonomy, and task success [src-069, src-070].
The program produces public data for researchers, journalists, and policymakers. Its central claim is that AI’s labor-market impact depends on which tasks AI is applied to, how reliable the AI is, how much autonomy users delegate, and whether covered tasks are bottlenecks, substitutes, or complements inside jobs [src-069].
The public article version emphasizes the explanatory layer of the report: economic primitives turn Claude usage into reusable measurements for speedup, task horizons, job coverage, deskilling/upskilling, and reliability-adjusted productivity estimates [src-070].
The March 2026 report adds longitudinal evidence: Claude.ai usage diversified, coding work continued shifting into API workflows, adoption across US geographies continued to converge though more slowly than expected, global usage became more concentrated, and experienced users appeared to get better outcomes through more collaborative, work-oriented usage [src-071].
Related
- See also: Anthropic
- Concepts: Economic Primitives, Anthropic AI Usage Index, Real-World AI Task Horizons, Effective AI Job Coverage, Task-Level Deskilling and Upskilling, Augmentation-Automation Perception Gap, AI Adoption Learning Curves, AI Model Selection Economics, API Workflow Migration