Economic Primitives
Economic primitives are simple, foundational measurements of AI use, designed to estimate AI's economic impact more precisely than task labels alone.
Key points
- Anthropic’s January 2026 Economic Index adds five primitive categories: task complexity, human and AI skills, use case, AI autonomy, and task success [src-069, src-070].
- The primitives are generated by asking Claude classifier questions about anonymized Claude.ai and first-party API transcripts [src-069, src-070].
- They extend the prior automation/augmentation measurement by distinguishing dimensions that can otherwise be conflated: for example, a directive translation request can be high automation but low autonomy, because it requires little independent decision-making [src-069].
- Anthropic treats the classifiers as directionally accurate rather than exact; their value lies in combined patterns across tasks, regions, occupations, and platforms [src-069].
- The primitives enable richer questions: where AI is reliable, how long real tasks are, what education level prompts and outputs require, and which occupations have effective task coverage [src-069, src-070].
- Anthropic presents primitives as a leading indicator: they can track whether AI use becomes more reliable, more autonomous, more business-critical, or more concentrated in particular occupational tasks over time [src-070].
- The March 2026 report uses primitives longitudinally: Claude.ai prompts became slightly less complex on average, required less estimated human time, and were given more AI autonomy as the user base broadened [src-071].
- Primitives also become inputs into learning-curve analysis: higher-tenure users show higher human-input education levels, less personal use, more collaboration, and higher success [src-071].
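The classifier-based measurement described above can be sketched as a small pipeline: for each transcript, a fixed set of primitive questions is posed to a model, and the categorical answers are aggregated into distributions, which is the level at which the primitives are meant to be read. The question wording, the label sets, and the `classify` stub below are all illustrative assumptions, not Anthropic's actual classifiers or prompts.

```python
# Illustrative sketch of a primitive-classification pipeline.
# Question wording, labels, and the stubbed model call are assumptions,
# not Anthropic's actual implementation.
from collections import Counter

PRIMITIVE_QUESTIONS = {
    "complexity": ("How complex is the task in this transcript?",
                   ["low", "medium", "high"]),
    "autonomy":   ("How much independent decision-making did the AI exercise?",
                   ["low", "medium", "high"]),
    "success":    ("Did the interaction accomplish the user's goal?",
                   ["yes", "partial", "no"]),
}

def classify(transcript: str, question: str, labels: list[str]) -> str:
    """Stand-in for a model call. A real pipeline would send the
    transcript plus the question to a classifier model and map its
    free-text answer onto one of the labels."""
    # Trivial deterministic heuristic so the sketch runs end to end.
    return labels[len(transcript) % len(labels)]

def score_transcript(transcript: str) -> dict[str, str]:
    """Answer every primitive question for one transcript."""
    return {name: classify(transcript, question, labels)
            for name, (question, labels) in PRIMITIVE_QUESTIONS.items()}

def aggregate(transcripts: list[str]) -> dict[str, Counter]:
    """Combine per-transcript labels into per-primitive distributions,
    matching the 'directional, not exact' framing of the reports."""
    dists: dict[str, Counter] = {name: Counter() for name in PRIMITIVE_QUESTIONS}
    for t in transcripts:
        for name, label in score_transcript(t).items():
            dists[name][label] += 1
    return dists
```

Reading results as distributions rather than per-transcript verdicts is what makes a noisy classifier usable: individual labels may be wrong, but shifts in the aggregate (e.g. autonomy rising over time) remain meaningful.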
Related entities
Related concepts
- Anthropic AI Usage Index
- Real-World AI Task Horizons
- Effective AI Job Coverage
- Task-Level Deskilling and Upskilling
- Augmentation-Automation Perception Gap
- AI Adoption Learning Curves
- API Workflow Migration