Jagged Intelligence

Jagged intelligence describes the uneven capability profile of LLMs: they can perform extremely difficult tasks in some trained/verifiable circuits while failing simple common-sense cases outside those circuits.

Key points

  • Karpathy uses the car-wash example: a model that can refactor a huge codebase or find vulnerabilities may still advise walking to a car wash 50 meters away to wash a car [src-055].
  • The jaggedness comes from a combination of pretraining statistics, RL reward environments, and what labs choose to include in their data distribution [src-055].
  • This is why users must treat models as powerful tools rather than uniformly intelligent agents: users need exploration, oversight, and domain testing to discover where a model excels and where it struggles [src-055].
  • Karpathy’s “animals versus ghosts” framing emphasizes that LLMs are not animal intelligences with intrinsic motivation; they are statistical simulation circuits shaped by pretraining and RL [src-055].
  • Jaggedness is closely related to the Verifiability Frontier: capabilities peak where verification rewards exist and degrade where they do not [src-055].
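The domain testing mentioned above can be made concrete with a small probe harness. The sketch below is illustrative only: `probe_jaggedness`, `stub_model`, and the task suites are hypothetical names invented for this example, and the stub stands in for a real LLM whose capability profile is uneven by construction, strong inside a verifiable circuit (exact arithmetic) and weak outside it.

```python
from typing import Callable

def probe_jaggedness(model: Callable[[str], str],
                     suites: dict[str, list[tuple[str, str]]]) -> dict[str, float]:
    """Run a model over per-domain task suites and return pass rates.

    Peaks and valleys in the returned profile map the model's jaggedness.
    """
    profile = {}
    for domain, cases in suites.items():
        passed = sum(1 for prompt, expected in cases
                     if model(prompt).strip() == expected)
        profile[domain] = passed / len(cases)
    return profile

# Toy stand-in for an LLM: perfect on arithmetic (a trained, verifiable
# circuit), useless on everyday reasoning outside that circuit.
def stub_model(prompt: str) -> str:
    if prompt.startswith("calc:"):
        return str(eval(prompt[5:]))      # exact, verifiable answers
    return "walk to the car wash"         # common-sense gap

suites = {
    "arithmetic": [("calc:17*23", "391"), ("calc:2**10", "1024")],
    "common_sense": [("How do I wash my car at home?",
                      "use a hose and bucket")],
}

profile = probe_jaggedness(stub_model, suites)
# arithmetic scores 1.0 while common_sense scores 0.0: a jagged profile.
```

Running such a probe per deployment domain, rather than trusting aggregate benchmark scores, is one practical way to locate the valleys before users do.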

Source references

  • [src-055] Sequoia Capital — “Andrej Karpathy: From Vibe Coding to Agentic Engineering” (2026-04-29)