Context Engineering

The discipline of feeding an AI agent or workflow the information it needs at the moment it needs it – distinct from prompt engineering, which focuses only on instructions. Analogy: a system prompt is like studying the night before an exam; good context is like having a cheat sheet during the exam. Both matter, but context is the bigger lever, because an LLM knows nothing about your business, clients, or internal processes until you supply it.

Key points

  • System prompt = rules, tone, structure, base knowledge (studying for the exam)
  • Context = exact details at the exact moment needed (cheat sheet during the exam)
  • LLMs are not mind-readers – they only know what you feed them
  • Context rot: output quality degrades over long agent sessions as the context window fills
  • Mitigation: shorter sessions, project summaries, compacting, plan-mode gatekeeping
  • Applies equally to n8n AI agents and Claude Code projects
  • Datadog reframes the production bottleneck as context quality rather than context volume: long context windows are now large enough for most uses, but noisy, redundant, or poorly ordered context can bury the details that matter [src-037].
  • Production context engineering includes retrieval quality, summarization, deduplication, compression, and information hierarchy so agents receive high-signal inputs [src-037].
  • Prompt layout also matters for cost: stable system instructions, policies, and tool schemas should be placed where providers can reuse cached prefixes, while dynamic state should come later [src-037].
  • Google Cloud warns that stuffing all organizational policy into every agent context creates cognitive burden; governance should be enforced by environment guardrails where possible so the agent can focus on the task [src-043].
  • Preston Holmes introduces Context Sharding as the multi-agent complement: split a problem across role-specific context windows when one context cannot carry the whole problem cleanly [src-043].
  • Next '26 adds enterprise context products: Projects confine agent memory to selected files/conversations, Workspace Intelligence grounds agents in workflow context, and Knowledge Catalog maps data semantics for agent grounding [src-044].
  • The AI Engineer corpus broadens context engineering into a portfolio: RAG, GraphRAG, knowledge graphs, expert indexes, memory, context windows, search, embeddings, hybrid retrieval, and domain-specific knowledge apps all appear as ways to build better model inputs [src-077].
  • The repeated lesson is that context is not "more text." Production systems need retrieval evaluation, information hierarchy, freshness, permissions, cost awareness, and observability around which context was supplied and why [src-077].
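The retrieval-quality practices above (deduplication, compression, information hierarchy) can be sketched as a minimal packing pass. All names here are illustrative, and word count stands in for a real tokenizer:

```python
def pack_context(snippets, budget):
    """Greedily pack scored snippets into a token budget, dropping near-duplicates.

    snippets: list of (text, relevance_score) pairs; budget: rough token budget.
    Word count is a crude stand-in for a real tokenizer.
    """
    seen, packed, used = set(), [], 0
    for text, _score in sorted(snippets, key=lambda s: -s[1]):
        key = " ".join(text.lower().split())  # normalize case/whitespace for dedup
        if key in seen:
            continue  # redundant snippet: drop it so it cannot bury higher-signal text
        cost = len(text.split())
        if used + cost > budget:
            continue  # over budget: keep scanning for smaller snippets that still fit
        seen.add(key)
        packed.append(text)
        used += cost
    return packed
```

Sorting by relevance before packing is the "information hierarchy" piece: the highest-signal snippets claim the budget first.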
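The cache-aware prompt layout can be illustrated with a small sketch (`build_messages` and its fields are hypothetical, not any provider's API): keep the system instructions and tool schemas in a byte-identical prefix across calls, and append per-request state last so providers can reuse cached prefixes.

```python
def build_messages(system_rules, tool_schemas, dynamic_state, user_task):
    """Order messages so the stable, cacheable prefix comes first."""
    stable_prefix = {
        "role": "system",
        # identical across requests: rules, policies, and tool schemas
        "content": system_rules + "\n\nTools:\n" + "\n".join(tool_schemas),
    }
    dynamic_suffix = {
        "role": "user",
        # changes every request, so it goes after the cacheable prefix
        "content": f"Current state:\n{dynamic_state}\n\nTask: {user_task}",
    }
    return [stable_prefix, dynamic_suffix]
```

Two calls with different tasks then share an identical first message, which is what prefix caching needs to hit.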
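One common way to merge the keyword and embedding sides of hybrid retrieval is Reciprocal Rank Fusion; a minimal sketch, with the function name and the conventional `k=60` damping constant as assumptions:

```python
def rrf_fuse(keyword_ranked, vector_ranked, k=60):
    """Merge two ranked lists of doc ids via Reciprocal Rank Fusion.

    Each doc scores sum(1 / (k + rank)) across the lists it appears in,
    so agreement between retrievers outweighs a single high rank.
    """
    scores = {}
    for ranking in (keyword_ranked, vector_ranked):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

Rank-based fusion sidesteps the problem that BM25 scores and cosine similarities live on incomparable scales.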

Source references

  • [src-005] Nate Herk — n8n cluster (18 videos). Videos referenced: Fqeo8q8-nJg, ZeJXI2MAhj0, 3GAxd90fEE4

  • [src-037] Datadog — "State of AI Engineering" (2026-04-21)
  • [src-043] Google Cloud Events — "Operationalize AI: A blueprint for managing enterprise agents at scale" (2026-04-24)
  • [src-044] Thomas Kurian — "Welcome to Google Cloud Next '26" (2026-04-22)
  • [src-077] AI Engineer channel transcript cluster (678 saved transcripts, 2023-10-20 to 2026-05-15)