GPT 5.5
OpenAI’s flagship model, released April 2026 (codenamed “Spud”). Positioned as “more with less”: fewer output tokens per task and faster autonomous decomposition of vague prompts.
Key facts
- Type: Foundation language model
- Maker: OpenAI
- Released: April 2026
- Codename: Spud
- Pricing: $5/M input tokens, $30/M output tokens (double GPT 5.4’s rates; marginally more expensive than Opus 4.7 on output)
- Context window in Codex: 400K tokens (vs Opus 4.7’s 1M)
- Availability: ChatGPT and Codex at launch; direct API access announced as coming soon
- Codex role: One of the selectable Codex models in Nate’s May 2026 Codex app walkthrough, alongside GPT 5.4 and other model choices [src-048]
- Reasoning controls: Codex exposes speed/intelligence settings such as low, medium, high, and extra-high reasoning in the app interface [src-048]
- Roberts’s framing: GPT 5.5 is presented as the strongest Codex model option, paired with a speed/performance toggle that trades usage for faster execution [src-058]
Benchmark comparison vs Claude Opus 4.7 (Nate’s 4-task coding benchmark)
| Metric | GPT 5.5 | Opus 4.7 |
|---|---|---|
| Total runtime (4 tasks) | 20m 49s | 40m 43s |
| Output tokens | ~70K | ~250K |
| Relative cost | ~$3 cheaper | baseline |
| Terminal Bench 2.0 | 82.7 | 69.4 |
| SWE-bench Verified | lower | higher |
Key finding: GPT 5.5 used roughly 3.5x fewer output tokens (~70K vs ~250K) for equivalent results, so the “more with less” positioning is measurable. [src-012]
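The cost arithmetic behind the table can be sketched as below. GPT 5.5’s $30/M output price comes from this note; the Opus 4.7 output price is a hypothetical placeholder (the note only says GPT 5.5 is marginally more expensive per output token), so the computed delta is illustrative rather than a reproduction of the table’s ~$3 figure.

```python
# Sketch: output-token cost comparison for the 4-task benchmark.
# GPT 5.5 pricing is from this note; OPUS47_OUTPUT_PRICE is an
# ASSUMED placeholder, not a sourced number.

GPT55_OUTPUT_PRICE = 30.0   # $/M output tokens (from this note)
OPUS47_OUTPUT_PRICE = 25.0  # $/M output tokens -- assumed for illustration

def output_cost(tokens: int, price_per_million: float) -> float:
    """Dollar cost for a given number of output tokens."""
    return tokens / 1_000_000 * price_per_million

gpt55_cost = output_cost(70_000, GPT55_OUTPUT_PRICE)     # ~70K tokens
opus47_cost = output_cost(250_000, OPUS47_OUTPUT_PRICE)  # ~250K tokens

print(f"GPT 5.5:  ${gpt55_cost:.2f}")                 # $2.10
print(f"Opus 4.7: ${opus47_cost:.2f}")                # $6.25 at the assumed price
print(f"Delta:    ${opus47_cost - gpt55_cost:.2f}")   # depends on the assumed Opus price
```

The point the sketch makes: even at a similar per-token price, the ~3.5x gap in output tokens dominates the cost difference.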
Related
- See also: OpenAI, Codex (OpenAI)
- Compared against: Claude Opus 4.7
- Concepts: Practitioner Model Benchmarking Methodology, Claude Code Token Economics