Claude Sonnet 4.6 vs GPT-5.2 Codex: Benchmarks, Pricing & Capabilities Compared
TL;DR — Claude Sonnet 4.6 wins on long context · GPT-5.2 Codex wins on cost.
Claude Sonnet 4.6 (Anthropic)

| Attribute | Value |
|---|---|
| Released | 2026-02-17 |
| Context window | 500K tokens |
| Input price | $3.00 / Mtok |
| Output price | $15.00 / Mtok |

Key features
- Agent Teams: orchestrate 2–16 Claude instances
- Near-Opus performance at 1/5th the cost
- 80.8% SWE-bench Verified
GPT-5.2 Codex (OpenAI)

| Attribute | Value |
|---|---|
| Released | 2025-12-18 |
| Context window | 256K tokens |
| Input price | $1.50 / Mtok |
| Output price | $12.00 / Mtok |

Key features
- Specialized for software engineering
- Enhanced agentic coding
- Multi-file refactoring
Benchmark comparison
| Benchmark | Claude Sonnet 4.6 | GPT-5.2 Codex |
|---|---|---|
| HumanEval | 95.2% ✓ | 95.1% |
| SWE-bench Verified | 80.8% ✓ | 78.2% |
Pricing comparison
| Metric | Claude Sonnet 4.6 | GPT-5.2 Codex |
|---|---|---|
| Input ($/Mtok) | $3.00 | $1.50 |
| Output ($/Mtok) | $15.00 | $12.00 |
| Cached input ($/Mtok) | $0.30 | — |
| Cost per 1M-token roundtrip (1M in + 1M out) | $18.00 | $13.50 |
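The roundtrip figures above are simple per-Mtok arithmetic. A minimal sketch that reproduces them (the `PRICES` dict and function names are illustrative, not any vendor's API):

```python
# Per-million-token prices from the pricing table above.
PRICES = {
    "claude-sonnet-4.6": {"input": 3.00, "output": 15.00},
    "gpt-5.2-codex": {"input": 1.50, "output": 12.00},
}

def roundtrip_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request/response pair at list prices."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# 1M tokens in + 1M tokens out, as in the table:
print(roundtrip_cost("claude-sonnet-4.6", 1_000_000, 1_000_000))  # 18.0
print(roundtrip_cost("gpt-5.2-codex", 1_000_000, 1_000_000))      # 13.5
```

Real workloads skew heavily toward input tokens, so cached-input pricing ($0.30 / Mtok on the Claude side) can shift the comparison for repeated-prompt use.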
Context window & modalities
| Attribute | Claude Sonnet 4.6 | GPT-5.2 Codex |
|---|---|---|
| Context window | 500K tokens | 256K tokens |
| Input modalities | text, image, PDF | text, image |
| Output modalities | text | text |
| Knowledge cutoff | 2025-10 | 2025-08 |
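To gauge whether a codebase fits in either window, a common rough heuristic is ~4 characters per token for English/code text (an assumption here, not a figure from either vendor):

```python
def fits_in_context(char_count: int, context_tokens: int,
                    chars_per_token: float = 4.0) -> bool:
    """Rough estimate: does text of char_count characters fit the window?"""
    return char_count / chars_per_token <= context_tokens

# A ~1.6M-character codebase is ~400K tokens at 4 chars/token:
print(fits_in_context(1_600_000, 500_000))  # True  (500K-token window)
print(fits_in_context(1_600_000, 256_000))  # False (256K-token window)
```

For precise counts, use each provider's own tokenizer; ratios vary by language and by how much of the input is code versus prose.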
Verdict by use case
| Use case | Verdict | Basis |
|---|---|---|
| Coding | Claude Sonnet 4.6 | SWE-bench Verified: 80.8% vs 78.2% |
| Reasoning | Insufficient data | GPQA / MMLU: no shared reasoning benchmark |
| Math | Insufficient data | MATH / AIME: no shared math benchmark |
| Long context | Claude Sonnet 4.6 | Context window: 500K vs 256K tokens |
| Cost | GPT-5.2 Codex | Input price: $3.00 vs $1.50 / Mtok |
Changelog & releases
Claude Sonnet 4.6 (released 2026-02-17; predecessor: anthropic-claude-sonnet-4)
- Agent Teams: orchestrate 2–16 Claude instances in parallel
- +8.5pt on SWE-bench Verified vs Sonnet 4
- 1/5 the cost of Opus 4.5 at ~95% of coding quality
- Fast mode research preview for lower-latency inference