Claude Sonnet 4.6 vs GLM-5: Benchmarks, Pricing & Capabilities Compared
TL;DR — Claude Sonnet 4.6 wins for reasoning + long-context · GLM-5 wins for cost.
Claude Sonnet 4.6 (Anthropic)
- Released: 2026-02-17
- Context window: 500K tokens
- Input price: $3.00 / Mtok
- Output price: $15.00 / Mtok
Key features:
- Agent Teams: orchestrate 2–16 Claude instances in parallel (see the sketch below)
- Near-Opus performance at 1/5th the cost
- 80.8% on SWE-bench Verified
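Anthropic hasn't published the Agent Teams orchestration interface in the material above, so here is only a minimal fan-out sketch using the standard Anthropic Messages API with a thread pool; the model ID `claude-sonnet-4-6` and the sub-task splitting are assumptions, not confirmed by this article.

```python
# Hypothetical fan-out over several Claude instances via the standard
# Messages API; Agent Teams itself may expose a dedicated orchestration
# API not shown here.
from concurrent.futures import ThreadPoolExecutor

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

MODEL = "claude-sonnet-4-6"  # assumed model ID, not confirmed by this article

def run_agent(task: str) -> str:
    """Send one sub-task to one Claude instance and return its reply text."""
    response = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=[{"role": "user", "content": task}],
    )
    return response.content[0].text

# Split a job into sub-tasks and fan out to up to 16 parallel instances,
# mirroring the 2-16 instance range quoted above.
subtasks = [f"Summarize section {i} of the design doc." for i in range(1, 5)]
with ThreadPoolExecutor(max_workers=16) as pool:
    results = list(pool.map(run_agent, subtasks))

for task, result in zip(subtasks, results):
    print(task, "->", result[:80])
```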
GLM-5 (Zhipu AI)
- Released: 2026-02-11
- Context window: 200K tokens
- Input price: $0.11 / Mtok
- Output price: $0.28 / Mtok
Key features:
- First frontier model trained on Huawei Ascend chips (no NVIDIA)
- #1 HLE score (50.4%)
- 1.2% hallucination rate via Slime RL fine-tuning
Benchmark comparison
| Benchmark | Claude Sonnet 4.6 | GLM-5 |
|---|---|---|
| MMLU-Pro | 92.1% ✓ | 88.7% |
Pricing comparison
| Metric | Claude Sonnet 4.6 | GLM-5 |
|---|---|---|
| Input ($/Mtok) | $3.00 | $0.11 |
| Output ($/Mtok) | $15.00 | $0.28 |
| Cached input ($/Mtok) | $0.30 | — |
| Cost per 1M-token roundtrip (1M in + 1M out) | $18.00 | $0.39 |
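The roundtrip figures in the last row follow directly from the per-Mtok rates. A quick script to reproduce them, with prices hardcoded from the table above:

```python
# Reproduce the pricing table: cost = tokens / 1e6 * price_per_Mtok.
PRICES = {  # $/Mtok, taken from the table above
    "Claude Sonnet 4.6": {"input": 3.00, "output": 15.00},
    "GLM-5": {"input": 0.11, "output": 0.28},
}

def cost_usd(model: str, tokens_in: int, tokens_out: int) -> float:
    p = PRICES[model]
    return tokens_in / 1e6 * p["input"] + tokens_out / 1e6 * p["output"]

# 1M-token roundtrip (1M in + 1M out), matching the last table row:
for model in PRICES:
    print(f"{model}: ${cost_usd(model, 1_000_000, 1_000_000):.2f}")
# Claude Sonnet 4.6: $18.00
# GLM-5: $0.39
```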
Context window & modalities
| Attribute | Claude Sonnet 4.6 | GLM-5 |
|---|---|---|
| Context window | 500K tokens | 200K tokens |
| Input modalities | text, image, PDF | text, image |
| Output modalities | text | text |
| Knowledge cutoff | 2025-10 | 2025-11 |
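Because the windows differ by 2.5x, a pre-flight fit check is worth building into any routing layer. A minimal sketch, assuming you already have a token count for your prompt (window sizes hardcoded from the table above):

```python
# Pre-flight context check: window sizes are the spec numbers from the
# table above; prompt token counts are assumed to come from each
# vendor's own tokenizer.
CONTEXT_WINDOWS = {
    "Claude Sonnet 4.6": 500_000,
    "GLM-5": 200_000,
}

def fits(model: str, prompt_tokens: int, reserved_output: int = 4_096) -> bool:
    """True if the prompt plus a reserved output budget fits the window."""
    return prompt_tokens + reserved_output <= CONTEXT_WINDOWS[model]

doc_tokens = 350_000  # e.g. a large codebase dump
for model in CONTEXT_WINDOWS:
    print(model, "fits" if fits(model, doc_tokens) else "does not fit")
# Claude Sonnet 4.6 fits
# GLM-5 does not fit
```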
Verdict by use case
| Use case | Verdict | Basis | Detail |
|---|---|---|---|
| Coding | Insufficient data | SWE-bench | No shared coding benchmark. |
| Reasoning | Claude Sonnet 4.6 | MMLU-Pro | 92.1% vs 88.7% |
| Math | Insufficient data | MATH / AIME | No shared math benchmark. |
| Long context | Claude Sonnet 4.6 | Context window | 500K vs 200K tokens |
| Cost | GLM-5 | Input price | $0.11/Mtok vs $3.00/Mtok input |
Changelog & releases
Claude Sonnet 4.6
Released 2026-02-17
Predecessor: anthropic-claude-sonnet-4
- Agent Teams: orchestrate 2–16 Claude instances in parallel
- +8.5pt on SWE-bench Verified vs Sonnet 4
- 1/5 the cost of Opus 4.5 at ~95% of coding quality
- Fast mode research preview for lower-latency inference
GLM-5
Released 2026-02-11
- Trained entirely on Huawei Ascend 910B clusters (no NVIDIA)
- Slime RL fine-tuning drops hallucination rate to 1.2%
- 136x cheaper than Claude Opus 4.5 at comparable quality