Gemini 3.1 Pro vs GPT-5.2: Benchmarks, Pricing & Capabilities Compared
TL;DR — Gemini 3.1 Pro wins on reasoning + long context · GPT-5.2 wins (narrowly) on coding and on cost.
Gemini 3.1 Pro (Google)
- Released: 2026-02-19
- Context window: 2M tokens
- Input price: $2.50 / Mtok
- Output price: $10.00 / Mtok

Key features
- 2x reasoning improvement
- ARC-AGI-2 score of 77.1%
- Enhanced multimodal understanding
GPT-5.2 (OpenAI)
- Released: 2025-12-11
- Context window: 400K tokens
- Input price: $2.00 / Mtok
- Output price: $10.00 / Mtok

Key features
- Enhanced reasoning capabilities
- Improved adaptive reasoning
- Better multimodal understanding
Benchmark comparison
| Benchmark | Gemini 3.1 Pro | GPT-5.2 |
|---|---|---|
| GPQA Diamond | 84.2% ✓ | 80.1% |
| MMLU-Pro | 93.8% ✓ | 90.8% |
| SWE-bench Verified | 72.3% | 72.5% ✓ |
Pricing comparison
| Metric | Gemini 3.1 Pro | GPT-5.2 |
|---|---|---|
| Input ($/Mtok) | $2.50 | $2.00 |
| Output ($/Mtok) | $10.00 | $10.00 |
| Cached input ($/Mtok) | $0.25 | — |
| Cost per roundtrip (1M tokens in + 1M tokens out) | $12.50 | $12.00 |
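
To make the per-Mtok rates concrete, here is a minimal Python sketch that computes per-request cost from the prices in the table above. The dictionary keys are labels for this comparison, not official API model IDs, the cached-input discount is only applied to Gemini 3.1 Pro (the only column with a listed cache price), and real billing may round or tier differently.

```python
# Illustrative cost calculator using the $/Mtok prices listed above.
PRICES = {
    "gemini-3.1-pro": {"input": 2.50, "output": 10.00, "cached_input": 0.25},
    "gpt-5.2":        {"input": 2.00, "output": 10.00, "cached_input": None},
}

def request_cost(model: str, input_tokens: int, output_tokens: int,
                 cached_input_tokens: int = 0) -> float:
    """Dollar cost of one request, assuming simple linear per-token pricing."""
    p = PRICES[model]
    uncached = input_tokens - cached_input_tokens
    cost = uncached / 1e6 * p["input"] + output_tokens / 1e6 * p["output"]
    if cached_input_tokens:
        if p["cached_input"] is None:
            raise ValueError(f"No cached-input price listed for {model}")
        cost += cached_input_tokens / 1e6 * p["cached_input"]
    return cost

# Hypothetical workload: 200K-token prompt (150K served from cache), 4K-token reply.
print(round(request_cost("gemini-3.1-pro", 200_000, 4_000, 150_000), 4))  # 0.2025
print(round(request_cost("gpt-5.2", 200_000, 4_000), 4))                  # 0.44
```

Note how caching flips the comparison for repeat-heavy prompts: without the cache hit, the same Gemini request would cost $0.54.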
Context window & modalities
| Attribute | Gemini 3.1 Pro | GPT-5.2 |
|---|---|---|
| Context window | 2M tokens | 400K tokens |
| Input modalities | text, image, audio, video, PDF | text, image, audio |
| Output modalities | text | text, audio |
| Knowledge cutoff | 2025-12 | 2025-08 |
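
To give the window sizes some intuition, the sketch below estimates whether a given text fits each model's context window using the common ~4-characters-per-token rule of thumb. The heuristic, the file name, and the 8K-token output reserve are all illustrative assumptions; actual counts depend on each model's tokenizer.

```python
# Rough context-window fit check for the sizes listed above.
CONTEXT_WINDOWS = {
    "gemini-3.1-pro": 2_000_000,
    "gpt-5.2": 400_000,
}

def estimate_tokens(text: str) -> int:
    # ~4 characters per token is a crude English-text approximation.
    return len(text) // 4

def fits_in_context(text: str, model: str, reserved_for_output: int = 8_000) -> bool:
    """True if the prompt is estimated to fit, leaving room for the reply."""
    return estimate_tokens(text) + reserved_for_output <= CONTEXT_WINDOWS[model]

corpus = open("repo_dump.txt").read()  # e.g. a concatenated codebase (placeholder file)
for model in CONTEXT_WINDOWS:
    print(model, fits_in_context(corpus, model))
```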
Verdict by use case
| Use case | Verdict | Basis |
|---|---|---|
| Coding | GPT-5.2 | SWE-bench Verified: Gemini 3.1 Pro 72.3% vs GPT-5.2 72.5% |
| Reasoning | Gemini 3.1 Pro | GPQA Diamond: Gemini 3.1 Pro 84.2% vs GPT-5.2 80.1% |
| Math | Insufficient data | No shared MATH / AIME benchmark reported |
| Long context | Gemini 3.1 Pro | Context window: 2M tokens vs 400K tokens |
| Cost | GPT-5.2 | Input price: $2.50 / Mtok vs $2.00 / Mtok |
Changelog & releases
Gemini 3.1 Pro
Released 2026-02-19
Predecessor: google-gemini-3-pro
- 2x reasoning score on ARC-AGI-2 vs Gemini 3 Pro
- Context window expanded to 2M tokens
- Deep Think mode enabled by default on the Pro tier
- Lower first-token latency despite the larger context window