MiniCPM-V 4.6 1.3B vs Mistral Medium 3.5: Benchmarks, Pricing & Capabilities Compared
TL;DR: MiniCPM-V 4.6 1.3B wins on long context and cost; Mistral Medium 3.5 wins on reasoning.
MiniCPM-V 4.6 1.3B (OpenBMB)
- Released: 2026-05-11
- Context window: 262K tokens
- Input price: $0.00 / Mtok
- Output price: $0.00 / Mtok
Mistral Medium 3.5 (Mistral)
- Released: 2026-04-29
- Context window: 256K tokens
- Input price: $1.50 / Mtok
- Output price: $7.50 / Mtok
Benchmark comparison
| Benchmark | MiniCPM-V 4.6 1.3B | Mistral Medium 3.5 |
|---|---|---|
| AA Intelligence Index | 12.7 | 39.2 ✓ |
| GPQA Diamond | 30.5% | 74.8% ✓ |
| HLE | 4.9% | 12.8% ✓ |
| IF-Bench | 26.7% | 68.8% ✓ |
| LiveCodeBench Reasoning | 6.3% | 61.0% ✓ |
| SciCode | 2.1% | 39.6% ✓ |
| TAU2-bench | 87.7% | 94.2% ✓ |
| TerminalBench-Hard | 0.0% | 33.3% ✓ |
Pricing comparison
| Metric | MiniCPM-V 4.6 1.3B | Mistral Medium 3.5 |
|---|---|---|
| Input ($/Mtok) | $0.00 | $1.50 |
| Output ($/Mtok) | $0.00 | $7.50 |
| Cached input ($/Mtok) | — | — |
| Cost per 1M-token roundtrip (1M in + 1M out) | $0.00 | $9.00 |
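The roundtrip figure above is just input price plus output price applied to one million tokens each. A minimal sketch of that arithmetic, using only the list prices from the table (model names are labels, not API identifiers):

```python
# Per-million-token list prices (input, output) from the comparison table.
PRICES = {
    "MiniCPM-V 4.6 1.3B": (0.00, 0.00),
    "Mistral Medium 3.5": (1.50, 7.50),
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the table's list prices."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# The table's 1M-in + 1M-out roundtrip:
print(cost_usd("Mistral Medium 3.5", 1_000_000, 1_000_000))  # 9.0
```

The same function scales to realistic workloads, e.g. 10K input / 2K output per request, multiplied by request volume.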
Context window & modalities
| Attribute | MiniCPM-V 4.6 1.3B | Mistral Medium 3.5 |
|---|---|---|
| Context window | 262K tokens | 256K tokens |
| Input modalities | text, image, video | text, image |
| Output modalities | text | text |
| Knowledge cutoff | — | — |
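The 262K vs 256K difference only matters if your inputs actually approach those limits. A rough pre-flight check, assuming the common ~4 characters-per-token heuristic (an approximation; real tokenizers vary by model and language) and treating "262K"/"256K" as round numbers:

```python
# Approximate context limits from the table; exact token counts may differ.
CONTEXT_WINDOWS = {
    "MiniCPM-V 4.6 1.3B": 262_000,
    "Mistral Medium 3.5": 256_000,
}

def fits(text: str, model: str, chars_per_token: float = 4.0) -> bool:
    """Rough check: does the text's estimated token count fit the window?"""
    est_tokens = len(text) / chars_per_token
    return est_tokens <= CONTEXT_WINDOWS[model]

doc = "x" * 1_040_000  # ~260K estimated tokens
print(fits(doc, "MiniCPM-V 4.6 1.3B"))  # True
print(fits(doc, "Mistral Medium 3.5"))  # False
```

For a real deployment, replace the heuristic with the provider's tokenizer before trusting a borderline result.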
Verdict by use case
Coding
Insufficient data
Basis: SWE-bench
Neither model reports a SWE-bench score. For the shared coding benchmarks above (LiveCodeBench Reasoning, SciCode, TerminalBench-Hard), Mistral Medium 3.5 leads on all three.
Reasoning
→ Mistral Medium 3.5
Basis: GPQA Diamond
MiniCPM-V 4.6 1.3B 30.5% vs Mistral Medium 3.5 74.8% on GPQA Diamond.
Math
Insufficient data
Basis: MATH / AIME
Neither model reports a shared MATH or AIME score.
Long context
→ MiniCPM-V 4.6 1.3B
Basis: Context window
MiniCPM-V 4.6 1.3B 262K tokens vs Mistral Medium 3.5 256K tokens.
Cost
→ MiniCPM-V 4.6 1.3B
Basis: Input $/Mtok
MiniCPM-V 4.6 1.3B $0.00/Mtok vs Mistral Medium 3.5 $1.50/Mtok input.
Changelog & releases
MiniCPM-V 4.6 1.3B
Released 2026-05-11
Mistral Medium 3.5
Released 2026-04-29