# Novita

**30-Day Uptime:** 96.7% (measured 2026-03-23 to 2026-04-21)
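For a sense of scale, a quick sketch of what a 96.7% uptime figure means in absolute terms over the 30-day measurement window (the percentage is from the figure above; the arithmetic is a straightforward illustration, not provider data):

```python
# Convert the 30-day uptime percentage into cumulative downtime hours.
UPTIME = 0.967       # 96.7%, from the 30-day uptime figure above
WINDOW_DAYS = 30

downtime_hours = (1 - UPTIME) * WINDOW_DAYS * 24
print(f"~{downtime_hours:.1f} hours of downtime over {WINDOW_DAYS} days")
```

That works out to roughly a day of cumulative unavailability per month, which may matter for latency-sensitive production workloads.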
## Inference Models

Prices are USD per million tokens. TTFT is time to first token; TPS is output tokens per second. A dash ("—") means no recent latency measurement is available for that model.

| Model | Input $/M | Output $/M | TTFT | TPS |
|---|---|---|---|---|
| Meta: Llama 3.1 8B Instruct | $0.02 | $0.05 | 641ms | 31 |
| Meta: Llama 3 8B Instruct | $0.04 | $0.04 | 500ms | 25 |
| Mistral: Mistral Nemo | $0.04 | $0.17 | 1044ms | 22 |
| OpenAI: gpt-oss-120b (exacto) | $0.04 | $0.20 | 633ms | 51 |
| OpenAI: gpt-oss-20b | $0.04 | $0.15 | 676ms | 89 |
| OpenAI: gpt-oss-120b | $0.05 | $0.25 | 625ms | 21 |
| Sao10K: Llama 3 8B Lunaris | $0.05 | $0.05 | 3864ms | 23 |
| Baidu: ERNIE 4.5 21B A3B Thinking | $0.07 | $0.28 | — | — |
| Z.ai: GLM 4.7 Flash | $0.07 | $0.40 | 1159ms | 58 |
| Baidu: ERNIE 4.5 21B A3B | $0.07 | $0.28 | 1473ms | 67 |
| Qwen: Qwen3 Coder 30B A3B Instruct | $0.07 | $0.27 | 1282ms | 58 |
| Qwen: Qwen3 VL 8B Instruct | $0.08 | $0.50 | 740ms | 44 |
| Qwen: Qwen3 235B A22B Instruct 2507 | $0.09 | $0.58 | 1271ms | 19 |
| Qwen: Qwen3 30B A3B | $0.09 | $0.45 | 921ms | 43 |
| Qwen: Qwen3 32B | $0.10 | $0.45 | 477ms | 50 |
| Xiaomi: MiMo-V2-Flash | $0.10 | $0.30 | 1772ms | 14 |
| Google: Gemma 3 27B | $0.12 | $0.20 | 998ms | 28 |
| Google: Gemma 4 26B A4B | $0.13 | $0.40 | 1063ms | 36 |
| Z.ai: GLM 4.5 Air | $0.13 | $0.85 | 762ms | 50 |
| Meta: Llama 3.3 70B Instruct | $0.14 | $0.40 | 684ms | 24 |
| Baidu: ERNIE 4.5 VL 28B A3B | $0.14 | $0.56 | — | — |
| NousResearch: Hermes 2 Pro - Llama-3 8B | $0.14 | $0.14 | — | — |
| Google: Gemma 4 31B | $0.14 | $0.40 | 787ms | 5 |
| Qwen: Qwen3 Next 80B A3B Thinking | $0.15 | $1.50 | 938ms | 157 |
| Qwen: Qwen3 Next 80B A3B Instruct | $0.15 | $1.50 | 723ms | 3 |
| Meta: Llama 4 Scout | $0.18 | $0.59 | 469ms | 34 |
| Qwen: Qwen3 VL 30B A3B Thinking | $0.20 | $1.00 | 2930ms | 68 |
| Qwen: Qwen3 Coder Next | $0.20 | $1.50 | 1048ms | 97 |
| Qwen: Qwen3 VL 30B A3B Instruct | $0.20 | $0.70 | 2254ms | 24 |
| Kwaipilot: KAT-Coder-Pro V1 | $0.21 | $0.83 | 1788ms | 56 |
| DeepSeek: DeepSeek V3.1 Terminus (exacto) | $0.22 | $0.80 | 2354ms | 26 |
| DeepSeek: DeepSeek V3.2 | $0.27 | $0.40 | 1523ms | 24 |
| Meta: Llama 4 Maverick | $0.27 | $0.85 | 598ms | 18 |
| DeepSeek: DeepSeek V3 0324 | $0.27 | $1.12 | 1195ms | 27 |
| DeepSeek: DeepSeek V3.1 | $0.27 | $1.00 | 1882ms | 22 |
| DeepSeek: DeepSeek V3.2 Exp | $0.27 | $0.41 | 1302ms | 13 |
| DeepSeek: DeepSeek V3.1 Terminus | $0.27 | $1.00 | 1589ms | 30 |
| Baidu: ERNIE 4.5 300B A47B | $0.28 | $1.10 | 3963ms | 27 |
| Z.ai: GLM 4.6V | $0.30 | $0.90 | 1328ms | 20 |
| Qwen: Qwen3 VL 235B A22B Instruct | $0.30 | $1.50 | 3434ms | 10 |
| MiniMax: MiniMax M2.1 | $0.30 | $1.20 | 2205ms | 27 |
| MiniMax: MiniMax M2.5 | $0.30 | $1.20 | 3208ms | 29 |
| Qwen: Qwen3 Coder 480B A35B | $0.30 | $1.30 | 931ms | 2 |
| MiniMax: MiniMax M2 | $0.30 | $1.20 | — | — |
| Qwen: Qwen3.5-27B | $0.30 | $2.40 | 708ms | 8 |
| Qwen: Qwen3 235B A22B Thinking 2507 | $0.30 | $3.00 | 726ms | 29 |
| Qwen2.5 72B Instruct | $0.38 | $0.40 | — | — |
| Qwen: Qwen3.5-122B-A10B | $0.40 | $3.20 | 694ms | 12 |
| DeepSeek: DeepSeek V3 | $0.40 | $1.30 | 1303ms | 21 |
| Baidu: ERNIE 4.5 VL 424B A47B | $0.42 | $1.25 | 1099ms | 9 |
| MiniMax: MiniMax M1 | $0.44 | $1.76 | 3343ms | 9 |
| Z.ai: GLM 4.6 (exacto) | $0.44 | $1.76 | 834ms | 119 |
| Meta: Llama 3 70B Instruct | $0.51 | $0.74 | 679ms | 16 |
| Z.ai: GLM 4.7 | $0.54 | $1.98 | 1452ms | 35 |
| Z.ai: GLM 4.6 | $0.55 | $2.20 | 1544ms | 34 |
| MoonshotAI: Kimi K2.5 | $0.57 | $2.85 | 6675ms | 22 |
| MoonshotAI: Kimi K2 0711 | $0.57 | $2.30 | 1069ms | 15 |
| Z.ai: GLM 4.5V | $0.60 | $1.80 | 1248ms | 52 |
| Qwen: Qwen3.5 397B A17B | $0.60 | $3.60 | 1277ms | 51 |
| Z.ai: GLM 4.5 | $0.60 | $2.20 | 709ms | 43 |
| MoonshotAI: Kimi K2 Thinking | $0.60 | $2.50 | 1400ms | 18 |
| MoonshotAI: Kimi K2 0905 | $0.60 | $2.50 | 2134ms | 13 |
| WizardLM-2 8x22B | $0.62 | $0.62 | 1208ms | 8 |
| DeepSeek: R1 | $0.70 | $2.50 | 1604ms | 31 |
| DeepSeek: R1 0528 | $0.70 | $2.50 | — | — |
| DeepSeek: R1 Distill Llama 70B | $0.80 | $0.80 | 1315ms | 35 |
| Qwen: Qwen2.5 VL 72B Instruct | $0.80 | $0.80 | 2369ms | 20 |
| MoonshotAI: Kimi K2.6 | $0.95 | $4.00 | 3058ms | 28 |
| Qwen: Qwen3 VL 235B A22B Thinking | $0.98 | $3.95 | — | — |
| Z.ai: GLM 5 | $1.00 | $3.20 | 1754ms | 29 |
| Z.ai: GLM 5.1 | $1.40 | $4.40 | 1976ms | 31 |
| Sao10K: Llama 3 Euryale 70B v2.1 | $1.48 | $1.48 | — | — |
| Sao10K: Llama 3.1 Euryale 70B v2.2 | $1.48 | $1.48 | 618ms | 36 |
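The per-million-token prices above translate to per-request cost as shown in this small sketch. The `request_cost` helper is illustrative (not a provider API); the example prices are taken from the Llama 3.1 8B Instruct row of the table:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_per_m: float, output_per_m: float) -> float:
    """Return the USD cost of one request, given per-million-token prices."""
    return (input_tokens * input_per_m + output_tokens * output_per_m) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion on
# Llama 3.1 8B Instruct ($0.02 in / $0.05 out per million tokens).
cost = request_cost(2_000, 500, input_per_m=0.02, output_per_m=0.05)
print(f"${cost:.6f}")  # $0.000065
```

Note that output tokens are typically several times more expensive than input tokens here, so completion length usually dominates the bill for long generations.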
## Community Reviews

**4.5 / 5** (2 reviews)

**clouduser42** · ★★★★★ · 2025-06-15
> Reliable service, great API documentation.

**mlresearcher** · ★★★★☆ · 2025-06-10
> Good performance but support could be faster.