Codestral

Code specialist model for low-latency code generation and FIM.

Model ID: codestral-2508 (stable)
Context window: 256,000 tokens
Pricing: starting at $0.30/M input tokens, $0.90/M output tokens
Supported features: streaming, JSON output


All Providers for Codestral

LLM Gateway routes each request to the best provider that can handle your prompt size and parameters.
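As a rough illustration of what a fill-in-the-middle (FIM) request to this model might look like, the sketch below builds a request body with the code before and after the gap. The endpoint URL, field names (`prompt`/`suffix`), and helper are assumptions for illustration; check your gateway's own API documentation for the exact shapes.

```python
import json

# Hypothetical gateway endpoint -- replace with your gateway's real URL.
GATEWAY_URL = "https://example-gateway.invalid/v1/fim/completions"

def build_fim_payload(prefix: str, suffix: str, max_tokens: int = 64) -> dict:
    """Build a fill-in-the-middle request body for codestral-2508.

    FIM sends the code before the cursor (prompt) and after it (suffix);
    the model generates the text that belongs in between.
    Field names here are assumed, not confirmed by this listing.
    """
    return {
        "model": "codestral-2508",
        "prompt": prefix,        # code before the gap
        "suffix": suffix,        # code after the gap
        "max_tokens": max_tokens,
        "stream": False,         # set True to use the model's streaming support
    }

payload = build_fim_payload(
    prefix="def add(a, b):\n    return ",
    suffix="\n\nprint(add(1, 2))",
)
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the gateway with your API key; the low-latency focus of Codestral makes this pattern suitable for editor autocomplete loops.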

Mistral AI
  Context: 256k
  Input: $0.30/M tokens
  Cached input: (price not listed)
  Output: $0.90/M tokens