Mixtral 8x7B Instruct

A sparse mixture-of-experts (MoE) model: each layer contains eight 7B-parameter experts, and a router activates two of them per token, so only a fraction of the total parameters is used for any given inference step.

Model ID: mixtral-8x7b-instruct-together
Status: Stable
Context: 32,768 tokens
Input: starting at $0.06/M tokens
Output: starting at $0.06/M tokens
Features: Streaming, JSON Output
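With input and output both billed at $0.06 per million tokens, per-request cost is easy to estimate. A minimal sketch using the listed rates (the token counts in the example are made up for illustration):

```python
# Listed per-token rates for Mixtral 8x7B Instruct (USD per million tokens).
INPUT_PRICE_PER_M = 0.06
OUTPUT_PRICE_PER_M = 0.06

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion.
cost = estimate_cost(2_000, 500)
print(f"${cost:.6f}")  # 2,500 tokens at $0.06/M -> $0.000150
```

Note that actual billing may round differently per provider; this only multiplies out the advertised rates.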


All Providers for Mixtral 8x7B Instruct

LLM Gateway routes each request to the best available provider that can handle its prompt size and parameters.
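Since the gateway handles provider selection, a client only needs to reference the model ID shown above. A minimal sketch of a chat request body, assuming an OpenAI-compatible API shape (the `stream` and `response_format` parameter names follow that convention and are assumptions; only the model slug comes from this page):

```python
import json

# Hypothetical request body for an OpenAI-compatible chat endpoint.
# The gateway, not the client, picks the underlying provider.
payload = {
    "model": "mixtral-8x7b-instruct-together",  # model ID listed above
    "messages": [
        {"role": "user", "content": "Return the capital of France as JSON."}
    ],
    "stream": True,                              # model supports streaming
    "response_format": {"type": "json_object"},  # model supports JSON output
}
print(json.dumps(payload, indent=2))
```

The body would then be POSTed to the gateway's chat-completions endpoint with an API key; the endpoint URL and auth scheme depend on the gateway and are not shown here.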

Together AI
  Context: 32.8k
  Input: $0.06 /M tokens
  Cached input: price not listed
  Output: $0.06 /M tokens