GLM-4.5 Flash

Free, fast GLM-4.5 model.

glm-4.5-flash (status: unstable)
Context window: 128,000 tokens
Input tokens: Free
Output tokens: Free
Supports: Streaming, Tools, JSON Output
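As a sketch of how the capabilities above are typically exercised through an OpenAI-compatible chat-completions endpoint (the function name and field values here are illustrative assumptions, not documented gateway specifics), a request body enabling streaming and JSON output might look like:

```python
import json

# Hypothetical request-body builder for an OpenAI-compatible
# chat-completions call to glm-4.5-flash. Field names follow the
# common chat-completions convention; nothing here is gateway-specific.
def build_request(prompt: str, stream: bool = False) -> dict:
    """Build a chat-completions payload asking the model for JSON output."""
    return {
        "model": "glm-4.5-flash",
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,  # the model supports streamed responses
        "response_format": {"type": "json_object"},  # the model supports JSON output
    }

body = build_request("List three colors as a JSON array.", stream=True)
print(json.dumps(body, indent=2))
```

The payload would then be POSTed to the gateway's chat-completions URL with your API key; consult the gateway's own documentation for the exact endpoint and authentication header.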


All Providers for GLM-4.5 Flash

LLM Gateway routes each request to the provider best able to handle your prompt size and parameters.

Z AI (context: 128k)
  Input:   $0 / M tokens
  Cached:  $0 / M tokens
  Output:  $0 / M tokens