GLM-4.7 Flash

Free, lightweight GLM-4.7 model.

glm-4.7-flash
Status: Unstable
Context: 200,000 tokens
Starting at $0.06/M input tokens
Starting at $0.40/M output tokens
Supported features:
- Streaming
- Tools
- Reasoning
- JSON Output
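Since the model advertises streaming and JSON output, a request body exercising both might look like the sketch below. This assumes an OpenAI-compatible chat-completions API; the exact endpoint URL and parameter names depend on the gateway you use.

```python
import json

# Hypothetical OpenAI-compatible request body for glm-4.7-flash.
# Endpoint URL and parameter names are assumptions, not confirmed by this page.
payload = {
    "model": "glm-4.7-flash",
    "messages": [
        {"role": "user", "content": "Return a JSON object with a 'status' key."}
    ],
    "stream": True,                               # streaming responses
    "response_format": {"type": "json_object"},   # JSON-only output mode
}

# Serialize for the POST body; a real client would send this with an API key.
body = json.dumps(payload)
```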


All Providers for GLM-4.7 Flash

LLM Gateway routes each request to the best available provider that can handle your prompt size and parameters.

| Provider   | Context | Input ($/M tokens) | Cached ($/M tokens) | Output ($/M tokens) |
|------------|---------|--------------------|---------------------|---------------------|
| Z AI       | 200k    | $0.00              | $0.00               | $0.00               |
| EmberCloud | 200k    | $0.06              | $0.01               | $0.40               |
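Since prices are quoted per million tokens, a quick cost estimate is just a linear combination of token counts. A minimal sketch using the EmberCloud rates above (the token counts are illustrative):

```python
# EmberCloud pricing from the table above, in dollars per million tokens.
INPUT_PER_M = 0.06
OUTPUT_PER_M = 0.40

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single uncached request at the rates above."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

# Example: 10k input tokens and 2k output tokens.
cost = request_cost(10_000, 2_000)
# 10_000/1e6 * 0.06 = $0.0006 input; 2_000/1e6 * 0.40 = $0.0008 output.
```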