Autohand Code Integration
Use GPT-5, Claude, Gemini, or any model with Autohand Code's autonomous coding agent. Simple config, full cost tracking.
Autohand Code is an autonomous AI coding agent that works in your terminal, IDE, and Slack. With LLM Gateway, you can route all Autohand Code requests through a single gateway—use any of 180+ models from 60+ providers, with full cost tracking and smart routing.
Quick Start
Configure Autohand Code to use LLM Gateway by setting the base URL and API key:
```bash
export OPENAI_BASE_URL=https://api.llmgateway.io/v1
export OPENAI_API_KEY=llmgtwy_your_api_key_here
```
Then start Autohand Code as usual:
```bash
autohand
```
Autohand Code will now route all requests through LLM Gateway.
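Because Autohand Code reads the standard `OPENAI_*` variables, any OpenAI-compatible tooling in the same shell will pick up the gateway too. As a sketch (the helper name and the fail-fast behavior are our own, not part of Autohand Code), you can verify both variables are set before launching a session:

```python
import os

# Sketch: read the two variables Autohand Code uses for an OpenAI-compatible
# endpoint, failing fast if either is missing. Helper name is hypothetical.
def gateway_config():
    base_url = os.environ.get("OPENAI_BASE_URL")
    api_key = os.environ.get("OPENAI_API_KEY")
    missing = [name for name, val in
               [("OPENAI_BASE_URL", base_url), ("OPENAI_API_KEY", api_key)]
               if not val]
    if missing:
        raise RuntimeError(f"missing env vars: {', '.join(missing)}")
    # Normalize a trailing slash so path joins stay predictable.
    return {"base_url": base_url.rstrip("/"), "api_key": api_key}
```

Running this before `autohand` catches the common mistake of exporting the variables in one terminal and starting the agent in another.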
Configuration File
You can also configure LLM Gateway in Autohand Code's config file. Add or update the provider settings:
```json
{
  "provider": {
    "llmgateway": {
      "baseUrl": "https://api.llmgateway.io/v1",
      "apiKey": "llmgtwy_your_api_key_here"
    }
  },
  "model": "gpt-5"
}
```
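A malformed provider block is an easy way to get confusing startup errors. As a sketch (the exact schema and config file location belong to Autohand Code, so treat the checks as assumptions), you can lint the JSON before restarting the agent:

```python
import json

# Hypothetical sanity check for the provider block shown above. The schema
# keys ("provider", "llmgateway", "baseUrl", "apiKey", "model") mirror the
# example config; this is not an official validator.
def validate_gateway_config(raw: str) -> str:
    cfg = json.loads(raw)  # raises ValueError on malformed JSON
    gw = cfg["provider"]["llmgateway"]
    assert gw["baseUrl"].startswith("https://"), "baseUrl should be HTTPS"
    assert gw["apiKey"].startswith("llmgtwy_"), "gateway keys start with llmgtwy_"
    return cfg["model"]  # the model Autohand Code will request
```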
Why Use LLM Gateway with Autohand Code
- 180+ models — GPT-5, Claude Opus, Gemini, Llama, and more from 60+ providers
- Smart routing — Automatically selects the best provider based on uptime, throughput, price, and latency
- Cost tracking — Monitor exactly how much each autonomous session costs
- Single bill — No need to manage multiple API provider accounts
- Response caching — Repeated requests hit cache automatically
- Automatic failover — If one provider is down, requests route to another
Choosing Models
You can use any model from the models page. Popular options for Autohand Code:
| Model | Best For |
|---|---|
| `gpt-5` | Latest OpenAI flagship, highest quality |
| `claude-opus-4-6` | Anthropic's most capable model |
| `claude-sonnet-4-6` | Fast reasoning with extended thinking |
| `gemini-2.5-pro` | Google's latest flagship, 1M context window |
| `o3` | Advanced reasoning tasks |
| `gpt-5-mini` | Cost-effective, quick responses |
| `gemini-2.5-flash` | Fast responses, good for high-volume |
| `deepseek-v3.1` | Open-source with tool support |
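Since the gateway speaks the OpenAI-compatible API (implied by the `OPENAI_*` variables above), switching models is just a matter of changing the `model` field in the request body. A minimal sketch of that request shape, assuming the standard chat-completions format:

```python
import json

# Build a standard OpenAI-style chat-completions body. Any model name from
# the table above can go in "model"; the gateway routes it to a provider.
def chat_payload(model: str, prompt: str) -> str:
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
```

POST this body to `{OPENAI_BASE_URL}/chat/completions` with your `llmgtwy_` key as the bearer token to try a model outside of Autohand Code.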
Autohand Code Features with LLM Gateway
Terminal (CLI)
Autohand Code CLI works seamlessly with LLM Gateway. Set the environment variables and use all Autohand Code commands as normal—multi-file editing, agentic search, and autonomous code generation all work out of the box.
IDE Integration
Autohand Code's VS Code and Zed extensions respect the same environment variables. Set them in your shell profile and the IDE integration will automatically route through LLM Gateway.
Slack Integration
When using Autohand Code through Slack, configure the LLM Gateway base URL in your Autohand Code server settings to route all Slack-triggered coding tasks through the gateway.
Monitoring Usage
Once configured, all Autohand Code requests appear in your LLM Gateway dashboard:
- Request logs — See every prompt and response
- Cost breakdown — Track spending by model and time period
- Usage analytics — Understand your AI usage patterns
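If you export request logs for offline analysis, per-model spend is a simple aggregation. This sketch assumes a JSON Lines export with `model` and `cost` fields; the real export schema may differ, so adjust the field names to match what the dashboard produces:

```python
import json
from collections import defaultdict

# Hypothetical: tally spend per model from JSON Lines log records like
# {"model": "gpt-5", "cost": 0.02}. Field names are assumptions, not the
# documented export format.
def cost_by_model(lines):
    totals = defaultdict(float)
    for line in lines:
        rec = json.loads(line)
        totals[rec["model"]] += rec["cost"]
    return dict(totals)
```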
Get Started
- Sign up free — no credit card required
- Copy your API key from the dashboard
- Set the environment variables above
- Run `autohand` and start coding
Questions? Check our docs or join Discord.