Auto Router

openrouter
Your prompt will be processed by a meta-model and routed to one of dozens of models (see below), optimizing for the best possible output. To see which model was used, visit Activity, or read the `model` attribute of the response. Your response will be priced at the same rate as the routed model. Learn more, including how to customize the models for routing, in our docs.

Requests will be routed to the following models:

- anthropic/claude-haiku-4.5
- anthropic/claude-opus-4.6
- anthropic/claude-sonnet-4.5
- anthropic/claude-sonnet-4.6
- deepseek/deepseek-r1
- google/gemini-2.5-flash-lite
- google/gemini-3-flash-preview
- google/gemini-3-pro-preview
- google/gemini-3.1-pro-preview
- meta-llama/llama-3.3-70b-instruct
- minimax/minimax-m2.5
- mistralai/codestral-2508
- mistralai/mistral-7b-instruct-v0.1
- mistralai/mistral-large
- mistralai/mistral-medium-3.1
- mistralai/mistral-small-3.2-24b-instruct-2506
- moonshotai/kimi-k2-thinking
- moonshotai/kimi-k2.5
- openai/gpt-5
- openai/gpt-5-mini
- openai/gpt-5-nano
- openai/gpt-5.1
- openai/gpt-5.2
- openai/gpt-5.2-pro
- openai/gpt-5.3-chat
- openai/gpt-oss-120b
- perplexity/sonar
- qwen/qwen3-235b-a22b
- x-ai/grok-3
- x-ai/grok-3-mini
- x-ai/grok-4
- x-ai/grok-4.1-fast
- z-ai/glm-5
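As a sketch of how this works in practice, a request can target the Auto Router by model slug and then inspect the `model` attribute of the response to see which model actually answered. The sketch below assumes the standard OpenRouter chat completions endpoint and an `OPENROUTER_API_KEY` environment variable; it is an illustration, not official client code.

```python
import json
import os
import urllib.request

def build_auto_request(prompt: str) -> dict:
    # "openrouter/auto" is the Auto Router's slug: the meta-model
    # picks one of the listed models to serve the request.
    return {
        "model": "openrouter/auto",
        "messages": [{"role": "user", "content": prompt}],
    }

def routed_model(response: dict) -> str:
    # The response's `model` attribute names the model the request
    # was routed to (also visible on the Activity page).
    return response["model"]

# Network call only runs when an API key is actually configured.
if __name__ == "__main__" and os.environ.get("OPENROUTER_API_KEY"):
    payload = build_auto_request("Explain tail recursion in one sentence.")
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    print(routed_model(body))  # prints the routed model's slug
```

Because the routed model varies per request, code that logs costs or compares outputs should record `routed_model(...)` rather than assuming a fixed model.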

Capabilities

Context Window: 2M tokens
Max Output: varies by routed model
Inputs: -
Outputs: -

Pricing (per 1M tokens)

Input: $-
Output: $-
Cache Read: -
Cache Write: -

Supported Parameters

frequency_penalty, include_reasoning, logit_bias, logprobs, max_tokens, min_p, presence_penalty, reasoning, reasoning_effort, repetition_penalty, response_format, seed, stop, structured_outputs, temperature, tool_choice, tools, top_k, top_logprobs, top_p, web_search_options
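These parameters go in the request body alongside `model` and `messages`; whether each one takes effect depends on the model the request is routed to. A hedged sketch of a request body exercising a few of them (parameter names are from the list above; the values are arbitrary illustrations):

```python
def build_tuned_request(prompt: str) -> dict:
    # Illustrative request body using several of the supported
    # parameters listed above. Values are example choices only.
    return {
        "model": "openrouter/auto",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,         # lower = more deterministic sampling
        "max_tokens": 512,          # cap on generated tokens
        "top_p": 0.9,               # nucleus sampling cutoff
        "frequency_penalty": 0.5,   # discourage repeated tokens
        "seed": 42,                 # best-effort reproducibility
        "response_format": {"type": "json_object"},  # structured output
    }
```

Since the Auto Router may pick any of the models above, it is safest to treat sampling parameters as hints: a routed model that does not support a given parameter may simply ignore it.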