Llama 4 Maverick

Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forward pass.

Capabilities

Context Window: 1M tokens
Max Output: 16K tokens
Inputs: Text, Image
Outputs: Text

Pricing (per 1M tokens)

Input: $0.15
Output: $0.60
Cache Read: -
Cache Write: -
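The per-token rates above make cost estimation straightforward: multiply each token count by its rate per million. A minimal sketch, using the listed rates (the helper name is illustrative, not part of any API):

```python
# Rates from the pricing table: $0.15 per 1M input tokens,
# $0.60 per 1M output tokens.
INPUT_RATE = 0.15 / 1_000_000   # USD per input token
OUTPUT_RATE = 0.60 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A 4,000-token prompt with a 1,000-token completion costs $0.0012.
print(f"${estimate_cost(4_000, 1_000):.4f}")
```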

Supported Parameters

frequency_penalty, logit_bias, max_tokens, min_p, presence_penalty, repetition_penalty, response_format, seed, stop, structured_outputs, temperature, top_k, top_p
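These parameters are passed in the body of an OpenAI-compatible chat-completions request. A minimal sketch of such a request body, assuming OpenRouter's standard endpoint and the model slug `meta-llama/llama-4-maverick` (check the listing for the exact id before use):

```python
import json

# Example request body using a few of the supported parameters above.
# The model slug and prompt are illustrative assumptions.
payload = {
    "model": "meta-llama/llama-4-maverick",
    "messages": [
        {"role": "user", "content": "Summarize mixture-of-experts in one sentence."}
    ],
    "max_tokens": 512,       # cap the completion length
    "temperature": 0.7,      # sampling temperature
    "top_p": 0.9,            # nucleus sampling cutoff
    "frequency_penalty": 0.2,
    "seed": 42,              # best-effort reproducibility
}

# Serialized and sent as a POST to the chat-completions endpoint,
# e.g. https://openrouter.ai/api/v1/chat/completions with an API key header.
body = json.dumps(payload)
```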