Gemma 4 26B A4B reasoning

Provider: Google · Available via: OpenRouter
Gemma 4 26B A4B IT is an instruction-tuned Mixture-of-Experts (MoE) model from Google DeepMind. Although the model has 25.2B total parameters, only 3.8B are active per token during inference, delivering quality approaching a dense ~31B model at a fraction of the compute cost. It supports multimodal input including text, images, and video (up to 60 s at 1 fps), a 256K-token context window, native function calling, a configurable thinking/reasoning mode, and structured outputs. Released under Apache 2.0.
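A minimal chat-completion call through OpenRouter's OpenAI-compatible endpoint might look like the sketch below. The model slug `google/gemma-4-26b-a4b-it` and the exact shape of the `reasoning` options are assumptions; check the live model page before relying on them.

```python
import json
import urllib.request

# Assumed model slug -- verify on the OpenRouter model page.
MODEL = "google/gemma-4-26b-a4b-it"


def build_request(prompt: str, effort: str = "medium") -> dict:
    """Build a chat-completion payload with the reasoning mode
    enabled (the effort levels here are an assumption)."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "reasoning": {"effort": effort},
        "max_tokens": 1024,
    }


def send(payload: dict, api_key: str) -> dict:
    """POST the payload to OpenRouter's chat-completions endpoint."""
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Build (but do not send) a sample request body.
payload = build_request("Explain MoE routing in two sentences.")
print(json.dumps(payload, indent=2))
```

Setting `"reasoning": {"effort": "high"}` (or omitting the field entirely) is how the configurable thinking mode mentioned above would typically be toggled per request.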

Capabilities

Context Window 262k tokens
Max Output 262k tokens
Inputs text, image, video
Outputs text
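Because the model accepts image input, a user message can carry a list of content parts in the OpenAI-compatible format that OpenRouter uses; the image URL below is a placeholder.

```python
# A multimodal user message mixing a text part and an image part
# (OpenAI-style content-parts format; URL is a placeholder).
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is shown in this image?"},
        {
            "type": "image_url",
            "image_url": {"url": "https://example.com/photo.png"},
        },
    ],
}
print(message["content"][0]["text"])
```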

Pricing (per 1M tokens)

Input $0.13
Output $0.40
Cache Read -
Cache Write -
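At the listed rates, per-request cost is a simple linear function of token counts, which this small sketch works through:

```python
# Listed rates: $0.13 per 1M input tokens, $0.40 per 1M output tokens.
INPUT_PRICE = 0.13 / 1_000_000   # USD per input token
OUTPUT_PRICE = 0.40 / 1_000_000  # USD per output token


def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE


# Example: a 50K-token prompt with a 2K-token reply.
cost = request_cost(50_000, 2_000)
print(f"${cost:.4f}")  # -> $0.0073
```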

Supported Parameters

frequency_penalty, include_reasoning, logit_bias, max_tokens, presence_penalty, reasoning, repetition_penalty, response_format, seed, stop, structured_outputs, temperature, tool_choice, tools, top_k, top_p
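Several of these parameters combine naturally in a structured-output request. The following sketch builds such a body; the model slug, the `json_schema` variant of `response_format`, and the schema itself are assumptions to adapt.

```python
# Sketch of a request body using sampling parameters plus
# structured output via response_format; schema is a made-up example.
payload = {
    "model": "google/gemma-4-26b-a4b-it",  # assumed slug
    "messages": [
        {"role": "user", "content": "Extract: 'Ada Lovelace, born 1815'"}
    ],
    "temperature": 0.2,
    "top_p": 0.9,
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "person",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "birth_year": {"type": "integer"},
                },
                "required": ["name", "birth_year"],
            },
        },
    },
}
print(sorted(payload))
```

With `strict` schema enforcement, the completion is constrained to valid JSON matching the schema, which pairs well with the model's native function calling for downstream tool use.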