Mixtral 8x7B Instruct chat

Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture-of-Experts model from Mistral AI for chat and instruction use. Each layer incorporates 8 experts (feed-forward networks), for a total of about 47 billion parameters, of which roughly 13 billion are active per token. The Instruct variant is fine-tuned by Mistral. #moe

Capabilities

Context Window 32k tokens
Max Output 16k tokens
Inputs Text
Outputs Text

Pricing (per 1M tokens)

Input $0.54
Output $0.54
Cache Read -
Cache Write -
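Since input and output share the same per-1M-token rate, estimating request cost is a single multiplication. A minimal sketch (the token counts in the example are hypothetical):

```python
# Estimate request cost from the per-1M-token rates listed above ($0.54 each).
INPUT_PRICE = 0.54 / 1_000_000   # USD per input token
OUTPUT_PRICE = 0.54 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# e.g. a 2,000-token prompt with a 500-token completion:
print(estimate_cost(2_000, 500))  # 0.00135 USD
```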

Supported Parameters

frequency_penalty, logit_bias, max_tokens, min_p, presence_penalty, repetition_penalty, response_format, seed, stop, temperature, tool_choice, tools, top_k, top_p
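A sketch of how several of these parameters might appear in a chat-completions request body. The endpoint path, header, and model slug follow common OpenRouter conventions but are assumptions here, not taken from this page; check the provider's API reference before use:

```python
import json

# Hypothetical request payload exercising some of the supported parameters
# listed above (temperature, top_p, top_k, max_tokens, seed, stop).
payload = {
    "model": "mistralai/mixtral-8x7b-instruct",  # assumed model slug
    "messages": [
        {"role": "user", "content": "Summarize mixture-of-experts in one sentence."}
    ],
    "temperature": 0.7,   # sampling randomness
    "top_p": 0.9,         # nucleus sampling cutoff
    "top_k": 40,          # restrict sampling to the 40 most likely tokens
    "max_tokens": 256,    # cap on completion length
    "seed": 42,           # best-effort reproducibility
    "stop": ["\n\n"],     # stop generation at a blank line
}

# POST this as JSON to the provider's chat-completions endpoint
# (e.g. https://openrouter.ai/api/v1/chat/completions) with an
# "Authorization: Bearer <API_KEY>" header.
print(json.dumps(payload, indent=2))
```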