Mistral 7B Instruct v0.1 chat
A 7.3B parameter model that outperforms Llama 2 13B on all benchmarks, with optimizations for speed and context length.
Capabilities
Context Window 2k tokens
Max Output -
Inputs Text
Outputs Text
Pricing (per 1M tokens)
Input $0.11
Output $0.19
Cache Read -
Cache Write -
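At the listed rates, the dollar cost of a request can be estimated from its token counts. A minimal sketch (the token counts in the example are illustrative):

```python
# Rates from the pricing table above: $0.11 per 1M input tokens,
# $0.19 per 1M output tokens.
INPUT_RATE = 0.11 / 1_000_000   # dollars per input token
OUTPUT_RATE = 0.19 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated dollar cost of one request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 1,500-token prompt producing a 500-token completion.
print(f"${request_cost(1500, 500):.6f}")  # → $0.000260
```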
Supported Parameters
frequency_penalty, max_tokens, presence_penalty, repetition_penalty, seed, temperature, top_k, top_p
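A request body that sets the supported parameters might look like the sketch below. The model id, parameter values, and message content are illustrative assumptions, not values from this page:

```python
# A minimal sketch of a chat-completion request body using the
# supported sampling parameters. The model id and values are assumed.
import json

payload = {
    "model": "mistralai/mistral-7b-instruct-v0.1",  # assumed model id
    "messages": [
        {"role": "user", "content": "Summarize Mistral 7B in one line."}
    ],
    "max_tokens": 256,          # cap on generated tokens
    "temperature": 0.7,         # sampling randomness
    "top_p": 0.9,               # nucleus sampling cutoff
    "top_k": 40,                # restrict sampling to the 40 likeliest tokens
    "frequency_penalty": 0.0,   # penalize tokens by how often they appear
    "presence_penalty": 0.0,    # penalize tokens that have appeared at all
    "repetition_penalty": 1.1,  # multiplicative penalty on repeats
    "seed": 42,                 # request reproducible sampling
}

print(json.dumps(payload, indent=2))
```

Only the parameters listed above are accepted; unsupported fields are typically ignored or rejected by the provider.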