LFM2-8B-A1B
LFM2-8B-A1B is an efficient on-device Mixture-of-Experts (MoE) model from Liquid AI's LFM2 family, built for fast, high-quality inference on edge hardware. It has 8.3B total parameters but activates only ~1.5B per token, delivering strong quality while keeping compute and memory usage low, which makes it well suited to phones, tablets, and laptops.
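The "8.3B total, ~1.5B active" figure comes from MoE routing: a router scores all experts per token, but only the top-k actually run. The toy sketch below illustrates the idea only; the expert count, top-k value, and routing details here are illustrative assumptions, not Liquid AI's implementation.

```python
import math
import random

random.seed(0)

NUM_EXPERTS = 32  # illustrative; not the real LFM2-8B-A1B expert count
TOP_K = 4         # experts actually executed per token
DIM = 8           # toy hidden dimension

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(token_vec, router_weights):
    # Router score = dot(token, router row) for each expert.
    scores = [sum(t * w for t, w in zip(token_vec, row)) for row in router_weights]
    probs = softmax(scores)
    # Only the top-k experts run, so only their parameters are "active".
    return sorted(range(NUM_EXPERTS), key=lambda i: probs[i], reverse=True)[:TOP_K]

router = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(NUM_EXPERTS)]
token = [random.gauss(0, 1) for _ in range(DIM)]
active = route(token, router)
print(f"{len(active)}/{NUM_EXPERTS} experts active")  # prints "4/32 experts active"
```

Because the non-selected experts are skipped entirely, per-token compute scales with the active parameter count rather than the total, which is what makes the model practical on edge devices.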
Capabilities
Context window: 32K tokens
Max output: 0 tokens
Inputs: -
Outputs: -
Pricing (per 1M tokens)
Input: $0.01
Output: $0.02
Cache read: -
Cache write: -
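The per-1M-token prices above translate directly into a per-request cost estimate. A minimal sketch, assuming you already know the prompt and completion token counts (e.g. from the API's usage field):

```python
# Listed prices: $0.01 per 1M input tokens, $0.02 per 1M output tokens.
INPUT_PRICE_PER_TOKEN = 0.01 / 1_000_000
OUTPUT_PRICE_PER_TOKEN = 0.02 / 1_000_000

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in dollars for a single request."""
    return (input_tokens * INPUT_PRICE_PER_TOKEN
            + output_tokens * OUTPUT_PRICE_PER_TOKEN)

# 10k prompt tokens + 2k completion tokens:
cost = estimate_cost(10_000, 2_000)
print(f"${cost:.5f}")  # prints "$0.00014"
```

At these rates even a full 32K-token prompt costs well under a cent, which is typical for small MoE models priced for high-volume use.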
Supported parameters
frequency_penalty, max_tokens, min_p, presence_penalty, repetition_penalty, seed, stop, temperature, top_k, top_p
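These parameters follow the usual OpenAI-compatible chat-completions shape. A sketch of a request payload using every supported parameter; the model slug and the values chosen here are assumptions for illustration, not recommendations:

```python
# Hypothetical chat-completions payload for LFM2-8B-A1B.
# The model slug "liquid/lfm2-8b-a1b" is an assumption; check your
# provider's listing for the exact identifier.
payload = {
    "model": "liquid/lfm2-8b-a1b",
    "messages": [
        {"role": "user", "content": "Summarize MoE routing in one sentence."}
    ],
    # All sampling parameters from the supported list above:
    "max_tokens": 256,
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 40,
    "min_p": 0.05,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.1,
    "seed": 42,
    "stop": ["</s>"],
}
```

The payload would be sent as the JSON body of a POST to the provider's chat-completions endpoint; parameters you omit fall back to the provider's defaults.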