Liquid: LFM2-2.6B
Model Type
Open Weight Model
2.6B parameters
LFM2-2.6B is Liquid AI's largest dense model, featuring dynamic hybrid reasoning and strong performance across instruction following, math, and multilingual tasks.
"LFM2 sets a new standard in terms of quality, speed, and memory efficiency." – Liquid AI
Overview
Released September 23, 2025, LFM2-2.6B is a dense model (all parameters active) that outperforms similarly sized models such as Llama-3.2-3B-Instruct and SmolLM3-3B across multiple benchmarks. It is the only model in the LFM2 dense family with dynamic hybrid reasoning capabilities.
Benchmark Performance
LFM2-2.6B outperforms similarly sized models across knowledge, instruction following, math, and multilingual benchmarks (gemma-3-4b-it, a larger model, leads on the math benchmarks):
| Benchmark | LFM2-2.6B | Llama-3.2-3B | SmolLM3-3B | gemma-3-4b-it |
|---|---|---|---|---|
| MMLU | 64.42 | 60.35 | 59.84 | 58.35 |
| IFEval | 79.56 | 71.43 | 72.44 | 76.85 |
| GSM8K | 82.41 | 75.21 | 81.12 | 89.92 |
| MGSM | 74.32 | 61.68 | 68.72 | 87.28 |
| MMMLU | 55.39 | 47.92 | 50.02 | 50.14 |
Recommended Use Cases
- Agentic tasks and function calling
- Data extraction
- RAG pipelines
- Creative writing
- Multi-turn conversations
Not recommended for: knowledge-intensive tasks or complex programming (fine-tune the model for these)
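To make the RAG use case concrete, here is a minimal sketch of the retrieve-then-prompt pattern. The bag-of-words `embed` function, the example documents, and the prompt wording are all illustrative assumptions; a real pipeline would use a proper sentence-embedding model and the model's chat template.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; a real RAG pipeline would use a sentence-embedding model.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # The retrieved passages become the context block of the prompt sent to the model.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

docs = [
    "LFM2-2.6B is a dense model with 2.6B parameters.",
    "The capital of France is Paris.",
    "Quantized to Q4_K_M, the model needs about 2GB of RAM.",
]
print(build_prompt("How much RAM does the quantized model need?", docs))
```

The prompt builder is where a small model like this earns its keep: grounding answers in retrieved context offloads the knowledge-intensive part that the model is not recommended for.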
When to Use LFM2-2.6B
Choose LFM2-2.6B when you need:
- Dense model simplicity (no MoE routing complexity)
- Dynamic hybrid reasoning for complex prompts
- Broad device compatibility
- Strong multilingual performance
Choose LFM2-8B-A1B when you need:
- Better code and knowledge capabilities
- Faster inference per quality level (MoE efficiency)
Choose LFM2-1.2B or smaller when you need:
- Minimum memory footprint
- Deployment on constrained devices
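The MoE efficiency point above can be made concrete with back-of-envelope arithmetic: per-token decode compute scales with *active*, not total, parameters. A minimal sketch, where the ~1B active-parameter reading of the "A1B" suffix and the 2-FLOPs-per-active-parameter rule of thumb are both assumptions:

```python
def flops_per_token(active_params: float) -> float:
    # Rule-of-thumb decode cost: ~2 FLOPs per active parameter per token.
    return 2 * active_params

dense = flops_per_token(2.6e9)  # LFM2-2.6B: all 2.6B parameters active
moe = flops_per_token(1.0e9)    # LFM2-8B-A1B: ~1B active (assumption from the name)
print(f"MoE decode uses ~{moe / dense:.0%} of the dense model's per-token compute")
```

The trade-off is memory: the MoE model still has to hold all 8B parameters, which is why the dense 2.6B remains the broader-compatibility choice.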
Hardware Requirements
| Quantization | RAM/VRAM Required |
|---|---|
| Q4_K_M | ~2GB |
| 8-bit | ~3GB |
| BF16 | ~5.5GB |
Runs efficiently on smartphones, tablets, and laptops with CPU, GPU, or NPU.
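The table's figures can be sanity-checked with simple arithmetic: weights take parameters × bits-per-weight ÷ 8 bytes, plus runtime overhead for the KV cache and activations. A minimal sketch, where the ~4.5 bits/weight average for Q4_K_M and the flat 10% overhead are assumptions:

```python
def estimate_gb(n_params: float, bits_per_weight: float, overhead: float = 0.10) -> float:
    # Weight bytes = params * bits / 8; add a flat overhead factor (assumption)
    # for KV cache, activations, and runtime buffers.
    weight_bytes = n_params * bits_per_weight / 8
    return weight_bytes * (1 + overhead) / 1e9

N = 2.6e9  # LFM2-2.6B parameter count
for name, bits in [("Q4_K_M", 4.5), ("8-bit", 8.0), ("BF16", 16.0)]:
    print(f"{name}: ~{estimate_gb(N, bits):.1f} GB")
```

The estimates land close to the table's figures; the remaining gap is context-length-dependent KV cache and framework overhead, which the flat 10% only approximates.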
Role in Series
LFM2 dense text models:
- LFM2-2.6B: Largest dense, dynamic reasoning (this model)
- LFM2-1.2B: Mid-size dense
- LFM2-700M: Compact
- LFM2-350M: Smallest