# Liquid
Liquid AI is an AI company founded by MIT researchers, focused on building efficient foundation models designed for edge AI and on-device deployment. The company raised $250M in December 2024 to scale capable and efficient general-purpose AI.
> "We've redefined what's possible with our proprietary architecture designed for efficiency, speed, and real-world deployment on any device." — Liquid AI
## Model Philosophy
Liquid Foundation Models (LFMs) are purpose-built for efficiency, speed, and real-world deployment across diverse hardware:
- Hybrid architecture: Combines multiplicative gates and short convolutions with attention
- Cross-platform: Runs on CPUs, GPUs, and NPUs (phones, laptops, vehicles, wearables)
- Designed for customization: Rapidly fine-tunable for specific use cases
- Privacy-first: Enables fully local, on-device deployment
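As a concrete sketch of on-device use, LFM2 checkpoints are published on Hugging Face and load through the standard transformers chat-template flow. The model id `LiquidAI/LFM2-1.2B` and the generation settings below are assumptions for illustration; consult the model card for exact usage:

```python
# Hedged sketch: running an LFM2 checkpoint locally via Hugging Face transformers.
# Model id and settings are illustrative assumptions, not official guidance.

def build_chat(prompt: str) -> list[dict]:
    """Wrap a user prompt in the messages format used by chat templates."""
    return [{"role": "user", "content": prompt}]

def generate_locally(prompt: str, model_id: str = "LiquidAI/LFM2-1.2B") -> str:
    # Deferred import: requires `pip install transformers torch`.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    input_ids = tokenizer.apply_chat_template(
        build_chat(prompt), add_generation_prompt=True, return_tensors="pt"
    )
    output = model.generate(input_ids, max_new_tokens=128)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
```

Because the whole pipeline runs in-process, no data leaves the device, which is what enables the privacy-first deployments described above.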
## Models
### Text Models
#### LFM2.5 Series (January 2026)
| Model | Parameters | Active | Context | Description |
|---|---|---|---|---|
| LFM2.5-1.2B-Instruct | 1.2B | 1.2B | 32K | Enhanced instruction following |
| LFM2.5-1.2B-Base | 1.2B | 1.2B | 32K | Base model for fine-tuning |
| LFM2.5-1.2B-Thinking | 1.2B | 1.2B | 32K | On-device reasoning under 1GB |
#### LFM2 Series (July–October 2025)
| Model | Parameters | Active | Context | Description |
|---|---|---|---|---|
| LFM2-24B-A2B | 24B | 2.3B | 32K | Largest LFM, fits in 32GB RAM |
| LFM2-8B-A1B | 8.3B | 1.5B | 32K | Best on-device MoE for quality and speed |
| LFM2-2.6B | 2.6B | 2.6B | 32K | Dense model with dynamic hybrid reasoning |
| LFM2-1.2B | 1.2B | 1.2B | 32K | Balanced edge deployment |
| LFM2-700M | 700M | 700M | 32K | Compact edge model |
| LFM2-350M | 350M | 350M | 32K | Smallest text model |
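The RAM claims in the table follow from simple arithmetic: a model's weight footprint is roughly parameter count times bytes per parameter at a given quantization width. A minimal sketch (ignoring KV cache, activations, and runtime overhead, which add to the total):

```python
def weight_footprint_gb(params: float, bits_per_param: int) -> float:
    """Approximate weight memory in GB (1 GB = 1e9 bytes) for a model
    stored at the given quantization width. Real usage is somewhat
    higher once KV cache and activations are included."""
    return params * bits_per_param / 8 / 1e9

# A 1.2B model at 4-bit quantization needs roughly 0.6 GB of weights,
# consistent with "on-device reasoning under 1GB" once overhead is added.
print(weight_footprint_gb(1.2e9, 4))   # 0.6
# A 24B model at 8-bit is about 24 GB, which still fits in 32GB RAM.
print(weight_footprint_gb(24e9, 8))    # 24.0
```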
### Vision-Language Models
#### LFM2.5 Series (January 2026)
| Model | Parameters | Description |
|---|---|---|
| LFM2.5-VL-1.6B | 1.6B | Enhanced efficient multimodal |
#### LFM2 Series (August–October 2025)
| Model | Parameters | Description |
|---|---|---|
| LFM2-VL-3B | 3B | Edge vision-language for embedded autonomy |
| LFM2-VL-1.6B | 1.6B | Efficient multimodal |
| LFM2-VL-450M | 450M | Compact vision-language |
### Audio Models
#### LFM2.5 Series (January 2026)
| Model | Parameters | Description |
|---|---|---|
| LFM2.5-Audio-1.5B | 1.5B | Enhanced end-to-end audio foundation model |
#### LFM2 Series (October 2025)
| Model | Parameters | Description |
|---|---|---|
| LFM2-Audio-1.5B | 1.5B | End-to-end audio foundation model |
### Specialized (Nano) Models
| Model | Description |
|---|---|
| LFM2-ColBERT-350M | Universal embedding model |
| LFM2 task-specific nanos | Fine-tuned variants for extraction, tool use, math, RAG, transcription, and Japanese |
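ColBERT-style models score relevance by late interaction: every query token embedding is compared against every document token embedding, and each query token contributes its best (maximum) match to the final score. A toy sketch of that MaxSim scoring over plain Python lists, illustrating the general technique rather than Liquid's implementation:

```python
def dot(a, b):
    """Dot product of two equal-length embedding vectors."""
    return sum(x * y for x, y in zip(a, b))

def maxsim_score(query_embs, doc_embs):
    """ColBERT late-interaction score: for each query token embedding,
    take its maximum dot product over all document token embeddings,
    then sum those per-token maxima."""
    return sum(max(dot(q, d) for d in doc_embs) for q in query_embs)

# Tiny example with 2-d embeddings: two query tokens, two doc tokens.
query = [[1.0, 0.0], [0.0, 1.0]]
doc = [[1.0, 0.0], [0.5, 0.5]]
print(maxsim_score(query, doc))  # 1.0 + 0.5 = 1.5
```

Because documents are encoded token-by-token offline, only the cheap MaxSim step runs at query time, which suits local RAG pipelines on constrained hardware.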
## Key Capabilities
**Architecture Highlights:**
- Hybrid blocks combining gated short convolutions with attention
- 3:1 ratio of linear attention to softmax attention blocks
- Mixture-of-Experts (MoE) variants for larger models
- 9 supported languages: English, Arabic, Chinese, French, German, Japanese, Korean, Spanish, Portuguese
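To make the "multiplicative gates and short convolutions" idea concrete, here is a toy, per-channel sketch of a causal short convolution whose output is scaled by an input-dependent sigmoid gate. This illustrates the general operator family only; it is not Liquid's actual block:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def gated_short_conv(x, conv_weights, gate_weights):
    """Toy gated short convolution over a 1-d sequence.

    At each step t, a short causal window of the input is convolved with
    conv_weights, and the result is multiplied by a sigmoid gate computed
    from the same window. The short kernel keeps cost linear in sequence
    length, unlike full attention."""
    k = len(conv_weights)
    out = []
    for t in range(len(x)):
        # Causal window [x[t], x[t-1], ...], zero-padded at the start.
        window = [x[t - j] if t - j >= 0 else 0.0 for j in range(k)]
        conv = sum(w * v for w, v in zip(conv_weights, window))
        gate = sigmoid(sum(g * v for g, v in zip(gate_weights, window)))
        out.append(conv * gate)  # multiplicative gating
    return out

print(gated_short_conv([1.0, 0.0, 0.0], [1.0, 0.0], [0.0, 0.0]))
```

The multiplicative gate lets each position modulate how much of the convolved signal passes through, which is the kind of input-dependent behavior the hybrid design pairs with a smaller number of attention blocks.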
**Recommended Use Cases:**
- Agentic tool use and function calling
- Offline document summarization and Q&A
- Privacy-preserving customer support
- Local RAG pipelines
- Data extraction and creative writing
- Multi-turn conversations
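For the agentic tool-use case, a small model typically emits a structured (often JSON) tool call that the host application parses and dispatches. A minimal host-side sketch; the `get_weather` tool and the call format are hypothetical, chosen only to illustrate the loop:

```python
import json

# Hypothetical tool registry; the tool name and schema are illustrative only.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub; a real tool would call a local API

TOOLS = {"get_weather": get_weather}

def dispatch_tool_call(model_output: str) -> str:
    """Parse a JSON tool call of the assumed form
    {"name": "...", "arguments": {...}} and invoke the matching tool."""
    call = json.loads(model_output)
    fn = TOOLS.get(call["name"])
    if fn is None:
        raise ValueError(f"unknown tool: {call['name']}")
    return fn(**call["arguments"])

print(dispatch_tool_call('{"name": "get_weather", "arguments": {"city": "Boston"}}'))
# prints "Sunny in Boston"
```

In a full agent loop, the tool's return value would be appended to the conversation and fed back to the model for the next turn, all of which can run fully offline on-device.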