Liquid

Liquid AI is an AI company founded by MIT researchers, focused on building efficient foundation models designed for edge AI and on-device deployment. The company raised $250M in December 2024 to scale capable and efficient general-purpose AI.

"We've redefined what's possible with our proprietary architecture designed for efficiency, speed, and real-world deployment on any device." — Liquid AI

Model Philosophy

Liquid Foundation Models (LFMs) are purpose-built for efficiency, speed, and real-world deployment across diverse hardware:

  • Hybrid architecture: Combines multiplicative gates and short convolutions with attention
  • Cross-platform: Runs on CPUs, GPUs, and NPUs (phones, laptops, vehicles, wearables)
  • Designed for customization: Rapidly fine-tunable for specific use cases
  • Privacy-first: Enables fully local, on-device deployment

Models

Text Models

LFM2.5 Series (January 2026)

| Model | Parameters | Active | Context | Description |
|---|---|---|---|---|
| LFM2.5-1.2B-Instruct | 1.2B | 1.2B | 32K | Enhanced instruction following |
| LFM2.5-1.2B-Base | 1.2B | 1.2B | 32K | Base model for fine-tuning |
| LFM2.5-1.2B-Thinking | 1.2B | 1.2B | 32K | On-device reasoning under 1GB |

LFM2 Series (July–October 2025)

| Model | Parameters | Active | Context | Description |
|---|---|---|---|---|
| LFM2-24B-A2B | 24B | 2.3B | 32K | Largest LFM, fits in 32GB RAM |
| LFM2-8B-A1B | 8.3B | 1.5B | 32K | Best on-device MoE for quality and speed |
| LFM2-2.6B | 2.6B | 2.6B | 32K | Dense model with dynamic hybrid reasoning |
| LFM2-1.2B | 1.2B | 1.2B | 32K | Balanced edge deployment |
| LFM2-700M | 700M | 700M | 32K | Compact edge model |
| LFM2-350M | 350M | 350M | 32K | Smallest text model |
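The "Active" column is what drives per-token compute for the MoE models, but the full parameter count still has to fit in memory. A rough weights-only footprint estimate (a sketch using standard quantization sizes, not vendor-published numbers; it ignores KV cache and runtime overhead):

```python
def weight_footprint_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate in-RAM size of model weights alone, in decimal GB."""
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

# LFM2.5-1.2B-Thinking at 4-bit quantization: ~0.6 GB of weights,
# consistent with the "on-device reasoning under 1GB" description above.
print(round(weight_footprint_gb(1.2, 4), 2))

# LFM2-24B-A2B at 8-bit: ~24 GB of weights, which is why it is described
# as fitting in 32GB RAM (leaving headroom for the KV cache and runtime).
print(round(weight_footprint_gb(24, 8), 1))
```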

Vision-Language Models

LFM2.5 Series (January 2026)

| Model | Parameters | Description |
|---|---|---|
| LFM2.5-VL-1.6B | 1.6B | Enhanced efficient multimodal |

LFM2 Series (August–October 2025)

| Model | Parameters | Description |
|---|---|---|
| LFM2-VL-3B | 3B | Edge vision-language for embedded autonomy |
| LFM2-VL-1.6B | 1.6B | Efficient multimodal |
| LFM2-VL-450M | 450M | Compact vision-language |

Audio Models

LFM2.5 Series (January 2026)

| Model | Parameters | Description |
|---|---|---|
| LFM2.5-Audio-1.5B | 1.5B | Enhanced end-to-end audio foundation model |

LFM2 Series (October 2025)

| Model | Parameters | Description |
|---|---|---|
| LFM2-Audio-1.5B | 1.5B | End-to-end audio foundation model |

Specialized (Nano) Models

| Model | Description |
|---|---|
| LFM2-ColBERT-350M | Universal embedding model |
| Various task-specific models | Extract, Tool, Math, RAG, Transcript, Japanese |

Key Capabilities

Architecture Highlights:

  • Hybrid model with Gated DeltaNet convolutions + attention
  • 3:1 ratio of linear attention to softmax attention blocks
  • Mixture-of-Experts (MoE) variants for larger models
  • 9 supported languages: English, Arabic, Chinese, French, German, Japanese, Korean, Spanish, Portuguese
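To make the "multiplicative gates and short convolutions" idea concrete, here is a toy sketch of one such operation in pure Python. This is illustrative only: the actual LFM blocks use learned projection matrices, many channels, and normalization, none of which are shown, and the kernel/gate values here are made up.

```python
import math

def gated_short_conv(x, weights, gate_weight):
    """Toy 1-D causal short convolution with a multiplicative sigmoid gate.

    x           : input sequence (list of floats)
    weights     : short conv kernel, e.g. length 3 (looks back 2 steps)
    gate_weight : scalar standing in for a learned gate projection
    """
    k = len(weights)
    out = []
    for t, x_t in enumerate(x):
        # Causal short convolution: mix the current step with a few prior steps.
        conv = sum(weights[j] * x[t - j] for j in range(k) if t - j >= 0)
        # Multiplicative gate: a data-dependent sigmoid scales the conv output.
        gate = 1.0 / (1.0 + math.exp(-gate_weight * x_t))
        out.append(gate * conv)
    return out

seq = [1.0, 2.0, 3.0, 4.0]
print(gated_short_conv(seq, weights=[0.5, 0.3, 0.2], gate_weight=1.0))
```

Because the kernel only looks back a fixed, small number of steps, the operation is linear in sequence length, which is why such blocks are cheaper than full attention on long inputs.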

Recommended Use Cases:

  • Agentic tool use and function calling
  • Offline document summarization and Q&A
  • Privacy-preserving customer support
  • Local RAG pipelines
  • Data extraction and creative writing
  • Multi-turn conversations
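The local RAG use case can be sketched end to end in a few lines. This is a hypothetical skeleton: keyword overlap stands in for the embedding model (in practice something like LFM2-ColBERT-350M would score relevance), and the documents and query are invented.

```python
def score(query, doc):
    """Keyword-overlap relevance score (stand-in for an embedding model)."""
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / max(len(q_terms), 1)

def retrieve(query, docs, top_k=1):
    """Return the top_k most relevant documents for the query."""
    ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
    return ranked[:top_k]

docs = [
    "The warranty covers manufacturing defects for two years.",
    "Battery life is rated at ten hours of continuous use.",
    "Returns are accepted within thirty days of purchase.",
]
context = retrieve("how long is the warranty", docs)[0]
# In a full pipeline, `context` would be inserted into the prompt of a
# locally running model (e.g. LFM2-1.2B) to ground its answer — keeping
# both the documents and the query on-device.
print(context)
```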
