NewsJi.com > Science & Technology
Liquid AI Releases World's Fastest and Best-Performing Open-Source Small Foundation Models

Next-generation edge models outperform top global competitors; now available open source on Hugging Face
News date: 2025-07-27

CAMBRIDGE, MASS. -- Liquid AI announced the launch of its next-generation Liquid Foundation Models (LFM2), which set new records for speed, energy efficiency, and quality in the edge model class. The release builds on Liquid AI's first-principles approach to model design. Unlike traditional transformer-based models, LFM2 is composed of structured, adaptive operators that allow for more efficient training, faster inference, and better generalization, especially in long-context or resource-constrained scenarios.

Liquid AI has open-sourced LFM2, releasing the novel architecture in full transparency. LFM2's weights can now be downloaded from Hugging Face and are also available through the Liquid Playground for testing. Liquid AI also announced that the models will be integrated into its Edge AI platform and an iOS-native consumer app for testing in the coming days.
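Since the weights are published on Hugging Face, they can in principle be loaded with the standard `transformers` API. The sketch below is a minimal, hedged example: the repository id `LiquidAI/LFM2-1.2B` is an assumption based on the release naming, and the plain-text prompt helper is a stand-in for whatever chat template the actual checkpoint ships with.

```python
# Hedged sketch: loading an LFM2 checkpoint from Hugging Face with the
# standard `transformers` API. The repository id below is an assumption
# based on the announcement's naming; check the LiquidAI org on Hugging Face.
MODEL_ID = "LiquidAI/LFM2-1.2B"  # hypothetical repo id

def build_prompt(user_message: str) -> str:
    """Plain-text fallback prompt. A real checkpoint may ship a chat
    template (tokenizer.apply_chat_template), which should be preferred."""
    return f"User: {user_message}\nAssistant:"

def main() -> None:
    # transformers is imported lazily so the helper above stays usable
    # without the heavy dependency installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(build_prompt("What is edge AI?"), return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

if __name__ == "__main__":
    main()
```

Running this on CPU is exactly the deployment target the release describes; no GPU-specific configuration is needed for a first test.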

"At Liquid, we build best-in-class foundation models with quality, latency, and memory efficiency in mind," said Ramin Hasani, co-founder and CEO of Liquid AI. "The LFM2 series of models is designed, developed, and optimized for on-device deployment on any processor, truly unlocking the applications of generative and agentic AI on the edge. LFM2 is the first in a series of powerful models we will be releasing in the coming months."

The release of LFM2 marks a milestone in global AI competition and is the first time a U.S. company has publicly demonstrated clear efficiency and quality gains over China’s leading open-source small language models, including those developed by Alibaba and ByteDance.

In head-to-head evaluations, LFM2 models outperform state-of-the-art competitors across speed, latency and instruction-following benchmarks. Key highlights:

· On CPU, LFM2 exhibits 200 percent higher throughput and lower latency than Qwen3, Gemma 3n Matformer, and every other transformer- and non-transformer-based autoregressive model available to date.
· Not only is the model the fastest; on average it also performs significantly better than models in each size class on instruction following and function calling (the main attributes of LLMs for building reliable AI agents). This makes LFM2 the ideal choice for local and edge use cases.
· LFMs built on this new architecture and training infrastructure show a 300 percent improvement in training efficiency over previous LFM versions, making them the most cost-efficient way to build capable general-purpose AI systems.

Shifting large generative models from distant clouds to lean, on-device LLMs unlocks millisecond latency, offline resilience, and data-sovereign privacy, capabilities essential for phones, laptops, cars, robots, wearables, satellites, and other endpoints that must reason in real time. Aggregating high-growth verticals such as the edge AI stack in consumer electronics, robotics, smart appliances, finance, e-commerce, and education, before counting defense, space, and cybersecurity allocations, pushes the total addressable market (TAM) for compact, private foundation models toward the $1 trillion mark by 2035.

Liquid AI is engaged with a large number of Fortune 500 companies in these sectors. It offers ultra-efficient small multimodal foundation models with a secure, enterprise-grade deployment stack that turns every device into an AI device, locally. This gives Liquid AI the opportunity to capture an outsized share of the market as enterprises pivot from cloud LLMs to cost-efficient, fast, private, on-prem intelligence.


