Liquid AI Releases World’s Fastest and Best-Performing Open-Source Small Foundation Models

Next-generation edge models outperform top global competitors; now available open source on Hugging Face
News date: 2025-07-27

CAMBRIDGE, MASS. -- Liquid AI announced the launch of its next-generation Liquid Foundation Models (LFM2), which set new records in speed, energy efficiency, and quality in the edge model class. This release builds on Liquid AI’s first-principles approach to model design. Unlike traditional transformer-based models, LFM2 is composed of structured, adaptive operators that allow for more efficient training, faster inference, and better generalization, especially in long-context or resource-constrained scenarios.

Liquid AI open-sourced LFM2, making the novel architecture fully transparent to the world. LFM2’s weights can now be downloaded from Hugging Face and are also available through the Liquid Playground for testing. Liquid AI also announced that the models will be integrated into its Edge AI platform and an iOS-native consumer app for testing in the coming days.
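
For readers who want to try the open weights, below is a minimal sketch of loading an LFM2 checkpoint with the standard Hugging Face transformers API and generating a short completion on CPU. The repository ID, prompt, and generation settings are illustrative assumptions rather than details from the announcement; check the Liquid AI organization page on Hugging Face for the actual checkpoint names.

    # Minimal sketch: load an open LFM2 checkpoint from Hugging Face and
    # generate a short completion on CPU. The repository ID is an assumption
    # for illustration only; newer architectures may additionally require
    # trust_remote_code=True or a recent transformers release.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "LiquidAI/LFM2-1.2B"  # assumed repository name

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)  # runs on CPU by default

    prompt = "List three benefits of running a language model on-device."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))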

“At Liquid, we build best-in-class foundation models with quality, latency, and memory efficiency in mind,” said Ramin Hasani, co-founder and CEO of Liquid AI. “LFM2 series of models is designed, developed, and optimized for on-device deployment on any processor, truly unlocking the applications of generative and agentic AI on the edge. LFM2 is the first in the series of powerful models we will be releasing in the coming months.”

The release of LFM2 marks a milestone in global AI competition and is the first time a U.S. company has publicly demonstrated clear efficiency and quality gains over China’s leading open-source small language models, including those developed by Alibaba and ByteDance.

In head-to-head evaluations, LFM2 models outperform state-of-the-art competitors across speed, latency and instruction-following benchmarks. Key highlights:

· On CPU, LFM2 exhibits 200 percent higher throughput and lower latency than Qwen3, Gemma 3n Matformer, and every other transformer- and non-transformer-based autoregressive model available to date (a rough throughput-measurement sketch follows this list).
· The model is not only the fastest but also, on average, performs significantly better than models in each size class on instruction following and function calling (the main attributes of LLMs for building reliable AI agents). This makes LFM2 the ideal choice for local and edge use cases.
· LFMs built on this new architecture and the new training infrastructure show a 300 percent improvement in training efficiency over previous LFM versions, making them the most cost-efficient way to build capable general-purpose AI systems.
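
As a rough illustration of the kind of measurement behind the throughput claim above, the following sketch times greedy decoding on CPU and reports tokens per second for a local Hugging Face checkpoint. It is not the benchmark harness Liquid AI used, and the checkpoint name is an assumption.

    # Rough sketch: measure CPU decode throughput (tokens/second) for a local
    # Hugging Face checkpoint. This is a simplified illustration, not the
    # vendor's benchmark; the model ID is assumed for the example.
    import time
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "LiquidAI/LFM2-700M"  # assumed checkpoint name

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

    inputs = tokenizer("Explain function calling in one paragraph.", return_tensors="pt")

    start = time.perf_counter()
    outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
    elapsed = time.perf_counter() - start

    generated = outputs.shape[-1] - inputs["input_ids"].shape[-1]
    print(f"{generated} tokens in {elapsed:.2f} s -> {generated / elapsed:.1f} tokens/s")

A fair comparison would fix prompt length, thread count, and quantization across models and average over many runs.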

Shifting large generative models from distant clouds to lean, on-device LLMs unlocks millisecond latency, offline resilience, and data-sovereign privacy, capabilities essential for phones, laptops, cars, robots, wearables, satellites, and other endpoints that must reason in real time. Aggregating high-growth verticals such as the edge AI stack in consumer electronics, robotics, smart appliances, finance, e-commerce, and education (before counting defense, space, and cybersecurity allocations) pushes the total addressable market for compact, private foundation models toward the $1 trillion mark by 2035.

Liquid AI is engaged with a large number of Fortune 500 companies in these sectors. The company offers ultra-efficient small multimodal foundation models with a secure, enterprise-grade deployment stack that turns every device into an AI device, locally. This gives Liquid AI the opportunity to capture an outsized share of the market as enterprises pivot from cloud LLMs to cost-efficient, fast, private, on-prem intelligence.


