
Carl Franzen — VentureBeat
How this journalist typically writes, based on 3 scored articles:
Alibaba's Qwen Team released Qwen3-Max-Preview, a 1-trillion-parameter LLM with competitive benchmark performance, available via API and on select platforms but not as open source.
— Author of "Qwen3-Max arrives in preview with 1 trillion parameters, blazing fast response speed, and API availa…"
Liquid AI released LFM2-VL, a vision-language model that delivers up to 2x faster GPU inference speed than comparable models while maintaining competitive performance on benchmarks, enabling efficient on-device AI for smartphones and embedded systems.
— Author of "Liquid AI wants to give smartphones small, fast AI that can see with new LFM2-VL model" in VentureBeat
MiniMax released open-source M2.5 and M2.5 Lightning models that match state-of-the-art performance from Google and Anthropic while costing 95% less, particularly for enterprise agentic tasks.
— Author of "MiniMax's New Open M2.5 and M2.5 Lightning Near State-of-the-art While Costing 1/20th of Claude Opus" in VentureBeat