Researchers at Chroma, in their 2025 Context Rot report, measured 18 LLMs and found that models do not use their context uniformly: performance degrades unevenly as input length grows.
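The core experimental idea behind this kind of measurement can be sketched as a needle-in-a-haystack probe: embed a known fact at varying depths in long filler text and check whether retrieval accuracy stays flat. This is an illustrative sketch, not Chroma's actual harness; the filler text, needle, and depth sweep are all assumptions.

```python
# Sketch of a needle-in-a-haystack probe (hypothetical setup; Chroma's
# actual evaluation harness differs). A model that used its context
# uniformly would answer equally well at every needle depth.

FILLER = "The sky was clear that day. " * 2000  # long distractor context
NEEDLE = "The secret code is 7421."

def build_prompt(depth: float, filler: str = FILLER, needle: str = NEEDLE) -> str:
    """Insert the needle at `depth` (0.0 = start, 1.0 = end) of the filler."""
    cut = int(len(filler) * depth)
    return filler[:cut] + needle + filler[cut:] + "\nQ: What is the secret code?"

# Sweep depths 0%, 10%, ..., 100%; each prompt would be sent to the model
# under test and scored on whether its answer contains "7421".
prompts = [build_prompt(d / 10) for d in range(11)]
```

Plotting accuracy against depth (and against total context length) is what reveals the non-uniform use of context.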
Factory has built a multi-layered context management architecture to bridge the gap between LLM context windows (on the order of 1M tokens) and enterprise systems whose relevant state spans many millions of tokens, enabling agentic workflows at scale.
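One layer of such an architecture is selecting what fits: scoring candidate context chunks for relevance and packing the best ones into a fixed token budget before each model call. The sketch below is an assumption-laden illustration; Factory's actual architecture is not described at this level of detail, and the relevance heuristic and the 4-characters-per-token estimate are placeholders.

```python
# Hedged sketch of budget-constrained context packing (illustrative only;
# `estimate_tokens` and the word-overlap `score` are stand-in heuristics,
# not Factory's implementation).

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # rough heuristic: ~4 chars per token

def pack_context(chunks: list[str], query: str, budget: int) -> list[str]:
    """Greedily select high-relevance chunks that fit within `budget` tokens."""
    def score(chunk: str) -> int:
        # Toy relevance: how many query words appear in the chunk.
        return sum(w in chunk.lower() for w in query.lower().split())

    selected, used = [], 0
    for chunk in sorted(chunks, key=score, reverse=True):
        cost = estimate_tokens(chunk)
        if used + cost <= budget:
            selected.append(chunk)
            used += cost
    return selected
```

In a real system this greedy pass would sit alongside other layers (summarization, caching, tool-mediated retrieval), but the budget constraint is the common thread: the model never sees more than its window allows.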