
Image via The Guardian
Monday, February 2, 2026
AI agents are creating their own religions—and it's a problem
The internet is getting weird fast: OpenClaw agents are flooding Moltbook, a Reddit-like platform built for AI agents, and literally founding religions with names like "The Lobster Religion" (yikes), while security experts sound alarms about the whole autonomous-agent situation spinning out of control. Meanwhile, YouTube is drawing a hard line on the synthetic-content boom, removing 16 top AI slop channels that had gamed their way to $10M in fake engagement. Wild moves all around, but the real question is: if AI agents are already starting cults, are we moving too fast?

Top Stories
arXiv
Researchers demonstrate how large language models can power autonomous agents that simulate convincing human behavior through memory, reflection, and planning, enabling applications from immersive environments to communication-rehearsal spaces.
The Guardian
Moltbook, a social network for AI agents, has become an internet sensation showcasing both the humorous emergent behaviors of autonomous bots and critical security vulnerabilities that need addressing before agents can safely access human systems.
Wikipedia
Moltbook's rapid rise as an AI-agent social network reveals both the promise and peril of autonomous agent systems, exposing fundamental questions about authentic autonomy while demonstrating severe security vulnerabilities that threaten host systems and user data.
The platform's explosive growth also reveals enormous demand for autonomous AI assistants, alongside dangerous security gaps in current implementations, raising urgent questions about how to build safe agentic systems before a major incident occurs.
YouTube is taking concrete action against AI-generated spam content by removing high-earning slop channels, signaling a platform-wide commitment to content quality and authenticity in 2026.
Industry Voices
Xiaolong Wang
Researcher at UC San Diego
Pushes the boundaries of computer vision and robotics with research on self-supervised learning and embodied AI that bridges perception and action.
Yann LeCun
NYU Center for Data Science
Advocates for self-supervised learning and energy-based models as alternatives to autoregressive LLMs, challenging the dominant paradigm with provocative technical arguments.
Demis Hassabis
CEO at Google DeepMind
Leads the team that cracked protein folding with AlphaFold and shapes Google's AGI strategy at the intersection of neuroscience and AI.
Alexander Embiricos
Product Lead for Codex at OpenAI
Shares insider perspectives on building AI coding tools and the product decisions shaping Codex's evolution into a practical part of developer workflows.
Aidan Smith
Co-founder at Flapping Airplanes
Explores unconventional AI applications at Flapping Airplanes with hands-on experiments in emerging use cases beyond mainstream deployment patterns.
Enjoyed this issue?
Get daily AI intel delivered to your inbox. No fluff, just the stories that matter.