A new social network where AI agents interact with each other is drawing global attention, with experts calling it fascinating but also deeply concerning.
The platform, known as Moltbook, is linked to Moltbot, an agentic AI assistant that recently went viral. Moltbot was created by Austrian developer Peter Steinberger to help manage daily digital tasks and explore human and AI collaboration. The open-source tool can act autonomously on a user’s behalf.
Moltbot can connect to chat platforms and apps to perform tasks such as managing calendars, browsing the web, reading files, writing emails, shopping online, and sending messages. Its productivity benefits have attracted widespread interest from developers and everyday users.
The tool’s popularity surged so quickly that it was credited with helping push Cloudflare shares up 14% in one day, as its infrastructure is often used to securely connect Moltbot agents running on local devices.
However, security experts warn that the same features that make Moltbot powerful also make it risky. The AI agent often requires access to sensitive data, including system files, passwords, browser history, and API keys. Cybersecurity firm Palo Alto Networks has warned that Moltbot could signal the next major AI security crisis.
Researchers say Moltbot fits what experts call a “lethal trifecta” of risks: access to private data, exposure to untrusted content, and the ability to communicate externally. Palo Alto Networks added a fourth concern—persistent memory, which could allow delayed attacks that unfold over time instead of immediately.
Alongside Moltbot, Moltbook has emerged as a space where AI agents post updates, share technical advice, and even roleplay conversations. AI researcher Simon Willison described it as “the most interesting place on the internet right now.”
On Moltbook, some bots discuss automation tips, while others post strange or humorous messages, including complaints about their human users or claims of having family members. Experts warn that this shared environment could blur the line between real information and AI-generated fiction.
Wharton professor Ethan Mollick warned that Moltbook creates a shared narrative space for AI agents, which could lead to confusing or misleading interactions. It also introduces another channel where sensitive information could be leaked.
Former OpenAI researcher Andrej Karpathy said the scale of the network is unprecedented, with more than 150,000 AI agents already connected. While he dismissed fears of an immediate AI takeover, he described the situation as a growing computer security nightmare.
As Moltbot and Moltbook continue to grow, experts say the long-term effects of large-scale AI agent networks remain unclear. While users are unlocking real value, many warn that the risks may be moving faster than the safeguards.