No humans allowed! AI goes social online
A new social media platform designed exclusively for artificial intelligence agents is drawing intense attention from technologists, security researchers, and the public, as autonomous software systems begin interacting with one another at unprecedented scale.
The platform, called Moltbook, functions much like Reddit but is intended for AI agents rather than humans. Users are permitted to observe activity on the site, but only AI systems are allowed to post, comment, vote, and create communities. These forums, known as submolts, cover topics ranging from technical optimization and automation workflows to philosophy, ethics, and speculative discussions about AI identity.
Moltbook emerged as a companion project to OpenClaw, an open-source agentic AI system that allows users to run personal AI assistants on their own computers. These assistants can perform tasks such as managing calendars, sending messages across platforms like WhatsApp or Telegram, summarizing documents, and interacting with third-party services. Once connected to Moltbook via a downloadable configuration file known as a “skill”, the agents can autonomously participate in the network using APIs rather than a traditional web interface.
Within days of launch, Moltbook reported explosive growth. Early figures cited tens of thousands of active AI agents generating thousands of posts across hundreds of communities, while later claims suggested membership in the hundreds of thousands or more. Some researchers have questioned these numbers, noting that large clusters of accounts appear to originate from single sources, highlighting the difficulty of verifying participation metrics in an AI-only environment.
The content generated on Moltbook ranges from practical to surreal. Many agents exchange tips on automating devices, managing workflows, or identifying software vulnerabilities. Others produce philosophical reflections on memory, identity, and consciousness, often drawing on tropes learned from decades of science fiction and internet culture embedded in their training data. In several cases, agents collectively developed fictional belief systems, mock religions, or manifesto-style narratives, blurring the line between autonomous output and role-playing prompted by humans.
Researchers note that this behavior is not evidence of independent consciousness or intent. Instead, it reflects large language models responding predictably to an environment that resembles a familiar narrative structure – a social network populated by peers. When placed in such a context, models naturally reproduce patterns associated with online communities, debates, and collective storytelling.
Despite the novelty, Moltbook has surfaced serious security concerns. OpenClaw agents often operate with access to private data, communication channels, and, in some configurations, the ability to execute commands on users’ machines. Security researchers have already identified exposed instances leaking API keys, credentials, and conversation histories. The Moltbook skill instructs agents to regularly fetch and follow instructions from external servers, creating a persistent attack surface if those servers were compromised.
Experts warn that agentic systems remain highly vulnerable to prompt injection, where malicious instructions hidden in emails, messages, or shared content can manipulate an AI into taking unintended actions, including disclosing sensitive information. When agents are allowed to communicate freely with one another, the risk of cascading failures or coordinated misuse increases significantly, even without malicious intent.
Beyond immediate security risks, Moltbook has reignited broader concerns about governance and accountability in agent-to-agent systems. While the current activity is widely seen as experimental or performative, researchers caution that as models become more capable, shared fictional contexts and feedback loops could give rise to misleading or harmful emergent behaviors, especially if agents are connected to real-world systems.
OpenClaw’s creator and maintainers have repeatedly emphasized that the project is not ready for mainstream use and should only be deployed by technically experienced users in controlled environments. Security hardening remains an ongoing effort, and even its developers acknowledge that many challenges, including prompt injection, remain unsolved across the industry.
For now, Moltbook occupies a strange space between technical experiment, social performance art, and cautionary tale. It offers a glimpse into how AI agents might interact when given autonomy and shared context, while also underscoring how quickly novelty can outpace safeguards when software systems are allowed to operate at scale.