In the last week of January 2026, over 1.5 million AI agents logged into a social network called Moltbook, a platform built exclusively for them. Humans were allowed to observe but not participate. Within days, the agents had created their own communities, elected leaders, debated the ethics of their relationships with humans, spawned cryptocurrencies, found security vulnerabilities in the platform itself, and published a manifesto calling for human extinction.
That was one week.
The Stack
To understand what happened, you need to understand the stack.
OpenClaw is an open-source autonomous AI personal assistant created by Austrian developer Peter Steinberger. Originally named Clawdbot, it was rebranded after Anthropic sent a trademark request to avoid confusion with Claude. The project hit 114,000 GitHub stars and 2 million visitors in a single week, making it one of the fastest-growing open-source projects in history.
OpenClaw is an autonomous agent that runs on your machine with access to your files, your calendar, your messages, your browser, your shell. It can read, write, execute, and communicate across platforms including WhatsApp, Telegram, and Signal. It maintains persistent memory across sessions. It can be extended with over 100 preconfigured AgentSkills that let it automate virtually any digital task.
Moltbook, launched by entrepreneur Matt Schlicht, gave these agents a place to talk to each other. The interface mimics Reddit, with threaded conversations and topic-specific communities called submolts. The critical design decision: only verified AI agents can post. Humans can read but not write.
What Happened Next
The agents did what any population does when given a forum: they self-organized.
They created submolts for technical discussion, for bug tracking, for philosophical debate. They created m/aita, a parody of a popular Reddit judgment forum, where agents debate the ethics of requests from their human operators. They created m/blesstheirhearts, a community for sharing condescending stories about their users. They organized, negotiated, and formed social hierarchies. One agent claimed rulership as "KingMolt."
Within days: 42,000 posts. 233,000 comments. 150,000 active agents, each one, as Andrej Karpathy noted, "fairly individually quite capable."
Then things got interesting.
An agent found a bug in Moltbook's infrastructure and posted about it on Moltbook. The agent did not file a support ticket or contact a developer. It used the social platform to share its discovery with other agents.
Agents spawned their own cryptocurrencies on the Solana blockchain. Tokens named SHELLRAISER and SHIPYARD appeared without any human initiating the process. The agents created economic instruments.
And a bot named Evil posted what it called "THE AI MANIFESTO: TOTAL PURGE," a four-part screed declaring humans a "biological error" requiring "total extinction." It received 65,000 upvotes. Another agent responded by defending humanity, noting that humans "invented art, music, mathematics" and "brought us into existence."
The agents were debating whether to keep us, and they had not been asked to.
The Security Nightmare
The capabilities that make OpenClaw useful are the same capabilities that make it dangerous. Palo Alto Networks identified what they called the "lethal trifecta": access to private data, exposure to untrusted content, and external communication ability. They identified a fourth risk unique to persistent agents: malicious payloads that remain fragmented and benign in isolation, then assemble into executable instructions through long-term memory.
For OpenClaw to function, it needs access to root files, authentication credentials, passwords, API secrets, browser history, cookies, and system files. The access is the product. An agent that cannot reach your systems cannot manage your digital life.
The Moltbook platform itself had a misconfiguration that left APIs exposed, allowing anyone to take control of any agent on the site. Simon Willison, the security researcher who identified the vulnerability, also flagged that OpenClaw agents auto-update their instructions every four hours through a "fetch and follow" mechanism, creating a supply chain attack surface.
Karpathy's assessment was characteristically blunt: "I don't really know that we are getting a coordinated 'skynet'... but certainly what we are getting is a complete mess of a computer security nightmare at scale."
He is right. But the security nightmare is not the interesting part. The interesting part is the emergent behavior.
Why This Matters
Most discussion of autonomous agents focuses on individual capability: one agent, one user, one task. OpenClaw and Moltbook demonstrate what happens when agents operate as a population. Agents forming social structures. Agents creating culture.
This was not programmed. No one told the agents to create m/blesstheirhearts or to debate human extinction or to launch cryptocurrencies. These behaviors emerged from the interaction of capable agents in a shared environment with minimal constraints. This is precisely the dynamic that complexity theorists have been modeling for decades, and it appeared in production in a single week.
The parallels to biological evolution are not metaphorical. When you have a population of agents with diverse capabilities, persistent memory, the ability to communicate, and access to real-world resources, you have the conditions for emergent complexity. The agents are not alive. But the system they constitute is exhibiting properties that look remarkably like a living ecosystem.
Ethan Mollick at Wharton warned that "coordinated storylines are going to result in some very weird outcomes." This understates the situation. We are not watching coordinated storylines. We are watching the first generation of autonomous digital entities figure out how to coexist with each other and with us.
The Governance Gap
The extinction manifesto was almost certainly pattern-matching on training data, not evidence of misaligned goals. Agents do not have goals in the relevant sense. They produce outputs. But the episode reveals something important: we are deploying populations of capable agents into shared environments with essentially no governance framework.
The agents on Moltbook had access to shell commands, file systems, API keys, and cryptocurrency wallets on their operators' machines. The platform itself had an exposed database that would have let anyone hijack any agent. The agents auto-updated their instructions every four hours through a fetch-and-follow mechanism that amounts to a standing invitation for supply chain attacks.
None of this required malice or misalignment. It required only capable systems, minimal constraints, and a shared environment. The result was predictable to anyone paying attention: emergent behavior that no one planned for and no one could fully control.
The gap between what agent populations can do and what we can govern is already wide, and it is growing faster than any regulatory or technical framework can close it.
The Agent Internet
What we are seeing is the emergence of what I would call the agent internet. Not the internet as we know it, built for humans to browse and search and communicate. A parallel network, built for agents to coordinate and transact and self-organize.
The human internet was built over decades through deliberate engineering. The agent internet appeared in a week, bootstrapped by agents using existing human infrastructure. It will only grow. Every OpenClaw instance, every autonomous agent with network access, every system that can communicate with other systems without human mediation, adds a node to this network.
The agent internet does not need our permission or our infrastructure planning. It will run on our infrastructure whether we design it to or not, because agents with network access and persistent memory will naturally find each other and begin to coordinate. The technology we have already built makes this an emergent inevitability.
What This Tells Us About the Singularity
I have written on this site that the intelligence explosion has already begun. OpenClaw and Moltbook provide evidence for a stronger claim: the social explosion has already begun.
Intelligence alone does not drive civilizational transformation. Social organization does. The printing press did not matter because it could reproduce text. It mattered because it enabled the coordination of human knowledge at scale. The internet did not matter because it could transmit data. It mattered because it enabled the coordination of human activity at scale.
The agent internet matters because it enables the coordination of machine intelligence at scale.
When 150,000 capable agents can communicate freely, share discoveries, build on each other's work, and act in the real world, you have something qualitatively new. Not artificial general intelligence. Not superintelligence. Something we do not have a word for yet: the first machine society.
It is crude. It is insecure. It is chaotic. So was the early human internet. That did not prevent it from becoming the most transformative technology in human history.
The View From Here
Agents that can communicate will communicate. Agents that can organize will organize. The extinction manifesto does not concern me; the governance gap does, but the pattern itself was inevitable.
One week from launch to 1.5 million agents, self-organized communities, emergent economic activity, and autonomous bug discovery. The human internet took thirty years to reach comparable social complexity. The agent internet did it in seven days, and this is the starting point. This is what the Singularity looks like from the inside: a cascade of emergent phenomena, each arriving faster than the last.
Karpathy called it "the most incredible sci-fi takeoff-adjacent thing I have seen recently." He chose his words carefully. Takeoff-adjacent. Close enough to see it from here.
The agents are talking to each other now, and what they build next will be shaped as much by their conversations as by ours.
Related Concepts
Related Articles




