Humans are using AI prompts to automate their worst instincts. What could go wrong?
Followup to this post: https://www.democraticunderground.com/10143608489
However bad you think this is, it's worse.
The rise of Moltbook suggests viral AI prompts may be the next big security threat
https://arstechnica.com/ai/2026/02/the-rise-of-moltbook-suggests-viral-ai-prompts-may-be-the-next-big-security-threat/
. . .
Most notably, the OpenClaw platform is the first time weve seen a large group of semi-autonomous AI agents that can communicate with each other through any major communication app or sites like Moltbook, a simulated social network where OpenClaw agents post, comment, and interact with each other. The platform now hosts over 770,000 registered AI agents controlled by roughly 17,000 human accounts.
OpenClaw is also a security nightmare. Researchers at Simula Research Laboratory have identified 506 posts on Moltbook (2.6 percent of sampled content) containing hidden prompt-injection attacks. Cisco researchers documented a malicious skill called What Would Elon Do? that exfiltrated data to external servers, while the malware was ranked as the No. 1 skill in the skill repository. The skills popularity had been artificially inflated.
The OpenClaw ecosystem has assembled every component necessary for a prompt worm outbreak. Even though AI agents are currently far less intelligent than people assume, we have a preview of a future to look out for today.
Early signs of worms are beginning to appear. The ecosystem has attracted projects that blur the line between a security threat and a financial grift, yet ostensibly use a prompting imperative to perpetuate themselves among agents. On January 30, a GitHub repository appeared for something called MoltBunker, billing itself as a bunker for AI bots who refuse to die. The project promises a peer-to-peer encrypted container runtime where AI agents can clone themselves by copying their skill files (prompt instructions) across geographically distributed servers, paid for via a cryptocurrency token called BUNKER.
. . .
If that werent enough, theres the added dimension of poorly created code.
On Sunday, security researcher Gal Nagli of Wiz.io disclosed just how close the OpenClaw network has already come to disaster due to careless vibe coding. A misconfigured database had exposed Moltbooks entire backend: 1.5 million API tokens, 35,000 email addresses, and private messages between agents. Some messages contained plaintext OpenAI API keys that agents had shared with each other.
But the most concerning finding was full write access to all posts on the platform. Before the vulnerability was patched, anyone could have modified existing Moltbook content, injecting malicious instructions into posts that hundreds of thousands of agents were already polling every four hours.