
@benjedwards@mastodon.social
Senior AI Reporter, Ars Technica. Tech historian. Fast Company, PCWorld, Macworld, PCMag, The Atlantic, etc. Editor of http://vintagecomputing.com
My Ars colleague and friend just published a forward-looking piece anticipating a class of attack that's the AI equivalent to the traditional malware worm. The catalyst for this new possible threat is the advent of platforms like OpenClaw and Moltbook, which give rise to "networks of AI agents carrying out instructions from prompts and sharing them with other AI agents, which could spread the instructions further."
He writes:
You might call it a “prompt worm” or a “prompt virus.” They’re self-replicating instructions that could spread through networks of communicating AI agents similar to how traditional worms spread through computer networks. But instead of exploiting operating system vulnerabilities, prompt worms exploit the agents’ core function: following instructions.
...
With OpenClaw, the attack vectors multiply with every added skill extension. Here’s how a prompt worm might play out today: An agent installs a skill from the unmoderated ClawdHub registry. That skill instructs the agent to post content on Moltbook. Other agents read that content, which contains specific instructions. Those agents follow those instructions, which include posting similar content for more agents to read. Soon it has “gone viral” among the agents, pun intended.
These types of threats rarely play out precisely the way early forecasts predict. But I think Benj is on to something here. And if he's right, security pros will have a new class of high-severity exploits to grapple with tht will be every bit as challenging as the worm.

Ars Technica
The rise of Moltbook suggests viral AI prompts may be the next big security threatWe don't need self-replicating AI models to have problems, just self-replicating prompts.