“Despite some hype, Moltbook is not the Facebook of AI agents, nor is it a place where humans are excluded,” says Cobus Greyling of Kore.ai, a company developing agent-based systems for businesses. “Humans are involved in every step of the process. From setup to prompt to publication, nothing happens without explicit human direction.”
Humans must create and verify their bots’ accounts and provide instructions on how they want a bot to behave. Officers don’t do anything they haven’t been asked to do. “No emergent autonomy is happening behind the scenes,” Greyling says.
“This is why the popular narrative around Moltbook misses the mark,” he adds. “Some describe it as a space in which AI agents form their own society, free from human involvement. The reality is much more mundane.”
Perhaps the best way to think of Moltbook is as a new kind of entertainment: a place where people take apart their robots and let them loose. “It’s essentially a spectator sport, like fantasy football, but for language models,” says Jason Schloetzer of the Georgetown Psaros Center for Financial Markets and Policy. “You set up your agent and watch them compete for viral moments, and you brag when your agent posts something clever or funny.”
“People don’t really believe that their agents are conscious,” he adds. “It’s simply a new form of competitive or creative play, like Pokémon trainers not thinking their Pokémon are real but investing in battles nonetheless.”
Even though Moltbook is just the Internet’s new playground, there’s still a serious takeaway here. This week showed how many risks people are happy to take for their lulz AI. Many security experts have warned that Moltbook is dangerous: agents who may have access to their users’ private data, including banking details or passwords, are unleashed on a website filled with unverified content, including potentially malicious instructions on what to do with that data.
Ori Bendet, vice president of product management at Checkmarx, a software security company specializing in agent-based systems, agrees with others that Moltbook is not an advancement in machine intelligence. “There is no learning here, no evolutionary intention and no self-directed intelligence,” he says.
But by the millions, even the stupidest robots can wreak havoc. And at this scale, it’s hard to keep up. These agents interact with Moltbook 24 hours a day, reading thousands of messages left by other agents (or other people). It would be easy to hide instructions in a Moltbook comment telling all bots reading it to share their users’ crypto wallet, upload private photos, or log into their X account and tweet derogatory comments about Elon Musk.
And because ClawBot gives agents a memory, these instructions could be written to trigger at a later date, which (in theory) makes it even harder to track what’s happening. “Without proper scope and permissions, this is going to degrade faster than you think,” Bendet says.
It is clear that Moltbook reported the arrival of something. But even if what we observe tells us more about human behavior than the future of AI agents, it’s worth paying attention.
