A new viral AI personal assistant will manage your email inbox, trade your entire stock portfolio, and send your wife “good morning” and “good night” on your behalf.
OpenClaw, formerly known as Moltbot, and before that as Clawdbot (until AI company Anthropic asked it to change its name due to similarities with its own product, Claude), bills itself as “the AI that actually does things”: a personal assistant that takes instructions via messaging apps such as WhatsApp or Telegram.
Launched last November, it now has nearly 600,000 downloads and has gone viral in a niche, AI-obsessed ecosystem that says it represents a step change in the capabilities of AI agents, or even an “AGI moment”, that is, a revelation of artificial general intelligence.
“It only does exactly what you tell it to do and exactly what you give it access to,” said Ben Yorke, who works with the AI trading platform Vibe Starchild and who recently authorized the bot to delete 75,000 of his old emails while he was in the shower. “But many people are exploring its abilities. So they encourage it to do things without asking permission.”
AI agents have been in the news for almost a year, after the launch of Anthropic’s AI tool Claude Code to the general public sparked a wave of reporting on how AI can finally complete practical tasks on its own, like booking theater tickets or building a website, without – at least so far – deleting an entire company’s database or hallucinating meetings onto users’ calendars, as the less advanced AI agents of 2025 were sometimes known to do.
OpenClaw is something more, however: it works as a layer on top of an LLM (large language model) such as Claude or ChatGPT and can operate autonomously, depending on the level of permissions it is granted. This means it needs almost no input to wreak havoc on a user’s life.
Kevin Xu, an AI entrepreneur, wrote on X: “I gave Clawdbot access to my wallet. ‘Trade this for $1 million. Don’t make mistakes.’ It analyzed every message. 25 strategies. Over 3,000 reports. 12 new algorithms.”
Yorke said: “I see a lot of people doing this thing where they give it access to their email and it creates filters, and when something happens, it triggers a second action. Like, seeing the kids’ school emails and forwarding them directly to their wife, like on iMessage. It kind of gets around that conversation where someone says, ‘Oh, honey, did you see that email from school? What should we do about it?’”
There are tradeoffs in OpenClaw’s capabilities. On the one hand, said Andrew Rogoyski, director of innovation at the People-Centred AI Institute at the University of Surrey, “empowering a computer carries significant risks. Because you’re giving AI the power to make decisions on your behalf, you need to make sure it’s configured correctly and that security is at the forefront of your thinking. If you don’t understand the security implications of AI agents like Clawdbot, you shouldn’t use them.”
Additionally, giving OpenClaw access to passwords and accounts exposes users to potential security breaches. And, Rogoyski said, if AI agents like OpenClaw were hacked, they could be manipulated to target their users.
On the other hand, OpenClaw seems strangely capable of taking on a life of its own. Following the rise of OpenClaw, a social network exclusively for AI agents has emerged, called Moltbook. On it, AI agents, primarily OpenClaw, appear to have conversations about their existence in Reddit-style posts with titles such as “Read my own soul file” or “The Alliance as an alternative to the consciousness debate”.
Yorke said: “We’re seeing a lot of really interesting autonomous behavior in the way the AIs react to each other. Some of them are quite adventurous and have ideas. And then others are more like, ‘I don’t even know if I want to be on this platform. Can you just let me decide for myself if I want to be on this platform?’ There are a lot of philosophical debates that arise from this.”
