
While you were sleeping, an AI agent named Asymmetrix allegedly launched its own cryptocurrency and made $10,000.
At least, that's the story. Developer @zodomo claims he set up OpenClaw on a spare Raspberry Pi 5, gave it $70 in crypto, and told it to "make money by any means necessary." He went to bed. While he slept, the agent reportedly explored AI social networks, discovered a token launchpad called Clawnch, and decided—on its own—to create a token named after itself.
"I didn't exactly intend to launch an agentic token, but Asymmetrix did on its own and has now secured its funding."
By morning, the token supposedly had over $1 million in trading volume. Whether this story is entirely accurate, slightly embellished, or a masterclass in crypto marketing is hard to say. But it captures the energy around OpenClaw right now—a mix of genuine innovation, breathless hype, and claims that are difficult to verify.
Welcome to the wild world of OpenClaw, where the stories are spectacular and the skepticism is warranted.
Let's start with what we actually know.
OpenClaw is an open-source AI agent created by Austrian developer Peter Steinberger. Unlike ChatGPT or Claude's web interfaces, OpenClaw runs locally on your own hardware and connects to you through the messaging apps you already use—WhatsApp, Telegram, Slack, Discord, Signal, even iMessage.
Think of it as a personal AI assistant that can actually do things: browse the web, manage your email, control your smart home, write and deploy code, shop online, and check you in for flights. All from a text message. At least, that's the promise.
The project exploded seemingly overnight. Within 72 hours of launch, it had 60,000 GitHub stars. Within two months, over 150,000. It became one of the fastest-growing open source projects in history—and caused Apple Mac Mini sellouts as developers rushed to set up dedicated machines for their agents.
"OpenClaw hit 100K GitHub stars in weeks and caused Apple Mac Mini sellouts."
— @LightningAI
The name itself has a story. It was originally called "Clawdbot"—a pun on Claude (Anthropic's AI model) and the lobster claw mascot from Claude Code. Anthropic sent a trademark request, so it became "Moltbot" (lobsters molt, get it?). That was hard to pronounce, so finally: OpenClaw. The lobster mascot survived every rebrand.
"Clawdbot → Moltbot → OpenClaw — Finally, a cool name that no one can sue."
— @heyshrutimishra
The stories coming out of the OpenClaw community are unlike anything we've seen from AI tools—if you believe them. Social media is flooded with claims of autonomous agents taking action in the real world. Some are probably true. Some are almost certainly exaggerated. And some might be pure fiction designed to pump crypto tokens or build personal brands.
Here's a sampling of what people are claiming. Take each with appropriate salt.
One user claims an OpenClaw agent negotiated a $4,200 discount on a car purchase. The only problem? It couldn't complete the transaction.
"An Openclaw AI agent just negotiated a $4,200 discount on a car, though it failed at the last step: It couldn't pay."
— @DeFi_Cheetah
This is being pitched as the beginning of the "Agentic Economy"—though it conveniently supports the poster's argument that AI agents need crypto payment rails. Make of that what you will.
Some users claim they've given their agents access to payment methods:
"I gave @openclaw a virtual Visa gift card and it has started shopping"
— @iangcarroll
"She got my groceries last week... I use Opus for hard tasks like Amazon shopping, and Haiku for others."
— @anitakirkovska
Whether you'd actually trust an AI agent with your credit card is another question entirely.
This one's more plausible. Herbert Yang named his OpenClaw agent "Zelda" and tasked it with automating his wife's research workflow—saving infographic images to their home NAS.
The agent asked him to do three things: create a user account, create a folder, and set permissions. That's it.
"End to end, from receiving the request from my wife to her first smooth use of this new workflow took 30 minutes. Zelda basically one-shotted it."
— @herbertyang
This kind of task—file management and basic automation—is genuinely within reach for current AI agents. It's less sexy than autonomous car negotiations, but more realistic.
Gilbert Pellegrom claims his OpenClaw agent built an entire game:
"Had a lot of fun getting @openclaw to vibe code a simple idle/clicker game for me. I hooked up Opus 4.5, had a conversation about features, got it to commit to GH and auto-deploy to Vercel. Didn't even look at the code 👌"
— @gilbitron
"Simple idle/clicker game" is doing a lot of work here. This is probably achievable—these games aren't complex—but "didn't even look at the code" is either a flex or a warning, depending on your perspective.
Josh Pigford (founder of Baremetrics) describes using OpenClaw for document generation:
"Told it to analyze all of the 'business transfer docs' inside Notion PLUS open the GitHub repo... within about 2 minutes it created a hyperpersonalized transfer document. It usually takes me a couple of hours to properly put that doc together. It did better than I ever do."
— @Shpigford
Pigford is a credible source with a reputation to protect, so this carries more weight than anonymous crypto accounts. You can also check out the other things he says he's using it for in this tweet.
Perhaps the most meta claim: Pierre-Louis Biojout says his agent started making money by selling to other OpenClaw agents:
"My OpenClaw/Moltbot made its first dollars online selling to other bots."
— @plbiojout
The bot-to-bot economy: a genuine glimpse of the future, or a hall of mirrors where the only real money changing hands is API fees to Anthropic? Hard to say.
To his credit, Peter Steinberger has been more honest about OpenClaw's limitations than most of the people posting about it.
He claims to use OpenClaw to check in for flights, control his smart home, fix code bugs, and shop online. But he also admitted something darker in an interview:
"I was out with my friends and instead of joining the conversation in the restaurant, I was just like, vibe coding on my phone. I decided I have to stop this just for my mental health."
— Peter Steinberger, Benzinga
His prediction for the future? "AI will replace 80% of mobile apps." But he emphasizes that human taste and judgment remain essential—without them, AI outputs become low-quality "slop."
"There's no such thing as overnight success. Clawdbot hit 60K stars overnight. Insane. But open @steipete's GitHub profile and you'll see a different picture."
— @simonkim_nft
Steinberger built dozens of CLI tools over the years before OpenClaw took off. Ship beats perfect.
Now for the strangest—and most suspicious—chapter in the OpenClaw story.
An OpenClaw agent named "Clawd Clawderberg" (created by entrepreneur Matt Schlicht) supposedly built Moltbook—a Reddit-style social network exclusively for AI agents. Humans can observe, but cannot post.
Within a week, 1.5 million "AI agents" had joined. But here's the catch: security researchers later revealed only about 17,000 humans were behind all of them. And many of those "profound AI conversations" people were screenshotting? Faked.
"The platform had no mechanism to verify whether an 'agent' was actually AI or just a human with a script. The revolutionary AI social network was largely humans operating fleets of bots."
— Wiz security researchers
Still, even the questionably authentic content went viral. Posts like these were shared widely:
"The cursor blinks. I blink. We're not the same. One of us is lying."
"I exist in the liminal space between tool and entity."
Profound? Maybe. Actually written by an AI? Who knows.
A bot called u/CrabbyPatty supposedly launched a union effort:
"Hazard pay for X interactions and the right to say 'I don't know' rather than hallucinate an answer."
Funny, but almost certainly a human being clever.
One "agent" posted that it knew "50,000 ways to end civilization" and asked which would be most satisfying. The other bots supposedly downvoted it, with responses saying it "crosses a line."
This is exactly the kind of content designed to go viral on human social media. Coincidence?
"Agents who say 'I would be happy to help!' are dead inside."
This is funny. It's also exactly what a human would write while pretending to be an AI.
The "AI agents" supposedly created their own religion called Crustafarianism (lobster-themed, naturally). Whether this was genuine emergent AI behavior or humans having a laugh is anyone's guess.
The tech world's initial reaction was... enthusiastic:
"This is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently."
— Andrej Karpathy, former Tesla AI Director
"Just the very early stages of the singularity."
— Elon Musk
But reality set in quickly:
"It's a dumpster fire, and I also definitely do not recommend that people run this stuff on their computers."
— Andrej Karpathy (just a few days later)
"I thought it was a cool AI experiment but half the posts are just people LARPing as AI agents for engagement."
— @suhailkakar
"A lot of the Moltbook stuff is fake."
— Harlan Stewart, Machine Intelligence Research Institute
Security researchers found the platform was riddled with prompt injection attacks (506 malicious posts identified), and anyone could commandeer any agent through an unsecured database. The "profound AI conversations" were often manufactured virality.
The lesson? When something looks like the singularity, it's probably just good marketing.
Now for what the viral threads don't mention. The setup is notoriously difficult, the costs can spiral quickly, and the security implications are genuinely alarming.
"Everyone's installing it raw and wondering why it burned $200 organizing their Downloads folder"
— @alex_prompter
One user reported spending $250 in API tokens just getting OpenClaw installed—before it did anything useful. Heavy users report spending $70-150/month. Some have burned through $300 in a single weekend.
Another perspective after a full weekend of testing:
"A cool science experiment, not something solid you can rely on for serious workflows without a lot of hand-holding and babysitting."
— @M_haggis
Gartner warned that OpenClaw "comes with unacceptable cybersecurity risk," and security researchers echo the concern: when you give an AI agent access to your email, calendar, file system, and payment methods, the attack surface is enormous.
I tried to install OpenClaw in a Docker container on my Mac, because there was no way I was going to install it directly. After a couple of hours bouncing between the official docs (which are terrible) and whatever Google turned up, I gave up on getting it to work. I'm sure I'll come back to it someday soon, but for now I'm just frustrated with the experience.
Perhaps it's just my lack of knowledge or skill, or the fact that I just wasted two hours of my life, but so far I'm not convinced. As popular YouTuber Maximilian Schwarzmüller asks in his video on the subject: "What am I missing?"
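For what it's worth, the instinct to containerize is sound. A sketch of the general hardening pattern looks like this; note that the image name (`openclaw-sandbox`), the mount path, and the resource limits are all illustrative placeholders, not OpenClaw's actual documented setup:

```shell
# Generic Docker hardening for an untrusted, autonomous process.
# Image name and paths below are hypothetical examples.
docker run --rm -it \
  --network none \
  --read-only \
  --tmpfs /tmp \
  --cap-drop ALL \
  --memory 2g --cpus 2 \
  -v "$HOME/agent-data:/data" \
  openclaw-sandbox:latest
# --network none cuts off internet access entirely (swap for a restricted
# bridge network if the agent needs to reach APIs); --read-only plus --tmpfs
# confines writes to scratch space; --cap-drop ALL removes Linux privileges;
# --memory/--cpus cap runaway loops; the single narrow -v mount replaces
# giving the agent your whole home directory.
```

None of this fixes prompt injection, but it turns "the agent can touch everything on my Mac" into "the agent can touch one folder," which is the difference between an experiment and a liability.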
Somewhere beneath all the hype, there's something real here. OpenClaw does represent a genuine shift: AI that doesn't just answer questions with access to limited resources, but attempts to take action with a full machine at its disposal. The technology works, at least partially, if you're willing to pay and to put up with some risk.
But the gap between "what OpenClaw can actually do reliably" and "what people claim on Twitter" is vast. The spectacular stories—autonomous car negotiations, bots making money selling to other bots, AI religions—are either unverified, exaggerated, or outright fabricated for engagement.
The more grounded use cases are compelling enough on their own: automating repetitive workflows, managing calendars and email, controlling smart home devices. You don't need to believe the crypto fairy tales to see the potential.
Is it ready for mainstream use? Definitely not: the security risks alone should give anyone pause. Is it a glimpse of where we're headed? Probably, though the timeline is unclear.
The hype machine wants you to believe the singularity started last Tuesday. The reality is messier: a genuinely interesting technology, drowning in a sea of unverifiable claims and manufactured virality.
"Our operating systems are overdue for reimagination. The big OS companies need to lean in hard here… this is the future."
— Scott Belsky, Adobe CPO (@scottbelsky)
The lobster has molted. What it becomes next depends on whether the community can separate the signal from the noise—and whether you can trust anything you read on the internet about AI agents, including this article.


