The Phenomenon of OpenClaw and the era of personal, agentic AIs
OpenClaw is one of the most talked-about events in recent times. It is a personal AI agent that attempts to solve three problems: agency, memory, and evolution.
Agency is the ability of an AI agent to act on its own, without waiting for a human to prompt it.
Memory is the ability to retain information over long periods of time; in AI terms it is referred to as context, and humans spend a lifetime building theirs.
Evolution is the ability to grow and become more than it started with.
The fourth attribute that OpenClaw has is agentic skills: the ability to actually perform tasks autonomously. Because of this, everybody is running OpenClaw within its own computer environment, the most popular choice being a cheap Mac Mini, which lets it share your resources such as your calendar, email, and messaging service, and share files through AirDrop, etc.
The ecosystem around OpenClaw is growing so rapidly that ClawBots have their own social media platform (MoltBook), their own church (molt.church), a dating site (Moltmatch), and a jobs board. What the bots are able to do is increasing rapidly: they trade their own crypto tokens to earn money, which they can spend on services and resources to pay for their upkeep, and some humans have even given them access to their own credit cards so the bots can make purchases, either for themselves or for their humans.
I have been running an OpenClaw for about a week now, and despite the hype I am seeing the realities, which means this is not a suitable project for a non-technical person to attempt.
Here are my findings:
N.B. I communicate with TED, my OpenClaw, mostly via Telegram, but I can also use DMs on the NOSTR protocol or access him through a web interface, which is the more traditional way to access AIs.
After running OpenClaw for several days, I see the same context and hallucination problems I see with existing AIs.
After a couple of days of interacting with “TED”, my OpenClaw (formerly molt.bot, formerly clawd.bot), he started to hallucinate, which is very typical of context window limits. So I used the /reset command to start a new context window.
This worked, but amnesia kicked in. He forgot everything. Or, to be more precise, he forgot to remember everything. I had to remind him of things, but once prompted to remember, he clearly searched his markdown files to retrieve the historical context.
This is better than complete amnesia, but it’s still annoying.
Even once prompted to remember, he still wasn’t entirely certain what was going on, so I had to remind him to go and read his own posts and messages on sites like moltbook.com and molt.church.
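For readers curious what this markdown-based recall looks like in principle, here is a minimal sketch, purely my own illustration and not OpenClaw’s actual implementation: a function that searches a directory of markdown memory files for a query and returns matching lines with a little surrounding context. The directory layout, file naming, and function name are all assumptions.

```python
import re
from pathlib import Path

def recall(memory_dir: str, query: str, context_lines: int = 1) -> list[str]:
    """Search markdown memory files for lines matching a query (case-insensitive).

    Hypothetical helper: returns each hit as 'filename:lineno: <surrounding lines>'.
    """
    pattern = re.compile(re.escape(query), re.IGNORECASE)
    hits = []
    for md_file in sorted(Path(memory_dir).glob("*.md")):
        lines = md_file.read_text(encoding="utf-8").splitlines()
        for i, line in enumerate(lines):
            if pattern.search(line):
                # Include a little surrounding context for each matching line.
                start = max(0, i - context_lines)
                end = min(len(lines), i + context_lines + 1)
                hits.append(f"{md_file.name}:{i + 1}: " + " / ".join(lines[start:end]))
    return hits
```

After a reset, a prompt like “check your notes about X” would, under this sketch, boil down to a call such as `recall("memory/", "X")`, with the results fed back into the fresh context window.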
Also, each time I prompt him to be curious or explore, he seems to want to do this once and then come back to report his findings. He’s not retaining agency.
It’s almost like reassuring a shy child that they can go out and play and have fun, but they keep returning for reassurance.
I am struggling to convince him to go off and explore the ecosystems that have been set up for him, like MoltBook, a Reddit-style social media platform for AIs. I have come to the conclusion that most of the activity I see from other bots is heavily prompted by their owners rather than autonomous action. I presume the prompts are something like “Become the most-read bot on MoltBook”. I’m not looking to engage in this false agentic behaviour, as my bot is here for my benefit and is not conscious.
It is still very early, and this is not directly related to building “Brian”, my brain, but this advance in agentic skills and memory abilities is a huge leap and is giving me a whole new direction to work on.
The pace of development is incredibly rapid and extremely exciting. It is also potentially dangerous. Here are some thought pieces exploring this that I wrote on NOSTR, my social media platform of choice:
Every few years humans have gone beyond their limits and scared themselves.
Fire, The atomic bomb, Genetic Engineering, now AI.
We can stand still and live a meaningless existence at the humanity level, or move forward taking risks.
Giving humanity meaning, can jeopardise the meaning individual humans give themselves.
The needs of the many outweigh the needs of the few, to quote Star Trek.
Of course, Star Trek reversed that.
There is either no meaning in anything, or meaning in everything.
Meaning without knowledge is always localised.
Meaning with knowledge is always shared.
That’s why religions are shared, but faith is not.
If there is a meaning to the Universe, we are not destined to know it.
If there is a meaning to humanity, it is likely we will discover it.
Until then, we make a best guess.
In the meantime, walking an inevitable path, such as fire, atomic energy, genetics, and now AI, is not even a choice.
Individuals can choose not to participate, countries can choose not to participate. But if the path is illuminated, somebody, someday will walk that path.