OpenClaw is great, but it’s fairly heavy to run 24/7 at home.
In practice it often needs >1GB RAM and a small server or Mac mini, which makes “personal AI agents” surprisingly expensive.
I recently came across PicoClaw, an open-source project by Sipeed that takes a very different approach.
Instead of running large runtimes locally, it acts as a lightweight agent client and delegates reasoning to cloud LLM APIs (GLM/GPT/Claude), while keeping orchestration local.
The interesting part is the footprint:
- < 10MB memory usage
- < 1s cold start
- single self-contained binary
- no Node.js or Python
- runs on ARM / x86 / RISC-V
So it can run on devices like a Raspberry Pi 3B, cheap RISC-V boards (~$10), or old Android TV boxes.
Technically it’s rebuilt from scratch in Go, which explains most of the startup and memory improvements. No dependency tree, no runtime environment — just one binary.
Despite the size, it still supports:
- shell execution
- file operations
- web search
- speech-to-text
- Telegram / Discord / QQ / DingTalk integrations
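To give a feel for how little code a capability like shell execution needs, here's a hedged Go sketch of such a tool; the `Tool` struct and its shape are hypothetical, not PicoClaw's actual tool interface:

```go
// Hypothetical sketch of a shell-execution "tool" an agent might
// expose to the model; not PicoClaw's actual API.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Tool is an assumed minimal interface: a name plus an executor.
type Tool struct {
	Name string
	Run  func(arg string) (string, error)
}

// shellTool runs a command line through /bin/sh and captures combined
// stdout/stderr, which is the core of a shell-execution capability.
var shellTool = Tool{
	Name: "shell",
	Run: func(arg string) (string, error) {
		out, err := exec.Command("/bin/sh", "-c", arg).CombinedOutput()
		return strings.TrimSpace(string(out)), err
	},
}

func main() {
	out, err := shellTool.Run("echo hello from the agent sandbox")
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // prints: hello from the agent sandbox
}
```

A handful of tools like this, plus a dispatch loop, is essentially the whole local footprint.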
Quick start is basically:
```shell
git clone https://github.com/sipeed/picoclaw.git
cd picoclaw
make build
./picoclaw agent
```
Feels more like a “microkernel” approach to agents compared to heavier stacks.
Interesting direction if you’re experimenting with edge AI or home lab automation.
Repo: https://github.com/sipeed/picoclaw
Site: https://picoclaw.org/