I love it. I still use a 2020 M1 since I haven't had a reason to change it. So smooth, fast, silent, and thin. I don't really have a reason to buy a new one, but if it broke or I lost it I would for sure buy this new one. Pros are too bulky for me. The only thing I miss is the ability to run big local models at a high token rate.
Taking into account that Tesla chips deliver way more intelligence on smaller hardware, maybe we will get there. A GLM5 model running locally at 100 tokens/sec would be really dope!
Damn, I would love to buy it. In the past I tried different mods to get rid of Google, and the problem was always the same: lots of little annoyances making it very painful for daily usage. A de-Googled phone with security and without the annoyances would be very cool.
Another interesting thing is that I haven't had any reason to buy a new phone in a very long time, so we are probably at a point where the hardware is commoditized enough for Motorola to be able to ship exactly what I need.
Never thought I would find myself rooting for Motorola in 2026, but you never know!
I recently installed Zeroclaw instead of OpenClaw on a new VPS (it seems a little safer). It wasn't as straightforward as OpenClaw, but it was easy to set up. I added skills that call endpoints, plus cron jobs to trigger recurring skills. The endpoints are hosted on a separate VPS running FastAPI (Hetzner, ~$12/month for the two VPSes).
I'm assuming the claw might eventually be compromised. If that happens, the damage is limited: they could steal the GLM coding API key (which has a fixed monthly cost, so no risk of huge bills), spam the endpoints (which are rate-limited), or access a Telegram bot I use specifically for this project.
I think what it means is that coding is difficult but predictable, in the sense that you can solve it by throwing enough money at it.
Figuring out what to build in a way that actually leads to product-market fit, on the other hand, is something you cannot solve just by throwing money at it.
So in this frame, coding becomes 'the easy part': not because it's truly easy, but because it can be solved relatively straightforwardly with resources.
I think that unless you're doing simple tasks, skills are unreliable. For better reliability, I have the agent trigger APIs that handle the complex logic (and their own LLM calls) internally. Has anyone found a solid strategy for making complex 'skills' more dependable?
In my experience, every text "instruction" to the agent should be taken as a prayer rather than a guarantee. If you write compact agent guidance that is not contradictory and is local and useful to your project, the agent will follow it most of the time. There is nothing you can write that will force the agent to follow it all of the time.
If one can accept failure to follow instructions, then the world is open. That condition does not really comport with how we think about machines. Nevertheless, it is the case.
Right now, a productive split is to place things that you need to happen into tooling and harnessing, and place things that would be nice for the agent to conceptualize into skills.
My only strategy is what used to be called slash commands but are also skills now, i.e., I call them explicitly. I think that actually works quite well, and in the frontmatter properties you can allow specific tools and tell it to use specific hooks for security or validation.
I haven't done a lot with skills yet, but maybe try to leverage hooks to enforce skill usage, and move most of the skill's logic and complexity into a script so the agent only needs to reason about how to call the script.
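The "skill as thin wrapper" idea above might look like this: the skill text shrinks to "run this script", and all the real logic (validation, structured errors) lives in a deterministic script the agent invokes. The script name, arguments, and JSON shape are hypothetical.

```python
import argparse
import json
import sys

def search_papers(query: str, limit: int) -> dict:
    """The complex logic lives here, testable without any agent in the loop."""
    if not query.strip():
        raise ValueError("query must be non-empty")
    # Real lookup elided; return a stable JSON-friendly shape.
    return {"query": query, "limit": limit, "results": []}

def main(argv: list[str]) -> int:
    parser = argparse.ArgumentParser(description="Agent-callable paper search")
    parser.add_argument("query")
    parser.add_argument("--limit", type=int, default=5)
    args = parser.parse_args(argv)
    try:
        print(json.dumps(search_papers(args.query, args.limit)))
        return 0
    except ValueError as e:
        # Structured errors the agent can read instead of a stack trace.
        print(json.dumps({"error": str(e)}))
        return 1

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

The agent then only needs to get one shell command right, and every failure mode it sees is a JSON message you chose, not an exception it has to interpret.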
I think I'll wait until they are more reliable. For now I use skills, but they just specify which endpoint to call. It should also be safer: a different VPS, with no access to any credentials other than the bearer token.
I want to use OpenClaw, but it seems like a mess. I want to use the GLM coding plan as the backend since it's cheap. I found ZeroClaw to be an interesting option, maybe hosted on Hetzner. I don't want to give it access to my stuff; I just need it to remind me of things, call APIs that do stuff (like looking for papers and converting them into audio, or suggesting a grocery list, all behind APIs), and talk to me via WhatsApp/Telegram. I was also thinking about making a FastAPI server that Claw can call instead of using skills.
Has anyone tried something like this? Do you think it's a good idea / architecture?
I had OpenClaw running on a separate machine on the GLM coding plan, connected to its own WhatsApp account. Worked fine. However, OpenClaw sucks at reminding; it could barely handle cron jobs at all. My workaround was to instruct it to add reminders to its heartbeat.md with a clause to run once a certain datetime has passed (the heartbeat runs every 30 minutes).
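The heartbeat workaround described above can be sketched as a small parser: each reminder line carries an ISO due datetime, and the 30-minute heartbeat fires anything whose time has passed. The line format here is an assumption for illustration, not OpenClaw's actual convention.

```python
from datetime import datetime

def due_reminders(heartbeat_text: str, now: datetime) -> list[str]:
    """Return the reminder texts whose due time is at or before `now`."""
    due = []
    for line in heartbeat_text.splitlines():
        line = line.strip()
        # Assumed format: "- remind at 2026-03-01T09:00 | take out the trash"
        if not line.startswith("- remind at "):
            continue
        when_part, _, text = line[len("- remind at "):].partition(" | ")
        try:
            when = datetime.fromisoformat(when_part.strip())
        except ValueError:
            continue  # malformed line; skip rather than crash the heartbeat
        if when <= now:
            due.append(text.strip())
    return due
```

Because the heartbeat only runs every 30 minutes, reminders fire up to half an hour late; that latency is the price of not relying on the agent's cron handling.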
I tried Opus 4.6 recently and it’s really good. I had ditched Claude a long time ago for Grok + Gemini + OpenCode with Chinese models. I used Grok/Gemini for planning and core files, and OpenCode for setup, running, deploying, and editing.
However, Opus made me rethink my entire workflow. Now, I do it like this:
* PRD (Product Requirements Document)
* main.py + requirements.txt + readme.md (I ask for minimal, functional, modular code that fits the main.py)
* Ask for a step-by-step ordered plan
* Ask to focus on one step at a time
The super powerful thing is that I don’t get stuck on missing accounts, keys, etc. Everything is ordered and runs smoothly. I go rapidly from idea to working product, and it’s incredibly easy to iterate if I figure out new features are required while testing. I also have GLM via OpenCode, but I mainly use it for "dumb" tasks.
Interestingly, for reasoning about the standard logic inside the code, I found Gemini 3 Flash to be very good and relatively cheap. I don't use Claude Code for the actual coding because forcing everything via chat into a main.py encourages minimal code that's easy to skim; it gives me a clearer representation of the feature space.
Why would you use Grok at all? It's the one LLM whose owners are purposely trying to steer toward specific output (trying to make it "conservative"). I wouldn't want to use a product that I know outright is tainted by the owners trying to introduce bias.
I feel like they want to be like Apple, and OpenCode + open-source models are Linux. The thing is, Apple is (for some) way better in user experience and quality. I think they can pull it off only if they keep their distance from the others. But if Google/Chinese models become as good as Claude, then there won't be a reason, at least for me, to pay 10x for the product.
I want to learn robotics too!! I have a feeling that trying to build something helpful for myself, with help from LLMs, could be a good strategy, but I have no idea! Ideally something budget-friendly.