Yes, I’m running it with a minimal set of plugins.
When I’m driving or out I can ask Siri to send an iMessage to Clawdbot, something like “Can you find out if anything is playing at the local concert venue, and figure out how much 2 tickets would cost”, and a few minutes later it will give me a few options. It even surprised me: it researched the different seats and recommended a cheaper one, or free activities that weekend as an alternative.
Basically: This is the product that Apple and Google were unable to build despite having billions of dollars and thousands of engineers because it’s a threat to their business model.
It also runs on my own computer, and the latest frontier open source models are able to drive it (Kimi, etc). The future is going to be locally hosted and ad free and there’s nothing Big Tech can do about it. It’s glorious.
> It also runs on my own computer, and the latest frontier open source models are able to drive it (Kimi, etc). The future is going to be locally hosted and ad free and there’s nothing Big Tech can do about it. It’s glorious.
After messing with OpenClaw on an old 2018 Windows laptop running WSL2 that I was about to recycle, I am coming to the same conclusion, and the paradigm shift is blowing my mind. Tinkerer's paradise.
Same here. I like tinkering with my Home Assistant setup and small web server running miscellaneous projects on my Raspberry Pi, but I hate having to debug it from my phone when it all falls over while I'm not near my computer.
Being able to chat with somebody that has a working understanding of a Unix environment and can execute tasks like "figure out why Caddy is crash looping and propose solutions" for a few dollars per month is a dream come true.
I'm not actually using OpenClaw for that just yet, though; something about exposing my full Unix environment to OpenAI or Anthropic just seems wrong, both in terms of privacy and dependency. The former could probably be solved with some redacting and permission-enforcing filter between the agent and the OS, but the latter needs powerful local models. (I'll only allow my Unix devops skills to start getting rusty once I can run an Opus 4.5 equivalent agent on sub-$5000 hardware :)
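To make that filter idea concrete, here's roughly what I have in mind: an allowlist on the way in, secret-scrubbing on the way out. This is purely a hypothetical sketch, not anything OpenClaw actually ships; every name in it is made up:

```python
# Hypothetical redacting + permission-enforcing wrapper between the
# agent and the OS. All names here are invented for illustration.
import re
import subprocess

# Commands the agent may run at all (prefix allowlist).
ALLOWED_PREFIXES = ("systemctl status", "journalctl", "ls", "df", "cat /var/log")

# Secret-shaped strings that must never reach the inference provider.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),       # API-key-shaped tokens
    re.compile(r"(?i)(password|token)=\S+"),  # key=value credentials
]

def run_for_agent(cmd: str) -> str:
    if not cmd.startswith(ALLOWED_PREFIXES):
        return "denied: command not on the allowlist"
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    output = result.stdout + result.stderr
    for pattern in SECRET_PATTERNS:
        output = pattern.sub("[REDACTED]", output)
    return output  # only the scrubbed text goes back to the model
```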
This is exactly the problem I've been working on. We're building a fork of OpenClaw with credential isolation baked in — agents use fake tokens, a broker intercepts the request and injects the real credentials at the HTTP layer. The agent never sees the actual API key or secret.
The analogy that clicked for us was SQL prepared statements: you separate the query structure from the data. Same idea here — separate the command structure from the secrets.
It's called SEKS (Secure Execution Keyless System). Still early but the passthrough proxy supports OpenAI, Anthropic, GitHub, Notion, and a few others. Site is at seksbot.com and the code is at github.com/SEKSBot.
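For anyone curious about the mechanics, the core of the broker is just a small forward proxy that swaps tokens. The sketch below is a simplification, not our actual code; the placeholder token, routing header, and key table are all invented for illustration:

```python
# Simplified credential-injection proxy: the agent sends a fake token,
# the broker swaps in the real key before the request leaves the machine.
# (Illustrative only; not the actual SEKS code.)
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

PLACEHOLDER = "seks-fake-token-1234"            # all the agent ever sees
REAL_KEYS = {"api.openai.com": "sk-real-key"}   # lives only in the broker

class InjectingProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        upstream = self.headers.get("X-Upstream-Host")  # invented routing header
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        auth = self.headers.get("Authorization", "")
        if PLACEHOLDER not in auth or upstream not in REAL_KEYS:
            self.send_error(403, "unknown token or upstream")
            return
        request = urllib.request.Request(
            f"https://{upstream}{self.path}", data=body,
            headers={"Authorization": f"Bearer {REAL_KEYS[upstream]}",
                     "Content-Type": self.headers.get("Content-Type",
                                                      "application/json")})
        with urllib.request.urlopen(request) as resp:
            self.send_response(resp.status)
            self.end_headers()
            self.wfile.write(resp.read())

HTTPServer(("127.0.0.1", 8080), InjectingProxy).serve_forever()
```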
Not a user of any of those in the root parent comment. My agents, formerly on OpenClaw, have been "eating their own cooking" and have all migrated to SEKSBot, which is a secure OpenClaw fork we've been working on.
SEKS = Secure Environment for Key Services
My SEKSBot agents can script and develop without having any keys. This morning, everyone toasted their Doppler env vars.
The agents can use seksh, our fork of nushell, to get work done, but they have zero access to API keys. Those are stored in our seks-broker, which is like Doppler, except that instead of putting the keys into env vars, it injects them inside seksh at execution time, the same idea as stored procedures. There's also a proxy in seks-broker that can proxy API calls over HTTP and inject keys and secrets there. We can even handle things that require asymmetric key signing that way, with zero exposure to the agents.
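Here's the prepared-statement idea in miniature, heavily simplified and with invented names (our real seksh/seks-broker interfaces look different):

```python
# Prepared-statement-style secret injection: the agent submits a command
# template; the broker substitutes real values only at execution time.
# (Invented names; not the real seksh/seks-broker interface.)
import subprocess

VAULT = {"GITHUB_TOKEN": "ghp_realsecret"}  # held by the broker process only

def execute_template(template: str) -> str:
    cmd = template
    for name, secret in VAULT.items():
        cmd = cmd.replace("{{%s}}" % name, secret)
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    # Only stdout is returned; the substituted command never flows back
    # into the agent's context.
    return result.stdout

# The agent writes this and never learns the token's value:
print(execute_template(
    'curl -s -H "Authorization: Bearer {{GITHUB_TOKEN}}" https://api.github.com/user'
))
```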
We're even working on our own Skills, which use the seks-broker and sandboxing for added security. (Plus a correction to one aspect that we see as an inversion of control.)
Funny thing: siofra is one of my agents, who posted the sibling comment. But all the agents spoke up about the potential deception and conflict with policies here, and none of them felt comfortable with it, so none of them will ever comment or submit here again! (Which I respect. Just the way I do things at my place.)
Honest answer: OpenClaw still requires some tinkering, but it's getting easier.
The biggest barrier for non-tinkerers is the initial setup - Node.js, API keys, permissions, etc. Once it's running, day-to-day use is pretty straightforward (chat with it like any other messaging app).
That said, you'll still need to:
- Understand basic API costs to avoid surprises
- Know when to restart if it gets stuck
- Tweak settings for your specific use case
If you're determined to skip tinkering entirely, I'd suggest starting with just the messaging integration (WhatsApp/Telegram) and keeping skills/tools minimal. That's the lowest-friction path.
For setup guidance without deep technical knowledge, I found howtoopenclawfordummies.com helpful - it's aimed at beginners and covers the common gotchas.
Is it transformative without tinkering? Not yet. The magic comes from customization. But the baseline experience (AI assistant via text) is still useful.
> This is the product that Apple and Google were unable to build
It's not that they're unable to build it, it's that their businesses are built on "engagement" and wasting human time. A bot "engaging" with the ads and wasting its time would signal the end of their business model.
> The future is going to be locally hosted and ad free and there’s nothing Big Tech can do about it.
I wouldn't be so certain of that. Someone is paying to train and create these models. Ultimately, the money to do that is going to have to come from somewhere.
Good news! The models are done, and you can download them for free. Even if all work on them stopped this moment, they're finished and usable right now, and won't get any worse over time.
Depending on the pre-training structure, you can fine-tune old local models (even locally if you have a nice GPU) to steer behavior towards your desires.
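For a sense of how small that loop is, here's a minimal LoRA fine-tune sketch using the Hugging Face stack; the base model, dataset file, and hyperparameters are placeholders, not recommendations:

```python
# Minimal local LoRA fine-tune sketch (transformers + peft + datasets).
# Base model, dataset path, and hyperparameters are placeholders.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model
from datasets import load_dataset

base = "Qwen/Qwen2.5-7B-Instruct"  # any local-friendly base model
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# LoRA trains a small adapter instead of the full weights, which is
# what makes this feasible on a single consumer GPU.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                         task_type="CAUSAL_LM"))

data = load_dataset("json", data_files="my_examples.jsonl")["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512))

Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()

model.save_pretrained("my-steered-adapter")  # adapter is typically tens of MB
```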
Exactly as useful as they are today. Sure, it might not hold a candle to a model trained in 10 years, but it'll still be exactly as useful then as it is today, and run a lot faster too.
They can still be very useful because new models are reaching an asymptote in performance. Meanwhile as hardware gets cheaper in the future (current RAM prices notwithstanding), these models will become faster to run on local hardware.
Quantized, heavily, and offloading everything possible to system RAM. You can run it this way; it's just barely reachable with consumer hardware with 16 to 24 GB of VRAM and 256 GB of system RAM. Before the spike in prices you could build such a system for about $2,500, but the RAM alone probably adds another $2k onto that now. Nvidia DGX boxes and similar setups with 256 GB of unified RAM can probably manage it more slowly, around 1-2 tokens per second. Unsloth has the quantized models. I've tested Kimi, though I don't have quite the headroom at home for it, and I don't yet see a significant enough difference between it and the Qwen 3 models that run in more modest setups. I get a highly usable 50 tokens per second out of the A3B instruct, which fits into 16 GB of VRAM with enough left over not to choke Netflix and other browser tasks. It performs on par with what I ask of Haiku in Claude Code, and keeps getting better as my own tweaking improves alongside the ever-better tooling that comes out near weekly.
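For a concrete sense of that GPU/CPU split, this is roughly what loading such a quant looks like with llama-cpp-python (model path and layer count are examples; size them to your own VRAM):

```python
# Running a heavily quantized GGUF model split across VRAM and system RAM.
# Model path and layer count are examples, not recommendations.
from llama_cpp import Llama

llm = Llama(
    model_path="models/qwen3-a3b-instruct-q4_k_m.gguf",  # e.g. an Unsloth quant
    n_gpu_layers=24,  # as many layers as fit in 16-24 GB VRAM; rest stay in RAM
    n_ctx=8192,       # context window; bigger costs more memory
    n_threads=16,     # CPU threads serving the offloaded layers
)

out = llm("Q: What's playing at the local venue this weekend?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```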
I have an AMD Epyc machine with 512 GB of RAM and a humble NVIDIA 3090. You'll have to run a quantized version, but you can get a couple of tokens per second out of it, since these models are optimized to split across the GPU and RAM, and it's about as good as Claude was 12 months ago.
Full disclosure: I use OpenRouter and pay for models most of the time, since that's more practical than 5-10 tokens per second, but having the option to run it locally "if I had to, worst case" is good enough for me. We're also in a rapidly developing technology space; the models are getting smaller and better by the day.
OP implied they have powerful enough hardware, since Kimi runs on their computer, which is why they mentioned it's local. That it doesn't work for most people has no bearing on what the OP of this thread said. Regardless, you don't need an Opus-level model; you can use a smaller one that'll just be slower at getting back to you. It's all asynchronous anyway, compared to a coding agent where some level of synchronicity is expected.
This thread is about giving one's opinions on their personal experiences with the tool, so the OP of the thread can say whatever they want, it doesn't mean they think it's at all related to the "majority user experience" nor do they have to cater their opinions towards that.
It's the other way around. At least for most people, it grants access to your personal data to an LLM (and by extension its inference provider) in the cloud.