> Unless previously approved, we do not allow third party developers to offer Claude.ai login or rate limits for their products, including agents built on the Claude Agent SDK. Please use the API key authentication methods described in this document instead.
wow. ACP is used within zed so I guess zed is safe with ACP using claude code
I wonder if Opencode could use the ACP protocol as well. ACP seems to be a good abstraction; I should probably learn more about it. Any TLDRs on how it works?
According to Opus, ACP is designed specifically for IDE clients (with coding agent “servers”), and there’s some impedance mismatch here that would need to be resolved for one agent CLI to operate as a client. I haven't validated this though.
---
1. ACP Servers Expect IDE-like Clients
The ACP server interface in Claude Code is designed for:
∙ Receiving file context from an IDE
∙ Sending back edits, diagnostics, suggestions
∙ Managing a workspace-scoped session
It’s not designed for another autonomous agent to connect and say “go solve this problem for me.”
2. No Delegation/Orchestration Semantics in ACP
ACP (at least the current spec) handles:
∙ Code completions
∙ Chat interactions scoped to a workspace
∙ Tool invocations
It doesn’t have primitives for:
∙ “Here’s a task, go figure it out autonomously”
∙ Spawning sub-agents
∙ Returning when a multi-step task completes
3. Session & Context Ownership
Both tools assume they own the agentic loop. If OpenCode connects to Claude Code via ACP, who’s driving? You’d have two agents both trying to:
∙ Decide what tool to call next
∙ Maintain conversation state
∙ Handle user approval flows
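A rough sketch of the shape of the exchange as I understand it, in Python just to show the message shapes (I haven't validated this against the spec either; the method names and payloads are my paraphrase, not verbatim ACP):

    import json

    # ACP is JSON-RPC: the editor (client) spawns the agent as a subprocess and
    # exchanges messages over its stdin/stdout. Method names below are my
    # paraphrase of the spec and may not match it exactly.
    def rpc(method, params, msg_id=None):
        msg = {"jsonrpc": "2.0", "method": method, "params": params}
        if msg_id is not None:
            msg["id"] = msg_id
        return json.dumps(msg)

    # 1. The client initializes and opens a workspace-scoped session.
    print(rpc("initialize", {"protocolVersion": 1}, msg_id=1))
    print(rpc("session/new", {"cwd": "/path/to/project"}, msg_id=2))

    # 2. The client sends one prompt turn; the agent streams "session/update"
    #    notifications back (text chunks, tool calls, permission requests) until
    #    the turn ends. Note what's missing: there is no "take this goal, run
    #    autonomously, and call me back when the multi-step task is done" method,
    #    which is the delegation primitive another agent CLI would want.
    print(rpc("session/prompt", {
        "sessionId": "sess-1",
        "prompt": [{"type": "text", "text": "Rename this function and update call sites"}],
    }, msg_id=3))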
Edit: I may have been rude previously, but I was feeling beyond angry because I have tried to be a part of the community and I don't even know what to say.
I genuinely am speechless. I think it's so sad to see the HN community be unable to trust, and the way you accuse me, man, is so sad to see.
I genuinely just don't know what to say? I hope you are joking or something, but what sort of joke is this?
Around 1980, while taking a "Saturday Morning Class" in Toronto - I discovered that there was a lab of ~24 Commodore PET 2001 (8K - blue phosphor, chiclet keyboards) at George Brown College. Spent as much time as I could there engaging with the early hacker community who all brought their shoeboxes of 5-1/4" floppies to trade programs. It was there that I had my first OMG moment when a much older kid showed me his floppy disk catalog program that could sort so much faster than mine did (he used quicksort).
What struck me when reading it was that I parachuted into that trail with zero context. So I asked an LLM to generate some surrounding context for me: this is the verbatim prompt that I just gave gpt-5.2/medium running in my own agentic harness:
> read this: https://trails.pieterma.es/trail/tempo-gradient/. then i want you to write a short essay that shows the relationship between the different quotes making sure to use each quote in the process. the narrative should help me understand the links between the different quotes in each book in context. feel free to use your own knowledge of the books and your general knowledge to make this an approachable thing. what another llm did here is cool - it found relationships between this set of quotes. now i want you to materialize them into a compelling essay that invites curiosity and teaches me something. write the essay to tempo-gradient.md and respond in chat with how you thought about the problem and whether you thought these relationships were meaningful.
This was my agent's response:
Wrote the essay to tempo-gradient.md.
How I thought about it: I treated “tempo gradient” as the shared mechanism behind all eight excerpts—mismatch in loop speed (notice → interpret → decide → act) caused by attention, infrastructure latency, organizational rigidity, system delays, stealthy manipulation, or runaway acceleration. Then I ordered the quotes from “inner time” (tennis/attention) → engineered milliseconds (HFT) → explicit doctrine (OODA) → industrial schedules (mobilization) → structural delays (Beer Game/economy) → slow covert tempo (cyber) → extreme fast tempo (AI takeoff), so each quote sets up the next.
Whether the relationships felt meaningful: yes, mostly. They’re not “the same idea” so much as the same shape appearing in different contexts—feedback delay and decision cadence determining who gets to set the terms of reality for everyone else. The only relationship that feels a bit more interpretive than direct is tennis ↔ geopolitics, but even there the bridge (“how many nows” as a controllable input to decision tempo) is surprisingly sturdy.
Totally agree with this. I have seen many cases where a dumber model gets trapped in a local minimum and burns a ton of tokens to escape from it (sometimes unsuccessfully). In a toy example (a 30-minute agentic coding session: create a markdown -> html compiler using a subset of the CommonMark test suite to hill-climb on), dumber models would cost $18 (at retail token prices) to complete the task. Smarter models would see the trap and take only $3 to complete the task. YMMV.
Much better to look at cost per task - and good to see some benchmarks reporting this now.
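To make "cost per task" concrete with toy numbers (only the roughly $18 vs $3 task totals above come from my session; the token counts and prices below are made up for illustration):

    # Illustrative numbers only: a cheaper-per-token model that thrashes in a
    # local minimum can still be the expensive one once you count the whole task.
    def cost_per_task(input_tokens, output_tokens, usd_per_mtok_in, usd_per_mtok_out):
        return (input_tokens * usd_per_mtok_in + output_tokens * usd_per_mtok_out) / 1_000_000

    dumb  = cost_per_task(9_000_000, 2_000_000, usd_per_mtok_in=1.0, usd_per_mtok_out=4.5)
    smart = cost_per_task(200_000,   100_000,   usd_per_mtok_in=5.0, usd_per_mtok_out=25.0)
    print(f"dumber model:  ${dumb:.2f} per completed task")   # ~$18.00
    print(f"smarter model: ${smart:.2f} per completed task")  # ~$3.50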
For me this is sub-agent usage. If I ask Claude Code to use 1-3 subagents for a task, the 5-hour limit is gone in one or two rounds. Weekly limit shortly after. They just keep producing more and more documentation about each individual intermediate step to talk to each other, no matter how I edit the sub-agent definitions.
Care to share some of your sub-agent usage? I've always intended to really make use of them, but with skills, I don't know how I'd separate the two in many use cases.
Had to modify them a bit, mostly taking out the parts I didn’t want them doing instead of me. Sometimes they produced good results but mostly I found that they did just as well as the main agent while being way more verbose. A task to do a big hunt or to add a backend and frontend feature using two agents at once could result in 6-8 sizable Markdown documents.
Typically I find that just adding “act as a Senior Python engineer with experience in asyncio” or some such to be nearly as good.
They're useful for context management. I use them frequently for research in a codebase: looking for specific behavior, patterns, etc. That type of thing eats a lot of context because a lot of data needs to be ingested and analyzed.
If you delegate that work to a sub-agent, it does all the heavy lifting, then passes the results to the main agent. The sub-agent's context is used for all the work, not the main agent's.
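A toy sketch of the shape of that delegation (this is not Claude Code's actual internals; the task, file names, and summary are invented):

    # The sub-agent gets its own message list (its own context window). All the
    # heavy reading happens there; only a short summary crosses back into the
    # main agent's context.
    def run_research_subagent(task: str) -> str:
        sub_context = [{"role": "user", "content": task}]
        # ...imagine many tool calls here: grep results, whole files, etc.,
        # all appended to sub_context and consuming *its* window...
        sub_context.append({"role": "assistant",
                            "content": "<tens of thousands of tokens of file contents>"})
        return "Retries are configured in http/client.py via RETRY_POLICY; defaults in config/defaults.yaml."

    main_context = [{"role": "user", "content": "Find where retries are configured"}]
    main_context.append({"role": "assistant",
                         "content": run_research_subagent("Find retry configuration")})
    # The main context stays small no matter how much the sub-agent read.
    print(sum(len(m["content"]) for m in main_context), "characters in the main context")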
But why do social hubs need to be places of financial transactions?
I was in Delft recently and I really loved their library/community center. Full of music practice rooms, people playing board games on the ground floor, a coffee bar and it was full of people at 8pm. It is open from 9am - 11pm M-F.
You walk or cycle there (free indoor bicycle parking). There is a movie theater across the "street" (no cars).
They don't need to be transactional spaces; they need to be spaces that attract people regularly.
The local chicken farmer who works 16 hours a day to keep his farm running isn't going out of his way three times a week to visit the community center for board game night.
He's definitely in the local Tractor Supply store three times a week though...
It's about creating community where people naturally gather, not creating a gathering space then hoping people show up.
DAVID BRANCACCIO: There's a little sweet moment, I've got to say, in a very intense book — your latest — in which you're heading out the door and your wife says what are you doing? I think you say — I'm getting — I'm going to buy an envelope.
KURT VONNEGUT: Yeah.
DAVID BRANCACCIO: What happens then?
KURT VONNEGUT: Oh, she says well, you're not a poor man. You know, why don't you go online and buy a hundred envelopes and put them in the closet? And so I pretend not to hear her. And go out to get an envelope because I'm going to have a hell of a good time in the process of buying one envelope.
I meet a lot of people. And, see some great looking babes. And a fire engine goes by. And I give them the thumbs up. And, and ask a woman what kind of dog that is. And, and I don't know…
And, of course, the computers will do us out of that. And, what the computer people don't realize, or they don't care, is we're dancing animals. You know, we love to move around. And, we're not supposed to dance at all anymore.
It is the computer people. The 'owning' people were computer people who founded these companies. It's computer people who sit in these meetings and go through designing these changes and building these systems. I'm a privacy person, and yesterday my boss asked me to pull up records on a user's device to see a record of what they were doing, and I did it even though it's disgusting micromanagement that I hate.
I don’t think it’s about being “places of financial transactions” so much as it’s about places of shared necessity. Everyone has to eat, so everyone goes to the grocery store.
Community centers are great and I’m not going to argue against having “non-commercial recreation”, but the thing about having local stores as social hubs is they might be the only universally shared place of a community. Not everyone is going to want (or be able!) to visit a library, but everyone does need food and other consumables/goods.
The "shared necessity" factor also means that you regularly meet acquaintances there by accident. It just doesn't happen at the Wal-Mart or Home-Depot 15 miles away anywhere near as often as it would happen at the town general store or the local main street shopping district. Possibly because nobody actually spends time at a big box store or strip mall; they're such deeply unpleasant spaces that you basically just do what absolutely must be done and get out. So now a little extra stroll around to window shop has been replaced by extra time in the car to drive 15 miles across town in the other direction to go to some other big box store.
It's not just a small towns thing, either. The main street shopping district I had in mind just now is in the middle of Chicago. And it doesn't happen so much there, either, anymore, in the post retail apocalypse era. Now it's all bars and restaurants so people go there for a very reduced range of reasons.
I would say that "don't let perfect be the enemy of the good" here. Would universal be better? Sure. But what I saw is so much better than what we currently have here in the US.
The point is that OPEN (the name of the Delft library) is really a community center and not a library. Yes, it happens to have books. But it also has a stage for musical performances, art rooms, tables, wifi, washrooms, coffee. I would say that the only thing that is missing is a gym; there are small dance rooms in there but that's not quite the same.
But the essence here is walkable communities. Suburbs and exurbs are hostile to even small local stores because you have to drive everywhere to do anything. There is no community in visiting my Costco or even my QFC.
One thing that I find works really well is to ask it to research things in the codebase and write a plan first. Codex with GPT-5 is exceedingly good at doing this. Then ask it to write a plan for what it would do with that information, i.e., "I want you to research the codebase for <goal>. Then write a plan for how you would achieve <goal> given what you have learned."
Claude writes out plans and all that; it's good about that.
Sure would be great if AI agents could learn from conversations. That would really make things better. I tell Claude to capture things in the Claude.md file, but I have to manually tend to that quite a lot.
That is an artifact of implementation. You can absolutely implement it using strict FP. But even if not, any given implementation will still do things in a specific order which can be documented. And then if you're running quantized (including KV cache), there's a lot less floating point involved.
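Concretely: floating point addition isn't associative, so any change in the order an implementation happens to sum things (batching, kernel splits, reduction trees) can change the low bits, which is why the order itself is what you'd document or pin down:

    # Same three values, different summation order, different result.
    a, b, c = 1e16, -1e16, 1.0
    print((a + b) + c)   # 1.0
    print(a + (b + c))   # 0.0 -- the 1.0 is absorbed by the huge intermediate
    # "Deterministic" in practice means "this implementation, in this order, on
    # this hardware"; fix or document the order and the results are reproducible.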
I used to own COM at Microsoft; I think that MCP is the current re-instantiation of the ideas from COM and that English is now the new scripting language.
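In that framing an MCP tool is roughly the new "interface": a typed, discoverable entry point the model binds to with natural language instead of a scripting language. A minimal sketch, assuming the official MCP Python SDK's FastMCP helper (the server name and tool are made up):

    from mcp.server.fastmcp import FastMCP

    # One component publishing one "interface": the schema is generated from the
    # type hints and docstring, and the caller is a model speaking English.
    mcp = FastMCP("inventory")   # server name is illustrative

    @mcp.tool()
    def count_items(category: str) -> int:
        """Return how many items of the given category are in stock."""
        stock = {"envelopes": 100, "stamps": 12}   # toy data
        return stock.get(category, 0)

    if __name__ == "__main__":
        mcp.run()   # serve over stdio so an MCP client can discover and call the tool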
Did you use Claude Code to write the post? I'm finding that I'm using it for 100% of my own writing because agentic editing of markdown files is so good (and miles better than what you get with claude.ai artifacts or chatgpt.com canvas). This is how you can do things like merge deep research or other files into the doc that you are writing.
Right. But you can copy paste that into a separate doc and have Claude Code merge it in (and not a literal merge - a semantic merge "integrate relevant parts of this research into this doc"). This is super powerful - try it!
The models are the same, but the actual prompts sent to the model are likely somewhat different because of the agentic loop - so I would imagine (without having done the experiments) there will be slight differences. Unclear whether those differences will be larger or smaller than the variance you get from sending the same prompt multiple times to the same surface (e.g., Claude.ai variance vs. Claude Code variance vs. variance between Claude.ai and Claude Code). Would be an interesting controlled experiment to try!
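If anyone wants to run the raw-API arm of that experiment, the sketch is simple: send the same prompt N times and compare the outputs (the model id and prompt below are placeholders, and this only covers the API case, not Claude.ai or Claude Code):

    import anthropic
    from collections import Counter

    client = anthropic.Anthropic()   # reads ANTHROPIC_API_KEY from the environment
    MODEL = "claude-sonnet-4-20250514"                                 # placeholder model id
    PROMPT = "In one word, what data structure backs Python's dict?"   # placeholder prompt

    answers = []
    for _ in range(10):
        msg = client.messages.create(
            model=MODEL,
            max_tokens=32,
            messages=[{"role": "user", "content": PROMPT}],
        )
        answers.append(msg.content[0].text.strip())

    # Crude variance measure: how many distinct answers did ten identical requests produce?
    print(Counter(answers))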