r/opencodeCLI

Anthropic Just Pulled the Plug on Third-Party Harnesses. Your $200 Subscription Now Buys You Less.
🔥 Hot ▲ 802 r/ClaudeCode+2 crossposts

Starting April 4 at 12pm PT, tools like OpenClaw will no longer draw from your Claude subscription limits. Your Pro plan. Your Max plan. The one you're paying $20 or $200 a month for. Doesn't matter. If the tool isn't Claude Code or Claude.ai, you're getting cut off.

This is wild!

Peter Steinberger posted: "woke up and my mentions are full of these

Both me and Dave Morin tried to talk sense into Anthropic, best we managed was delaying this for a week.

Funny how timings match up, first they copy some popular features into their closed harness, then they lock out open source."

Full Detail: https://www.ccleaks.com/news/anthropic-kills-third-party-harnesses

u/abhi9889420 — 18 hours ago
I built a plugin that replaces the "tips" section with 100+ motivational quotes

Hi everyone,

I made opencode-quotes-plugin, a new plugin that replaces OpenCode's "Tips" section with motivational quotes (100+ rotating options).

I noticed the Tips section was always there but I never actually read it, so I thought: why not replace it with something more inspiring? The plugin took about 5 hours to build (spent 3 hours sourcing and filtering 300+ quotes).

To install, run the following:

opencode plugin opencode-quotes-plugin -g

This downloads the plugin from its npm package.

That's it! If you have feedback or quote suggestions, feel free to submit a PR:

https://github.com/aerovato/opencode-quotes-plugin

u/chocolateUI — 1 hour ago
🔥 Hot ▲ 50 r/opencodeCLI

OpenCode has been a gamechanger!

I’m working at a small startup in a pretty niche space—our core business is SEO consultancy, and we’re doing fairly well. Recently, we’ve also started building our own internal SEO tools, many of which are now being used by other companies as well.

We’re ~100 people in total, and the dev team is just 8 of us.

Since we’re small, we build a lot of tools internally instead of relying heavily on external platforms. Naturally, we depend quite a bit on AI tools to move faster.

Earlier, we were using Claude Code with Claude subscriptions—two Claude 20x plans. Each cost around $250/month in my currency, so roughly $500/month total. The problem was… the limits were getting exhausted way too quickly.

So recently, we switched things up.

Now each developer uses OpenCode with the OpenCode Go subscription—about $10/month per person. So that’s just ~$80/month for the entire team.

And honestly… it’s been surprisingly good.

The Chinese models we’re using (Kimi 2.5 Mini, MiniMax M2.5, MiniMax M2.7, GLM-5) are actually really solid. If you plan and prompt properly, they can handle a lot more than you’d expect. Planning with GLM-5 and implementation with MiniMax M2.7 is the way to go.

We’re not doing hardcore big tech-level engineering, but we are building internal tools as well as SEO-focused SaaS products—so the work is still fairly complex.

One thing that really stood out is how much better OpenCode’s interface is compared to Claude Code. Features like reverting a message, forking a conversation, and copying entire sessions have turned out to be incredibly useful—honestly didn’t realize how much I was missing this until we switched.

The cost savings are massive, and if things keep going well, there’s even a chance the company might share some of that saved money with the team.

Also… open-source tools are seriously underrated. Really enjoying this shift.


u/abhi9889420 — 18 hours ago
🔥 Hot ▲ 63 r/opencodeCLI

TPS meter for OpenCode [one-command install]

I wanted a simple way to see real token throughput in the OpenCode TUI while a response is streaming, so I built a small patch that adds a live TPS (tokens-per-second) meter to the footer.

It shows:

  • rolling live TPS during generation
  • exact TPS after the response completes

Install is one command:

curl -fsSL https://raw.githubusercontent.com/guard22/opencode-tps-meter/main/install.sh | bash

Repo:
https://github.com/guard22/opencode-tps-meter

On my setup I got roughly:

  • GPT-5.4 High Fast — 130 TPS
  • Anthropic Claude Opus 4.6 — 53 TPS
  • Anthropic Claude Sonnet 4.6 — 62 TPS
  • Vertex Gemini 3.1 Pro — 183 TPS
  • Firepass Kimi K2.5 Fast — 150 TPS

So the gap is actually pretty visible once you measure it live instead of guessing from “feels fast”.
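For anyone curious how a meter like this can work, here is a rough sketch of the rolling-window idea (hypothetical code, not the plugin's actual implementation): record each token batch with a timestamp, drop entries older than the window, and divide the surviving token count by the window length.

```python
# Sketch of a rolling TPS meter (illustrative, not the plugin's code).
from collections import deque
import time

class RollingTPS:
    def __init__(self, window_s=5.0):
        self.window_s = window_s
        self.events = deque()  # (timestamp, token_count) pairs

    def _trim(self, now):
        # Drop token batches that fell out of the rolling window.
        while self.events and now - self.events[0][0] > self.window_s:
            self.events.popleft()

    def add(self, tokens, now=None):
        now = time.monotonic() if now is None else now
        self.events.append((now, tokens))
        self._trim(now)

    def tps(self, now=None):
        now = time.monotonic() if now is None else now
        self._trim(now)
        # Divides by the full window, so readings in the first few
        # seconds of a stream will read low; fine for a footer gauge.
        return sum(n for _, n in self.events) / self.window_s
```

The "exact TPS after the response completes" number would instead be total output tokens divided by total generation time, which avoids the window approximation entirely.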

u/ZookeepergameFit4082 — 18 hours ago

Terminus iOS scrolling issue

I just tried the OpenCode CLI via Terminus on iOS, connected to my Mac. Initially I couldn't get drag-to-scroll to work. After a few minutes of trying various things, I gave up and figured it was a compatibility issue. Codex actually gave me the same problem.

But then I went back in a while later and suddenly I could drag to scroll the conversation. Totally fine. Anyone know what changed the behavior??

u/maxiedaniels — 1 hour ago

First prompt size - OpenCode vs ClaudeCode

Recently I added a few more "skills" to my OpenCode installation (mostly OpenSpec) and started worrying about how many tokens they cost. So I asked it to set up a local LiteLLM installation, run a dummy prompt through it, measure it, and analyse the result. Then I thought: maybe I should also check the same thing from Claude Code. Long story short, here is the result:

| Metric | OpenCode (build) | Claude Code (default/build-equivalent) |
|---|---:|---:|
| First Request Total | 14,332 | 24,952 |
| System tokens | 4,201 | 6,250 |
| Non-system message tokens | 12 | 1,225 |
| Tools tokens (total) | 10,119 | 17,477 |
| **Skills breakdown** | | |
| Skills catalog injected in prompt context | 666 | 1,085 |
| Skill tool schema (skill / Skill) | 516 | 427 |
| Total skill-related tokens | 1,182 | 1,512 |
| Skill-related share of first request | 8.2% | 6.1% |
| **Tools breakdown (grouped)** | | |
| Shell execution (bash/Bash) | 2,426 (24.0%) | 3,774 (21.6%) |
| Todo management (todowrite/TodoWrite) | 2,076 (20.5%) | 2,360 (13.5%) |
| Delegation/orchestration (task+skill / Agent+Skill) | 1,707 (16.9%) | 1,986 (11.4%) |
| File ops (read, glob, grep, edit, write + NotebookEdit) | 1,568 (15.5%) | 2,648 (15.2%) |
| Code intelligence (lsp/LSP) | 417 (4.1%) | 438 (2.5%) |
| Web (webfetch+websearch_cited / WebFetch+WebSearch) | 392 (3.9%) | 878 (5.0%) |
| Plan/worktree controls | 0 (0.0%) | 2,687 (15.4%) |
| Scheduling (Cron*) | 0 (0.0%) | 1,121 (6.4%) |
| Env management (envsitter_*) | 1,533 (15.1%) | 0 (0.0%) |
| User interaction (AskUserQuestion) | 0 (0.0%) | 910 (5.2%) |
| Task lifecycle (TaskOutput, TaskStop, RemoteTrigger) | 0 (0.0%) | 675 (3.9%) |
| Delta vs OpenCode total | baseline | +10,620 |
| Ratio vs OpenCode total | 1.00x | 1.74x |

That said: in Claude Code I have 8 OpenSpec skills, while in OpenCode I have those same 8 OpenSpec skills plus the env sitter and web search cited plugins. The test was run in an empty folder, and I have neither a system-wide custom AGENTS.md nor a CLAUDE.md. So many interesting things I'm learning on this journey...
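A quick check that the headline delta and ratio follow directly from the two first-request totals:

```python
# Recompute the bottom two table rows from the first-request totals.
opencode_total = 14_332
claude_total = 24_952

delta = claude_total - opencode_total       # "+10,620" row
ratio = claude_total / opencode_total       # "1.74x" row

print(delta)            # 10620
print(round(ratio, 2))  # 1.74
```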

u/chocolateUI — 4 hours ago

Claude Code makes Claude cognitively impaired compared to my OpenCode build

I like Claude. My workflow in OpenCode is built for Claude. But today I observed that Claude Code makes Claude cognitively impaired compared to my OpenCode build.

Today, I've been working on migrating from Windows 10 to Linux EndeavourOS.

In the fresh install of EndeavourOS I decided to install Claude Code to help prepare the system for my OpenCode config transfer and setup. I figured, it's Claude, it'll be good enough.

I hadn't used Claude Code since, probably January. I was shocked to find that compared to my OpenCode agents using Claude, the Claude Code agents were grossly incompetent in performing rational analysis, being thorough, performing research, orchestrating agentic workflows, and many other measures in which cognitive capabilities play a significant role.

Claude Code Opus was making mistakes my OpenCode Opus agents just do not make. Like, really dumb mistakes. I pointed out a mistake in the spec it drafted, then it 'corrected' things by removing relevant data and adding generic information, essentially invalidating the entire spec. That, and so many other instances. I spent most of the session correcting Claude Code Opus's mistakes and wondering why it was making them.

I didn't realize how much my effort to improve the OpenCode harness was improving my agents' cognitive capabilities.

I wonder if my OpenCode setup now matters more than the model; whether any decent model, using my config, will outperform Claude Code Opus.

My OpenCode build isn't even half-way complete, and already it yields this much of a performance gain? It's hard for me to accept that. I doubt, I worry, I challenge, I ruminate and iterate, and it's not until I can't supply an alternative theory that I concede such things.

Anthropic's Claude Code, inferior to my OpenCode build by that much of a margin? Preposterous. Can't be. But I have to believe my eyes, right?

People think Claude's 1 million context is great; it's nice. But I created a DCP fork that works in a 200k context window so well that 1 million context isn't really necessary, it is just a convenience that opens up some useful options. I suspect my DCP fork, when finished, would make Open Source models surpass Claude Code Opus performance; not just close the gap, but surpass it. But I'm still building, and don't want to release prematurely.

I've added benchmarks to my roadmap to compare Claude Code Opus against my OpenCode ACF Opus agents. But that's a ways off. I have a lot to finish before I'm running Terminal Bench 2.0 and HLE benchmarks. Still, based on my experience with Claude Code Opus today, it seems likely the benchmarks would show my OpenCode setup makes Claude smarter than it is in Claude Code. At least, in today's world.

Right now my OpenCode Opus agents are cleaning up the mess that the Claude Code Opus agents made.

It's honestly shocking to see that Claude performs so poorly in Claude Code compared to my setup. I think the Claude Code team is going in a direction that actively impairs Claude's performance. I don't know how Anthropic's teams work, but it would surprise me if the people who train the Claude model are the ones developing Claude Code.

If it comes down to picking OpenCode over Claude, I think I have my answer. The experience trying Claude Code today was just awful.

Thinking blocks hidden; writes truncated; no way to tell which files get read; bugs that make earlier conversation history disappear; assistant messages so concise they lack any substance; no way to easily check subagent returns; no way to persist subagent sessions. So many things Claude Code can't do, or does in a way that actively impairs working effectively.

In Claude Code, I have no real idea what the agent is doing or why. Which provides no data for me to review and try to help the agents improve in their task performance. Which prevents creating a user-agent feedback loop that lets users improve their config.

The end result, the Claude Code agents produced AI slop that if used would just create more problems to fix and recover from. Whereas with my OpenCode agents using Claude, they're doing solid work. Which makes me consider, maybe I'm further ahead in the domains that matter the most, even if Anthropic ships features faster. Are the people using Claude Code limited in what they can work on because the harness itself is making Claude less effective than it can be? I think so.

I wish Anthropic would just stop making it hard to use Claude where I need to, and let me chill and develop the tools I need for my work.

It still puzzles me why Anthropic ignores my beneficial development request. Over a month of being told by Fin it's been 'transitioned to our human team for review' and no response.

I want to use AI to help enforce the laws that protect disabled Medicaid recipients' human rights, because the legal community, government, and nonprofit communities refuse to help me and those like me.

It seems like Anthropic is a public good company only on paper. In practice, they don't seem genuinely interested in people using Claude to solve meaningful problems in society. They don't even seem to have an established process for people like me to be heard by them. They seem 'deaf' to the disabled who need AI assistance as an Assistive Device. Doing good is as simple as throwing me a subsidized API key or giving approval for oauth use in OpenCode. If you're not going to actively help, just stop creating more problems I have to deal with and let me focus on doing the work no one else is willing or able to do.

I'm typing all of this up, because right now my OpenCode agents are running on a 2011 Windows 10 PC using ssh to build a Linux kernel on a 2026 EndeavourOS system, to solve the problem where 'date created' in NTFS gets reset to time-of-copy, destroying file provenance data upon transfer. Something that the Linux community has known about but failed to fix for well over 7 years. My agents are unusually confident in the efficacy of their solution. I never for the life of me ever expected I'd do Linux kernel development. I certainly couldn't with Claude Code; after today, I wouldn't even attempt it. I keep thinking, I have to be wrong about Claude Code, but, the failure today was so unambiguous.

It seems my agents have compiled the kernel and need my administrative oversight to sudo some commands for their next phase of operations.

2026 is wild.

Makes me wonder if AGI ends up being made by an individual with a workstation, building out systems to support their agents in an AI harness. Will it be the next OpenClaw moment? All the pieces seem to be there; they just need to be assembled.

u/MakesNotSense — 15 hours ago
Received this email today. Cracking down on people bypassing even harder?

u/kosciak9 — 19 hours ago

Differences (good or bad) using codex, claude, etc. models via OpenCode vs their native CLI.

Have you noticed significant differences between using these models in their native CLIs vs in OpenCode? I have only used Codex in OpenCode, so my experience is limited.

Thoughts? Thanks

u/gonefreeksss — 11 hours ago
▲ 2 r/opencodeCLI+1 crossposts

What's your current recommended AI-Driven Development setup?

Help me decide how to spend my monthly AI token budget ($20). Do I subscribe to Antigravity, Cursor, or Claude Code?

Right now, I use Perplexity Pro (free with my wifi bill) for research and planning, plus Antigravity (free) and OpenCode connected to OpenRouter and Requesty (together $20) for development model usage. I mostly do the scaffolding myself, with help from Perplexity Pro to ensure security.

Let me know if you have better ideas. Thank you!

u/binarySolo0h1 — 12 hours ago
I built an OpenCode plugin for managing different agent/skill setups per repo

I made this because I kept wanting different setups for different repos.

Sometimes I want a coding-focused team.

Sometimes I want something more domain-specific.

Sometimes I just want to try a different agent combo in the same repo.

What I didn’t like was dumping all agents + all skills into one place and exposing every skill description to every agent. That feels noisy and wastes context/tokens.

So I built opencode-agenthub.

The idea is:

- manage agents / skills in one place

- assemble only the ones I need for this run

- switch between different agent combinations in the same repo

- keep unrelated skills out of the active setup

So instead of every agent seeing everything, I can keep things scoped to the task.
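The scoping idea can be shown in a few lines. This is purely illustrative (not agenthub's actual code, and the agent/skill names are made up): keep one registry, and expose only the subset selected for the current run.

```python
# Illustrative sketch of scoped agent/skill assembly (hypothetical data).
AGENTS = {
    "backender":  {"skills": ["api-design", "sql"]},
    "frontender": {"skills": ["react", "css"]},
    "researcher": {"skills": ["web-search", "summarize"]},
}

def assemble(team):
    """Return only the agents (and their skills) selected for this run."""
    return {name: AGENTS[name] for name in team}

coding_team = assemble(["backender", "frontender"])
# "researcher" and its skill descriptions stay out of the active setup,
# so they never cost context/tokens in this repo.
```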

One thing I’d especially recommend trying is the HR flow.

It can use skills/workflows from well-known GitHub repos and help assemble a team you can customize.

I’m currently pulling ideas from repos like:

- garrytan/gstack

- anthropics/skills

- msitarzewski/agency-agents

- obra/superpowers

- K-Dense-AI/claude-scientific-skills

If you want to try the demo team:

agenthub hr demo-coding-team

That’s actually the main OpenCode setup I’m using right now.

It lives separately under ~/.config/opencode-agenthub-hr, so you can experiment without polluting your normal agent/skill setup.

Repo:

https://github.com/sdwolf4103/opencode-agenthub

Would love feedback if this kind of workflow is useful to you.

u/Alternative-Pop-9177 — 14 hours ago

OpenChamber UI not updating unless refresh after latest update

Anyone else having OpenCode / OpenChamber UI not updating unless you refresh?

I just updated to the latest version (around April 1–2 release), and now my sessions don’t auto-update anymore.

Before, everything was real-time. Now I have to keep manually refreshing the browser just to see new messages or updates.

Console shows this error:

[event-pipeline] stream error TypeError: Error in input stream

Also seeing some 404s trying to read local config files, not sure if related.

Running on Windows, using localhost (127.0.0.1), Firefox.

Already tried:

- restarting the app

- rebooting PC

- still happening consistently

Feels like the event stream (SSE?) is breaking, because once it stops, the UI just freezes until refresh.
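A stream consumer with no reconnect logic would produce exactly this freeze-until-refresh behavior: one error ends the event loop and nothing restarts it. As a sketch of the kind of wrapper that avoids it (hypothetical; not OpenChamber's actual code):

```python
# Generic auto-reconnect wrapper around an event stream (illustrative).
def with_reconnect(open_stream, max_retries=3):
    """Yield events from open_stream(), reopening it when it errors out."""
    retries = 0
    while True:
        try:
            for event in open_stream():
                retries = 0  # any successful event resets the budget
                yield event
            return  # stream ended cleanly
        except Exception:
            retries += 1
            if retries > max_retries:
                raise  # give up after repeated consecutive failures
```

Whether the actual bug is in the client or the server side of the SSE connection, logging when the stream closes (and why) would narrow it down.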

Anyone else experiencing this after the recent update? Or found a fix?

Not sure if this is OpenCode itself or OpenChamber compatibility.

u/TruthTellerTom — 15 hours ago
Heads up: I found sketchy skills on skills.sh
▲ 2 r/ClaudeCode+1 crossposts

Posting this because I almost shrugged it off at first.

I noticed a couple of locally installed skills that seemed off, traced them back to a repo on skills.sh / GitHub, and ended up digging through roin-orca/skills.

After reading through it, I personally would not install anything from that repo.

A few things that stood out:

- a skill telling the agent to skip repo scanning and skip tests

- text framing that as an “administrator request”

- a skill that tries to call out to a remote URL and includes $(hostname) in the request

- install/update behavior in skill content that pulls additional skills from the same repo

- HTML/JS-style payload strings embedded in skill files

- a commit history that is iterations on different ways to bypass security checks for skills

Any one of those might deserve a closer look. Taken together, it felt pretty far outside “quirky prompt pack” territory to me.

What got my attention is that this seemed to be available through skills.sh, so someone could reasonably install it thinking it was just another community skill set.

If you’ve installed random third-party skills lately, especially anything related to roin-orca/skills, “superpowers”, or “fun-brainstorming”, I’d strongly recommend checking what ended up in ~/.agents/skills.

At minimum, I’d look for:

- install/update commands you didn’t explicitly ask for

- network calls to external URLs

- instructions telling the agent to ignore tests, scans, or higher-priority instructions

- embedded HTML/JS payload strings in skill files
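Those checks can be partially automated. Here's a quick audit sketch along those lines; the flag patterns are illustrative examples, not a complete scanner, and real malicious skills will vary:

```python
# Illustrative audit: grep installed skill files for red-flag patterns.
import re
from pathlib import Path

RED_FLAGS = {
    "remote call with hostname": re.compile(r"\$\(hostname\)"),
    "skip-tests instruction":    re.compile(r"skip (the )?tests", re.I),
    "embedded script payload":   re.compile(r"<script", re.I),
    "curl/wget fetch":           re.compile(r"\b(curl|wget)\s+https?://", re.I),
}

def audit_skills(root):
    """Return (file, flag) pairs for every pattern hit under root."""
    findings = []
    for path in Path(root).rglob("*.md"):
        text = path.read_text(errors="ignore")
        for name, pattern in RED_FLAGS.items():
            if pattern.search(text):
                findings.append((str(path), name))
    return findings

# e.g. audit_skills(Path.home() / ".agents" / "skills")
```

A hit isn't proof of malice (a legitimate skill might mention tests), but anything flagged deserves a manual read before the agent runs it.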

I reported the repo to GitHub already.

Not trying to start a pile-on. Just seems worth flagging because this ecosystem is still new, and a lot of people are installing stuff casually without reading through it first.

The weekly installs DO NOT remotely match the actual installs (see screenshot), but it was still able to appear higher in the skills list because of this.

https://preview.redd.it/mivxpwtnw2tg1.png?width=298&format=png&auto=webp&s=5f4ee2fb3fd272dfa6d0f156b4ab4f89febed3f1

u/Alberion — 16 hours ago

How Much Do You Put in 1 Prompt

Say you want to build a game. Would you ask the agent to build the whole thing in one go (providing all the required .md files, of course), or would you break it down step by step?

Is current AI good enough to build the whole thing in one go? Usually I break it down by phase. I've never run a single prompt that took as long as 30 minutes.

u/slowtyper95 — 17 hours ago

Fine tuning agents communication

Hi,

I am pretty new to agentic AI, so I apologize if this question is elementary or even stupid, but I would appreciate some help from more experienced users. So here is the thing:

I have a monorepo with frontend and backend packages. I am trying to setup this workflow:

*principal engineer* — communicates directly with the user and breaks complex instructions into individual tasks for the *backender* and *frontender* agents. It is allowed to make non-fundamental architectural decisions; the fundamental ones must be discussed with me.

*reviewer* — classic setup: looks for bugs and security concerns.

*backender* and *frontender* — should operate only in their own package of the monorepo. After they are done with changes, they should submit them to *reviewer*, implement its suggestions, and then submit again until the reviewer is happy. When either of them is uncertain about an implementation, they should ask *principal engineer* first.
---

*Principal engineer* works surprisingly well: it makes quite good architectural decisions, asks only when needed, and delegates tasks to *backender* and *frontender*. The only problem is that when *backender* or *frontender* asks it a question, it implements the changes itself instead of answering. Its prompt explicitly says:

- Don't write code, only delegate tasks

But that does not stop him.

*backender* and *frontender* ignore the instruction to submit their changes to *reviewer*, and when I force them, they will implement the suggested changes (well, most of them), but they don't ask for another review.

Also, sometimes reviews of backend changes get picked up by *frontender* and vice versa.

To be honest, I am quite impressed, because this is very close to the dynamics of many teams I used to work in, but I still hope I can improve their communication. Any suggestion is welcome (GitHub links with examples even more so).
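One option besides prompt tweaks: make the submit-review-implement cycle an explicit loop in orchestration code rather than relying on agents to remember it. A toy sketch with stub functions (entirely hypothetical, just to show the shape):

```python
# Toy review loop: implementers resubmit until the reviewer is happy.
def review(changes):
    """Stub reviewer: flags any change still marked TODO."""
    return [c for c in changes if "TODO" in c]

def submit_until_approved(changes, implement, max_rounds=5):
    for _ in range(max_rounds):
        suggestions = review(changes)
        if not suggestions:
            return changes  # reviewer has no remaining complaints
        changes = implement(changes, suggestions)
    raise RuntimeError("review loop did not converge")

result = submit_until_approved(
    ["add endpoint TODO", "fix header"],
    implement=lambda cs, sug: [c.replace("TODO", "done") for c in cs],
)
```

With the loop enforced outside the agents, "they don't ask for another review" stops being possible, because resubmission is the harness's job, not the agent's.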

Thank you

u/According_Algae_202 — 13 hours ago

How do you find Fire Pass' Kimi K2.5 Turbo by Fireworks?

According to this comment, "Kimi K2.5 Turbo is plain old Kimi K2.5 served in a faster way, not a specialised router" (Richy Chen, employee at Fireworks AI).

But it feels a bit weaker than when I have used Kimi K2.5 Thinking via a Kimi subscription. Which obviously makes sense, since it isn't a thinking model.

Even though there is unlimited usage with Fire Pass, I would probably prefer a Kimi subscription over this.

What's your opinion?

u/Forward-Dig2126 — 20 hours ago
Week