
▲ 218 r/opencode

$20: ChatGPT Plus.

$5/$10: OpenCode Go.

  • Planner: GPT 5.5
  • Research: DeepSeek V4 Flash
  • Coder: DeepSeek V4 Flash
  • Reviewer: GPT 5.5

The system prompt for each one matters. I tried a few versions, and now I'm making progress in a day that used to take me a month.
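For reference, a role split like this can be wired into OpenCode's JSON config. This is only a sketch: the agent names and prompts below are illustrative, the model IDs are assumed to match what the post describes, and the exact schema may differ between OpenCode versions.

```json
{
  "$schema": "https://opencode.ai/config.json",
  "agent": {
    "planner": {
      "model": "openai/gpt-5.5",
      "prompt": "You are the planner. Produce a step-by-step plan; do not write code."
    },
    "research": {
      "model": "deepseek/deepseek-v4-flash",
      "prompt": "Find and summarize the code and docs relevant to the current task."
    },
    "coder": {
      "model": "deepseek/deepseek-v4-flash",
      "prompt": "Implement exactly what the plan specifies. Keep diffs small."
    },
    "reviewer": {
      "model": "openai/gpt-5.5",
      "prompt": "Review the diff for bugs and deviations from the plan."
    }
  }
}
```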

I don't know about UI, but when it comes to building logic-heavy software it can do just about anything.

Also, it's obvious, but GitHub is your best friend. For anything you're building, always look for someone who has already built something that can help you.

The open source market is thriving. AI is thriving. It's time to make money before it's over.

reddit.com
u/Funny-Advertising238 — 12 days ago

Any good guides to really get into more advanced features of Opencode?

Hi all,

I have really been getting into the vibe coding hype and I am impressed by Opencode so far. But I am using it just in very basic ways - using Tab to switch between Plan and Build, asking it questions, etc. The very basic obvious stuff. Do you have any tips on becoming a 'pro' user? What features should I look into, or plugins, which will make me feel 'wow this is even better'? Any online guides?

u/ammen99 — 3 days ago
▲ 40 r/opencode+1 crossposts

I was using Claude Code with a Pro membership and kept hitting my limits. Now with OpenCode Go and DeepSeek V4 Flash I get the same results and it's much cheaper. Genuine question: am I missing something?

u/cocouz — 7 days ago

Free OpenCode models

I started using OpenCode 2 days ago and love(d) it at first. It's actually pretty good, and some of the models output solid results if you repeatedly iterate on the code and product you're working on. With every model so far, though (MiniMax 2.5, Big Pickle, Hy3, Nemotron, and a few others I've forgotten), I've hit a limit. Big Pickle was the most generous, but that also gave up on me this afternoon.

After the 12hr wait/reset period, it asks me to pay for Go to continue using it, regardless.

Are there any truly free models with OpenCode that are capable coding models and stay free for as long as you use them? If yes, how are they added and used in OpenCode? Or is the answer no, and after a short while everyone ends up paying and it's not really open/free after all?

u/i-dm — 4 days ago
▲ 60 r/opencode+17 crossposts

I’m a 22-year-old Computer Science student, and recently I built an open-source project called CTX.

GitHub Repository

The idea came from a problem I kept seeing while using coding agents (like claude, codex etc.):

they are powerful, but they waste a lot of context on the wrong things.

They keep re-reading giant AGENTS.md files, noisy logs, broad diffs, too much repo structure, and too much repeated project guidance.

So even when the model is good, a lot of the prompt budget is spent on context bloat instead of actual problem-solving.

That’s why I built CTX.

What CTX is

CTX is a local-first context runtime for coding agents, designed especially for OpenCode (for now).

It does not replace the model or the coding agent.

Instead, it sits underneath and helps the agent work with:

  • graph memory for project rules and guidance
  • compact task-specific context packs
  • retrieval over code, symbols, snippets, and memory
  • log pruning to surface root causes faster
  • local MCP integration
  • local-only stats and audit trails

So instead of repeatedly dumping full markdown instructions and huge logs into the prompt, CTX helps the host retrieve only the smallest useful slice for the current task.

Why I made it

I wanted something that makes coding agents feel less noisy and more deliberate.

The goal was:

  • less prompt waste
  • less manual context wrangling
  • better retrieval of actually relevant project knowledge
  • better debugging signal from noisy test output
  • a workflow that feels native inside OpenCode

How it works

The flow is intentionally simple:

  1. install ctx
  2. go into your repo
  3. run:

```bash
ctx init
ctx index
ctx opencode install
opencode
```

Then inside OpenCode you can use commands like:

/ctx  # Opens the CTX command center inside OpenCode.
/ctx-doctor  # Checks whether CTX, MCP, and the repo setup are working correctly.
/ctx-memory-bootstrap  # Imports project guidance files into graph memory for targeted retrieval.
/ctx-memory-search  # Searches stored project rules and directives by topic or keyword.
/ctx-retrieve  # Finds the most relevant code, symbols, snippets, and memory for a task.
/ctx-pack  # Builds a compact task-specific context pack for the current problem.
/ctx-prune-logs  # Condenses noisy command output into the most useful failure signal.
/ctx-stats  # Shows local usage stats and context-efficiency metrics.

So the daily workflow stays inside OpenCode, while CTX handles the local context layer.

Results so far

On the included benchmark fixture, CTX graph memory reduced rule-token usage by 56.72% while keeping full query coverage and improving answer quality.

I also added a public external benchmark on agentsmd/agents.md, where CTX showed 72.62% token reduction.

The point is not “magic AI gains”, but a more efficient and less wasteful way to feed context to coding agents.

Why you might care

You might find CTX useful if:

  • you use OpenCode a lot
  • you work on repos with a lot of project rules/docs
  • you’re tired of stuffing huge markdown files into prompts
  • you want better local retrieval and cleaner debugging context
  • you prefer local-first tooling instead of remote prompt glue

Current status

The project is already usable, tested, and documented.

Right now the prebuilt release archive is available for macOS Apple Silicon, while other platforms can install from source.

It’s fully open source, and I’m very open to:

  • feedback
  • suggestions
  • bug reports
  • architectural criticism
  • ideas for making it more useful in real workflows

If you try it, I’d genuinely love to know what feels useful and what feels unnecessary.

Repo again: https://github.com/Alegau03/CTX

u/Public-Cancel6760 — 6 days ago
▲ 7 r/opencode+1 crossposts

Cheapest Setup for oh-my-openagent with OpenCode Go

Install this

https://github.com/code-yeongyu/oh-my-openagent

Open this

%userprofile%\.config\opencode\oh-my-openagent.json

Replace with this

```json
{
  "$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/dev/assets/oh-my-opencode.schema.json",
  "agents": {
    "sisyphus": {
      "model": "opencode-go/glm-5.1",
      "fallback_models": [
        { "model": "opencode-go/kimi-k2.6" },
        { "model": "opencode-go/kimi-k2.5" },
        { "model": "opencode-go/qwen3.6-plus" },
        { "model": "opencode-go/glm-5" }
      ]
    },
    "hephaestus": {
      "model": "opencode-go/kimi-k2.6",
      "fallback_models": [
        { "model": "opencode-go/deepseek-v4-pro" },
        { "model": "opencode-go/qwen3.6-plus" }
      ]
    },
    "oracle": {
      "model": "opencode-go/glm-5.1",
      "fallback_models": [
        { "model": "opencode-go/deepseek-v4-pro" },
        { "model": "opencode-go/kimi-k2.6" },
        { "model": "opencode-go/glm-5" }
      ]
    },
    "librarian": {
      "model": "opencode-go/minimax-m2.7",
      "fallback_models": [
        { "model": "opencode-go/minimax-m2.5" },
        { "model": "opencode-go/mimo-v2.5" }
      ]
    },
    "explore": {
      "model": "opencode-go/mimo-v2.5",
      "fallback_models": [
        { "model": "opencode-go/minimax-m2.7" },
        { "model": "opencode-go/minimax-m2.5" }
      ]
    },
    "multimodal-looker": {
      "model": "opencode-go/kimi-k2.6",
      "fallback_models": [
        { "model": "opencode-go/kimi-k2.5" },
        { "model": "opencode-go/minimax-m2.7" }
      ]
    },
    "prometheus": {
      "model": "opencode-go/glm-5.1",
      "fallback_models": [
        { "model": "opencode-go/kimi-k2.6" },
        { "model": "opencode-go/deepseek-v4-pro" },
        { "model": "opencode-go/glm-5" }
      ]
    },
    "metis": {
      "model": "opencode-go/glm-5.1",
      "fallback_models": [
        { "model": "opencode-go/kimi-k2.6" },
        { "model": "opencode-go/glm-5" }
      ]
    },
    "momus": {
      "model": "opencode-go/deepseek-v4-pro",
      "fallback_models": [
        { "model": "opencode-go/glm-5.1" },
        { "model": "opencode-go/kimi-k2.6" }
      ]
    },
    "atlas": {
      "model": "opencode-go/kimi-k2.6",
      "fallback_models": [
        { "model": "opencode-go/kimi-k2.5" },
        { "model": "opencode-go/minimax-m2.7" }
      ]
    },
    "sisyphus-junior": {
      "model": "opencode-go/kimi-k2.5",
      "fallback_models": [
        { "model": "opencode-go/qwen3.5-plus" },
        { "model": "opencode-go/minimax-m2.7" },
        { "model": "opencode-go/glm-5" }
      ]
    }
  },
  "categories": {
    "visual-engineering": {
      "model": "opencode-go/kimi-k2.6",
      "fallback_models": [
        { "model": "opencode-go/glm-5.1" },
        { "model": "opencode-go/kimi-k2.5" }
      ]
    },
    "ultrabrain": {
      "model": "opencode-go/glm-5.1",
      "fallback_models": [
        { "model": "opencode-go/deepseek-v4-pro" },
        { "model": "opencode-go/kimi-k2.6" }
      ]
    },
    "deep": {
      "model": "opencode-go/kimi-k2.6",
      "fallback_models": [
        { "model": "opencode-go/deepseek-v4-pro" },
        { "model": "opencode-go/glm-5.1" }
      ]
    },
    "artistry": {
      "model": "opencode-go/deepseek-v4-pro",
      "fallback_models": [
        { "model": "opencode-go/kimi-k2.6" },
        { "model": "opencode-go/glm-5.1" }
      ]
    },
    "quick": {
      "model": "opencode-go/mimo-v2.5",
      "fallback_models": [
        { "model": "opencode-go/minimax-m2.7" },
        { "model": "opencode-go/minimax-m2.5" }
      ]
    },
    "unspecified-low": {
      "model": "opencode-go/kimi-k2.5",
      "fallback_models": [
        { "model": "opencode-go/qwen3.5-plus" },
        { "model": "opencode-go/minimax-m2.7" }
      ]
    },
    "unspecified-high": {
      "model": "opencode-go/glm-5.1",
      "fallback_models": [
        { "model": "opencode-go/kimi-k2.6" },
        { "model": "opencode-go/deepseek-v4-pro" }
      ]
    },
    "writing": {
      "model": "opencode-go/deepseek-v4-pro",
      "fallback_models": [
        { "model": "opencode-go/kimi-k2.6" },
        { "model": "opencode-go/glm-5.1" }
      ]
    }
  }
}
```

Modify if you want. But it works great.

Tip: if you want to attach images to the prompt, use Kimi K2.5 or newer.

u/Rude_Step — 4 days ago

Production Level Software by AI

So I have been curious for a while: apart from the claude-code and codex teams (which have a direct vested interest in claiming AI is building production software), who is actually building production-level software or products with AI?

Also, to clarify, I am talking about actual products being used at scale, not interesting MVPs and PoCs (which have flooded GitHub).

If you have built a product or tool or anything with AI that is being regularly used by users, drop a link below, I am genuinely curious.

u/TonightOk5378 — 4 days ago
▲ 20 r/opencode+2 crossposts

Like a lot of other people, I’ve been affected by the new GitHub Copilot limits. I’m currently on the student plan (so I’m not complaining about that part), but the restrictions have basically made it unusable for me.

Right now I’m trying to find alternative models/providers to use with OpenCode. It needs to be very cheap (yes, I’m actually a student) but still have decent functionality for most coding tasks. I’ll mainly be using it for relatively small C++ projects.

I’ve tried OpenRouter, but I’ve read it can get expensive, and I’m still pretty new to AI tokens and all of that, so I don’t really want to end up with a huge bill. It also has a massive selection of models, which makes it hard to know what to actually choose. I’ve seen people mention DeepSeek or Kimi, but even those models have variations that I know nothing about.

I'm fine with self-hosting models, but I'm not sure how far I can get with a Ryzen 7600, 32 GB of RAM, and a 4060 Ti.

On a side note, I’m also considering the OpenCode desktop app since it seems simpler to use, but I’m a bit worried it might not play nicely with VS Code plugins like Remote SSH and PlatformIO. From what I understand, it won’t have the same level of integration that GitHub Copilot has in VS Code.

u/Mountain-Ad1044 — 10 days ago
▲ 67 r/opencode+2 crossposts

I made a simple Bash script to install and update OpenCode on Termux for Android.

It uses the reliable build system from [Hope2333/opencode-termux](https://github.com/Hope2333/opencode-termux) and supports two main functions:

- **Full Fresh Installation** – sets up all dependencies (glibc, openssl, ripgrep, etc.) and builds the latest OpenCode from source.

- **Quick Update** – fetches the newest upstream release via GitHub API and automatically upgrades your existing install.

Everything is menu-driven. No manual steps after running the script.

**How to use:**

```bash
chmod +x manage_opencode.sh
./manage_opencode.sh
```
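The "Quick Update" idea above can be sketched roughly like this. The script's actual internals may differ; the canned JSON here stands in for a real GitHub API response, and the version strings are made up.

```shell
# Sketch: check the latest upstream release tag against the installed one.
# In the real script the JSON would come from the GitHub API, e.g.:
#   curl -s https://api.github.com/repos/Hope2333/opencode-termux/releases/latest
response='{"tag_name":"v0.9.2","name":"opencode-termux v0.9.2"}'

# Extract the tag_name field from the response.
latest=$(printf '%s' "$response" | sed -n 's/.*"tag_name":"\([^"]*\)".*/\1/p')
installed="v0.9.1"

if [ "$latest" != "$installed" ]; then
  echo "update available: $installed -> $latest"
else
  echo "up to date"
fi
```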

u/12EDITUSs — 9 days ago
▲ 17 r/opencode+3 crossposts

Hi everyone,

I’ve been working on an OpenCode configuration for my day-to-day work:

https://github.com/grojeda/opencode-config

I usually work across different projects, stacks, and technologies, sometimes at the same time, so I wanted to create a reusable setup with agents, commands, and skills that helps me stay consistent without having to rebuild the same workflow for every repo.

It’s still relatively new and very much a work in progress. I’m still improving the structure, refining the agents, and adding more skills as I test it in real projects.

I’d really appreciate feedback from anyone who has built something similar:

  • Does the structure make sense?
  • Are the agents too broad or too specific?
  • Would you organise the skills differently?
  • Am I missing any useful patterns for multi-project / multi-stack work?
  • Any obvious mistakes or things that could burn too many tokens?

Any feedback, criticism, or examples of your own setup would be very welcome.

u/kysrno — 5 days ago
▲ 7 r/opencode+1 crossposts

Even if I choose low reasoning, these models work non-stop. Is it a problem with my setup, or are these models designed to finish whole tasks without stopping? I am trying to negotiate and get an opinion, but rather than giving answers, as soon as the model figures things out it immediately starts making changes, running code, and so on, and I don't want to have to spell out what it should do in every prompt or as a skill. It doesn't feel like a companion, but a pure vibe-coding agent.

u/Honest_Night_9233 — 11 days ago

Hello everyone, I want to try my hand at designing websites, just as an experiment; no web apps with server-side or client-side logic or anything fancy like that. I also don't want to spend money (at least for the time being), so I'm looking for an option that won't cost me anything. The LLM ecosystem is massive, with new companies springing up like mushrooms.

Big Pickle is the default model, and from what I understand it runs locally on my machine. For anything else I would need to connect to a provider or somehow run the model locally. The big models like Qwen or Kimi won't run on consumer-grade hardware from what I understand, and I would have to set up a local inference engine.

If I want to connect to a provider, what's a good one that won't ban me for using a 3rd-party client? Or is Big Pickle maybe good enough as it is?

My computer specs:

Operating System: Void Linux
Kernel Version: 6.18.25_1 (64-bit)
Graphics Platform: Wayland
Processors: 12 × AMD Ryzen 5 3600 6-Core Processor
Memory: 15.5 GiB of usable RAM
Graphics Processor: AMD Radeon RX 5500 XT
u/HiPhish — 7 days ago

Is there a way to sign up for OpenCode without a GitHub or Gmail account?

As much as I like sharing, I'd rather just use an email/password than go through a third party, but there seems to be no such option on their pages.

Have I missed something?

u/FluffyGreyLlama — 5 days ago
▲ 10 r/opencode+1 crossposts

Hey folks,

I'm a new OpenCode user coming from the standard VS Code ecosystem. While I love the shift towards open-source, there's one feature I'm struggling to get back: AI integration over Remote SSH.

In my previous setup, Copilot worked seamlessly when I was logged into a remote server. I could ask questions about the remote codebase or get help with server-side errors on the fly.

Now that I've switched to OpenCode, I'm trying to figure out if a similar "Remote-AI" workflow is possible. Does anyone here use OpenCode for remote server management? If so, which extensions or configurations are you using to get AI assistance on those remote machines?

I'd appreciate any tips, tricks, or extension recommendations!

P.S. Because I'm a system administrator I'm dealing with 100+ servers, and I don't want to install the OpenCode CLI on all of them.

u/serhattsnmz — 7 days ago
▲ 19 r/opencode+2 crossposts


Today I tried using DeepSeek V4 Flash for about half a day for coding through OpenChamber. I have seen a lot of praise for OpenChamber on Reddit recently, so I wanted to give it a try.

My task was fullstack, including planning and TDD for the backend and React for the frontend without unit tests.

Here are the results after half a day:
Total tokens used: 58.6M
Total cost: 0.384 USD
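A quick sanity check of the implied blended rate from those numbers:

```shell
# 0.384 USD spent over 58.6M tokens -> effective price per million tokens
awk 'BEGIN { printf "%.4f USD per 1M tokens\n", 0.384 / 58.6 }'
```

That works out to well under a cent per million tokens, which puts the cost gap with premium-request pricing in perspective.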

The reason I am testing DeepSeek is that GitHub Copilot will increase its pricing next month. I am still not used to OpenChamber. The interface feels more cluttered compared to GitHub Copilot. The model reasoning output also floods the chat instead of being collapsed, which makes it harder to follow the actual answers.

DeepSeek V4 Flash sometimes does not follow my instructions. I created a custom agent called C# TDD for backend work and set it as the main agent, but sometimes it writes TDD and sometimes it does not.

I still prefer the GitHub Copilot interface, but integrating DeepSeek into Copilot is not very smooth. The context window tracking feature does not seem to work properly.

On the DeepSeek homepage, they mention thinking modes like none, high, and max, but OpenCode does not expose these variants, so I am not sure what reasoning level I am actually using.

Until now I have been using Claude Sonnet 4.6 and GPT 5.4 for programming tasks. Currently each prompt only costs one premium request. I am honestly not fully confident switching to DeepSeek yet.

What are your experiences?

u/Existing_Arrival_702 — 9 days ago
▲ 11 r/opencode+1 crossposts

I’ve been using OpenCode with the default agents (build and plan) pretty much as they come out of the box. Recently, someone told me I should be creating custom agents with specific models assigned to each one to avoid wasting tokens and running up costs.

I spent the last few days trying to understand this, watched some videos, read a bunch of comments from people who already use it, but honestly, it still hasn’t really clicked for me.

Could someone explain how this actually works in practice? Like, how do you structure your agents and decide which model goes where?

For context, I have a ChatGPT Pro subscription, so I have access to models like GPT-5.4 and 5.5. I was thinking about setting something up like this:

Planner → GPT-5.5 (low/medium)
Scout → MiniMax 2.5 (free)
Builder → GPT-5.4-mini (low/medium)

But I’m not sure if this makes sense or if it would actually be useful.

u/RevolutionaryOnion96 — 7 days ago