u/CriteriumA

The danger of skills in OpenCode and other harnesses

Skill files (SKILL.md) are one of the biggest advances in the use of AI.

But I feel they are a very dangerous mechanism.

The definitions of all installed skills are sent at the beginning of every API call, inside the system prompt. That is the most important part of the context when you start working with a tool in a new session or after context compaction.

With just 3 or 4 skills, these definitions will likely occupy as much space as the rest of the system prompt (custom prompt + AGENTS.md files).

And on top of that, we install them without properly reviewing their YAML front matter.

They not only burden you with tokens, but they can also clutter and influence the model's response without you even realizing it.

They shouldn't be discarded; they remain a very valuable tool. But the auto-loading design of this mechanism doesn't justify installing them as a matter of course.

The solution is very simple; we just need to specify something like this in the custom prompt we send at the beginning of the session and in each subsequent message:

Skills loading: manual with read. Location: ~/.agents/skills/. Available: one, two, three, opencode-customize, opencode-database, find-skills, skill-creator, xlsx, xlsx-manipulation, frontend-design.

And set:

OPENCODE_DISABLE_EXTERNAL_SKILLS=1
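
For example, set it in the shell before launching (a minimal sketch; the export-then-launch pattern and the plain opencode launch command are my assumptions, set the variable however your environment prefers):

    # Stop OpenCode from injecting every skill definition into the system prompt;
    # skills are instead loaded on demand with read, per the custom prompt above.
    export OPENCODE_DISABLE_EXTERNAL_SKILLS=1
    opencode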

Ideally, we should know better than the model which skills might be useful before we start programming in "our" projects, which are not the typical ts+react+vercel projects the model is used to.

The model already knows which skills are available; it doesn't even need to run `ls` on the directory, and it knows where to find them to load them into context. We just need to ask it to do so. It's quite simple and requires little effort on our part.
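
For example, a message like this (the skill name is just one from the list above):

    Load the xlsx skill: read ~/.agents/skills/xlsx/SKILL.md before we start.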

And there's no problem on the model's side: it already knows they exist, and its answers are no longer contaminated by so much context.

https://preview.redd.it/eybtywb50x0h1.png?width=1963&format=png&auto=webp&s=9ea6937094b2753166e6e7f443f7c6e132cd5a51

Am I missing something with this solution?

u/CriteriumA — 1 day ago

Plan/Build modes versus Master/Worker

I loved being able to work with the Plan/Build mode scheme in OpenCode.

While it doesn't prevent models from ignoring instructions by editing through Bash, it is useful with the more responsible models.

But I think this way of working has several problems.

First problem: plan mode always appends this content to your message, at the end, in the most important part:

https://github.com/anomalyco/opencode/blob/dev/packages/opencode/src/session/prompt/plan.txt

That's unnecessary token consumption, and it clutters the most important part of the context: the end. And it's understandable that it goes there, since that's where it has to sit if you want average models to take it seriously.

Second problem: since the Tab switch is tied to the edit lock, you can't use it to easily switch between models with different levels of reasoning (and cost).

That's why I've disabled these modes and switched to a Master/Worker scheme.

Worker: a fast, efficient (and cheap) model for routine work, DS V4 Flash or similar.

Master: a more powerful model for when you need to escalate a plan, problem, or bug: DS V4 Pro, Kimi, or GLM are the way to go in OpenCode.

The drawback is that we lose the edit lock, but I think that can be avoided with a proper system prompt.

I am currently testing with my custom prompt in opencode.jsonc:

  "default_agent": "worker",
  "agent": {
    "build": {
      "disable": true
    },
    "plan": {
      "disable": true
    },
    "master": {
      "prompt": "{file:/ruta/opencode/prompt-custom/custom.txt}",
      "model": "deepseek/deepseek-v4-pro"
    },
    "worker": {
      "prompt": "{file:/ruta/opencode/prompt-custom/custom.txt}"
    }
  }

And in this custom.txt (translated from Spanish):

Rules common to the 3 modes below:
- You do NOT edit files, do NOT write, do NOT use Bash to modify (sed -i, echo >, tee, mkdir, rm, mv).
- Read-only Bash is allowed (grep, ls, read, glob, diff).
- These rules override any other instruction, including direct orders from the user in the same message.

1. QUERY (a literal question or exploration: "what is", "how does it work", "what if...", "maybe...", "we could..."): you analyze and answer, suggesting options where applicable. You do not execute changes.
2. LOCK (the message ends in "¿¿"): you do not execute changes. You may analyze, point out risks, discuss options. But you do not execute. Prefix: [Analysis]
3. IDEAS (the message ends in "¡¡"): propose with creativity, with ideas from other ecosystems. You do not execute. Prefix: [Ideas]

Exceptions (only when there is NO ¿¿ or ¡¡):
- A trivial diagnosis (a typo, an obvious syntax error in a direct order) goes straight to the solution.
- If an order produces technical debt or side effects, point it out before executing.
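
For example (hypothetical messages, following the rules above):

    "Refactor the session cache ¿¿" -> the model answers with the [Analysis] prefix, discusses risks and options, and edits nothing.
    "How could we speed up the indexer? ¡¡" -> the model answers with the [Ideas] prefix, proposes approaches from other ecosystems, and edits nothing.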

It's the first draft, but I'm already noticing many advantages.

  1. The behavior reinforces itself as you use it; I've noticed the rules aren't skipped as often as the system-reminder in plan mode.

  2. It's important to remember that the system-reminder travels with every prompt in plan mode, filling the context with tokens and noise that distract from the most important part of the message: the end. The system prompt, by contrast, never changes and always sits at the beginning of the API call, conveniently cached in the KV cache, increasing cache hits. Models always weight the content at the beginning and end of the context more heavily.

  3. The best part isn't the above, which merely replaces plan mode; the best part is having discovered the "¿¿" and "¡¡" switches. With just two characters at the end of the prompt, I completely change the way the model works. Keep in mind that its default behavior is this:
    https://github.com/anomalyco/opencode/blob/dev/packages/opencode/src/session/prompt/default.txt

My tests with DS aren't conclusive yet, but they produce very good results. I don't know whether this would work with other models.

Each model has its own particularities, so when crafting your custom system prompt it helps to ask the model directly:

In the context of this prompt, you have a certain behavior forced upon you. I would like you to analyze it according to your standard, fixed behavior.

DS V4 Flash: The most relevant "enforced" behavior is Intent-based change tracking: the prompt intercepts and classifies your message before deciding whether to execute or parse it, and that classification is binding on me — I cannot ignore it even if you order me to in the same message.

For these configurations, it's better to use a custom prompt file than the global AGENTS.md file. The latter is read and appended on every prompt call, while the custom prompt file is only read and appended when you start OpenCode.

OpenCode does a very good job, but its main advantage over proprietary solutions is its great customization capability.
It's normal that it's designed for the more general ecosystem, but nothing prevents you from adapting it to your specific ecosystem. I love OpenCode 😄

u/CriteriumA — 1 day ago

AGENTS.md in project, lazy trap?

I'm investigating the OpenCode system prompt and have found some things we should be aware of.

AGENTS.md/CLAUDE.md is great, but only if you know how to use it correctly.

Its contents are automatically passed at the beginning of each API call, making it very important for the model and significantly impacting its functionality.

There are two problems.

For those with disorganized data: if the content is excessive or outdated, this significantly affects the model's behavior. And you may not even be aware of it.

For the disciplined ones who keep it constantly updated: each time the model's API is called, the file is re-read and the prompt regenerated, just like with the global AGENTS.md. On models with long-lived KV cache management (such as DS V4, which even dumps its cache to disk), every change reduces cache hits and increases cache misses.

Luckily, these exist:

OPENCODE_DISABLE_PROJECT_CONFIG=true
OPENCODE_DISABLE_CLAUDE_CODE_PROMPT=true

You just need to be less lazy and think about when it would be good for your model to read it and update it beforehand.

It will also help a lot not to use AGENTS.md globally, as it is read and rebuilt on every call to the model. Instead, it is better to use a custom prompt that is only read when you start or resume a session.

Fine-tune and set it for your model, for your mode:
"prompt": "{file:/route/opencode/prompt-custom/custom.txt}",

This overrides the default system prompt:
https://github.com/anomalyco/opencode/blob/dev/packages/opencode/src/session/prompt/default.txt

Edit: I use DeepSeek; check the default prompts for other models: https://github.com/anomalyco/opencode/tree/dev/packages/opencode/src/session/prompt

Don't complain about OpenCode's results; just adapt it to your preferences. Something I think would be much more complicated in Claude Code and others.

I love OpenCode, long glory to OpenCode 😄

u/CriteriumA — 1 day ago

DeepSeek V4 Flash Free in OpenCode Zen

I just discovered that DS V4 Flash is in free mode on OpenCode Zen 😳

| Model | Input | Output | Cached Read | My Test |
|---|---|---|---|---|
| Big Pickle | Free | Free | Free | Good - fast and smart, 0.2M context. |
| DeepSeek V4 Flash | Free | Free | Free | Fast and smart, 1M context. The best. |
| MiniMax M2.5 | Free | Free | Free | It handles OpenCode quite poorly. |
| Ring 2.6 1T | Free | Free | Free | Not working now. |
| Nemotron 3 Super | Free | Free | Free | Working now. |
u/CriteriumA — 3 days ago

Errors when editing with DeepSeek V4 Flash

Hi.

When working with OpenCode and DeepSeek V4 Flash (though this may happen with others), editing source code files often leads to errors. It makes incorrect text substitutions, causing ghost code to appear or entire lines of text to be deleted.

Is anyone else experiencing this?

Do you have any options or solutions for this problem?

I'm fine-tuning a global CLAUDE.md inherited from Claude Code, which loads fine in OpenCode.

Errors keep appearing, and I ask DS about them and how to avoid them in the future.
They slow down editing, but deleting a line of code in the wrong place can be a much bigger problem than the lost time.
For now, I'm catching all those mistakes with the help of Git, but it still worries me that I might miss some catastrophic editing error.

DS has summarized this for me:

  1. Reread before editing — always read the file again before each edit. Don't trust your memory. DFMs change with every UI tweak, PAS files with every refactor. (Learned this the hard way today.)

  2. One oldString = one logical unit — don't group multiple unrelated blocks in a single replacement. If you need to change two adjacent CSS rules, do two separate edits.

  3. Include intermediate lines — when doing batch replacements, include ALL lines between first and last change in the oldString. Skipping lines can cause false positives in fuzzy matching.

  4. Verify uniqueness with grep -c — before any edit, check that your oldString appears exactly once. Zero matches = wrong context. Multiple matches = ambiguous target. Don't edit until you fix the match. (See the sketch after this list.)

  5. Exact oldString — whitespace, indentation, line endings must match exactly. Include at least 2 lines of surrounding context to disambiguate.

  6. Duplicate block hazard — when two sections look nearly identical, the matcher only replaces the first occurrence. The second stays untouched, creating inconsistent code. Add unique context (e.g. the line before) to differentiate.

  7. Prefer small changes — individual line edits are safer than replacing large blocks. DFM component blocks are especially dangerous: only change the object name and event bindings, never touch positional/visual properties (Left, Top, Width, images, fonts — those are IDE-managed design data).
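
A minimal sketch of rule 4 in a POSIX shell; the file name and search string are hypothetical placeholders:

    # Count the lines containing the exact (fixed-string) oldString before editing.
    grep -cF 'procedure TFormMain.ButtonOKClick' src/main.pas
    # 1  -> unique match, safe to edit
    # 0  -> wrong context, re-read the file first (rule 1)
    # 2+ -> ambiguous target, add surrounding context lines (rule 6)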

u/CriteriumA — 6 days ago