r/MistralVibe

Claude Code skill that delegates coding tasks to Mistral Vibe: saves ~2-4x on tokens (Mistral tokens are at least 50% cheaper) and avoids hitting usage limits


TL;DR: the title says it all. Use CC to delegate to Mistral Vibe, save tokens and cost, and avoid hitting limits.

I've been using Claude Code for various side projects and kept hitting usage limits (I'm on the Pro plan). At the same time I had Mistral Vibe, which I didn't use much because I appreciate CC's capacity to reason and structure its work.

So I'm sharing a skill that lets Claude Code delegate those tasks to Mistral Vibe while keeping Claude as the orchestrator: you benefit from CC's thinking and Mistral's cheap labor. Vibe natively uses mistral-medium-3.5 at 1.5 USD/M input tokens and 7.5 USD/M output tokens, i.e. half of Sonnet's rates. In my usage I've observed 2x-4x savings on Claude tokens for big tasks.
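To make the savings concrete, here's a back-of-the-envelope cost comparison using the per-million-token rates above. The Sonnet figures ($3 in / $15 out) are just the "2x" claim applied to the Mistral rates, and the token counts are an illustrative workload, not a benchmark:

```python
# USD per million tokens, from the rates quoted above
MISTRAL = {"in": 1.5, "out": 7.5}   # mistral-medium-3.5
SONNET = {"in": 3.0, "out": 15.0}   # 2x the Mistral rates

def cost(rates, in_tokens_m, out_tokens_m):
    """Cost in USD for a workload measured in millions of tokens."""
    return rates["in"] * in_tokens_m + rates["out"] * out_tokens_m

# Example: a big task with 2M input tokens and 0.5M output tokens
print(cost(SONNET, 2, 0.5))   # 13.5 USD if Claude does all the work
print(cost(MISTRAL, 2, 0.5))  # 6.75 USD if Vibe does the heavy lifting
```

That's the raw per-token half-price; the 2x-4x figure comes on top of it because Claude itself burns fewer tokens when it only orchestrates.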

Repo: github.com/pcx-wave/vibe-skill

Type /vibe before each instruction.

Claude decomposes the task, writes a self-contained prompt for Vibe, runs vibe-delegate, supervises the streaming output in real time, then checks the git diff before reporting back.
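The delegate step boils down to: hand a self-contained prompt to a worker CLI, stream its output so the orchestrator can supervise in real time, then inspect the git diff. A minimal sketch of that loop (the real vibe-delegate script lives in the repo; `cmd` here is a stand-in for the actual Vibe CLI invocation, which I'm not reproducing):

```python
import subprocess

def delegate(cmd, prompt):
    """Run a worker CLI on a prompt, streaming its output line by line.

    `cmd` is any command that reads a prompt on stdin and prints results;
    the actual skill wraps the Vibe CLI here.
    """
    proc = subprocess.Popen(
        cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
    )
    proc.stdin.write(prompt)
    proc.stdin.close()
    lines = []
    for line in proc.stdout:  # supervise the streaming output as it arrives
        lines.append(line)
    proc.wait()
    return "".join(lines)

def review_diff():
    """After the worker finishes, check what actually changed on disk."""
    return subprocess.run(
        ["git", "diff", "--stat"], capture_output=True, text=True
    ).stdout

# Demo with `cat` as a stand-in worker: it just echoes the prompt back.
print(delegate(["cat"], "refactor the parser\n"))
```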

I had to tweak the skill quite a bit to get it to a reliable stage because Vibe can have some rough edges (detailed in the repo). It can certainly still be improved.

You need the Vibe CLI to use it: https://docs.mistral.ai/mistral-vibe/terminal

EDIT 13/5: I've seen a few questions about using this skill with other models. Note that Vibe can be configured to use any LLM provider/model you want, so yes, you can use DeepSeek/Qwen/etc. within Vibe. Your model then gets access to all of Vibe's tools to do what it needs to.
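I haven't verified Vibe's exact config schema, so treat this as a purely hypothetical sketch of what pointing a CLI agent at another OpenAI-compatible provider usually looks like; the key names below are illustrative, and the linked docs are the authority:

```toml
# HYPOTHETICAL sketch - key names are illustrative, not Vibe's actual schema
[provider]
base_url = "https://api.deepseek.com/v1"  # any OpenAI-compatible endpoint
api_key_env = "DEEPSEEK_API_KEY"          # read the key from the environment

[model]
name = "deepseek-chat"
```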

u/pcx_wave — 1 day ago