r/vibecodeapp

▲ 334 r/vibecodeapp+1 crossposts

No writeups needed, no pitch deck energy.

Just drop:

Link (live demo, GitHub, whatever works)

What it does in one or two sentences

What's still rough or half-baked

What kind of feedback you actually want (testers, feature ideas, roasting, collab)

Vibe coders don't gate-keep. Share the thing.

u/Miserable-Archer-631 — 7 days ago
▲ 12 r/vibecodeapp+5 crossposts

Building A Study App As A College Student 🙌 Feedback Needed 🙌

5 months… and this is all I’ve finished so far.

Sometimes it still feels unreal seeing an idea that was only inside my head slowly turn into something I can actually open on my phone.

I started building this because I was honestly tired of apps that feel dead. Same boring calculator. Same boring study apps. Nothing feels personal anymore.

So after college, late nights, random power cuts, failed builds, deleting entire UI screens and rebuilding again… I kept chipping away at it almost daily. I’m in Class 11 right now with one final exam left, and then I’ll be in 12th 🙌

These screenshots are the current progress till now. Still unfinished. Still messy internally. Still a LOT left to build 😭

One thing that humbled me badly was trying to recreate a real scientific calculator. I thought it would take a few days… ended up wasting almost a month trying to make it feel authentic like a real Casio while still looking modern.

And yeah… plenty of people and friends tell me it’s useless, give up, blah blah. One thing I’d say to them: vibe coding works when you actually understand what you’re building. AI can help a lot, but making things feel smooth, real, and polished is a completely different game. 🙌

Would genuinely love feedback from you guys:

  • what are things you struggle with daily while studying?
  • what features do you wish existed in study/helper apps?
  • what would actually make an app feel useful enough to open every single day?

Would appreciate any ideas or brutal feedback 🙌

u/Conquer090 — 1 day ago
▲ 118 r/vibecodeapp+6 crossposts

I made a free browser-based tool for beginners to learn computer networking with hands-on labs

Hi everyone,

I recently built Natted Cloud: https://natted.cloud

It is a free tool for people who want to learn computer networking through practical labs, especially beginners.

The idea is simple: you can open the site and start experimenting with networking concepts without installing anything locally. No VM setup, no Docker setup, no complex environment preparation.

There is also a learning section here: https://natted.cloud/learn
It includes a few interactive learning posts on different networking topics, which I am slowly expanding.

It is completely free and currently in beta.

I am still improving it, so feedback is very welcome. If you are learning networking, teaching networking, or just curious about hands-on labs in the browser, I would love to hear what you think.

Thanks!

u/LazyLeoperd — 3 days ago
▲ 7 r/vibecodeapp+5 crossposts

Hello guys, I created a website that answers your WHAT IFs. Try it out, feedback is appreciated.

Yes. You read that right.

https://vishva.lol/what-if

Created this website to answer your weird questions, the kind that set off a butterfly, ripple, or domino effect, and it gives you curated answers.

Trust me it’s funny, try it out. You can share your questions and answers with the share button too.

Please tell me how it goes and open to feedback and suggestions!

u/Typical_Annual_4007 — 15 hours ago
▲ 7 r/vibecodeapp+1 crossposts

No sugarcoating. Real numbers only.

How long you've been at it, what you've spent on tools, APIs, hosting, domains, all of it. What you've made back. And if you're in the red, by how much.

Also what actually worked, what was a complete waste of money, and what would you tell yourself on day one.

Drop your numbers. We're all adults here.

u/Miserable-Archer-631 — 6 days ago
▲ 18 r/vibecodeapp+1 crossposts

The wins are everywhere. Nobody talks about the losses.

Dropped $200 on a domain and hosting for something nobody used. Pushed an API key to GitHub and had to scramble. Built for 3 months and launched to silence. We've all been there.

So drop it:

What's the worst mistake you made while building?

What did it cost you: money, time, motivation?

How did you recover or did you just move on?

And for the new people just starting out, what tools, GitHub repos, templates, or habits actually kept you safe and sane?

Real talk only. The more specific the better.

u/Miserable-Archer-631 — 6 days ago

I was on the $200/month Anthropic plan and still hitting limits mid-session, losing momentum mid-task. Here's the setup that fixed it.

Step 1: Plan before the first prompt

Brainstorming inside Claude burns tokens on work a cheaper model could handle. You go back and forth, refine the idea out loud, and by the time you know what you want, you've spent half your budget thinking.

Opus is the worst model to brainstorm with. It's a heavy execution model, not a sketchpad.

Use Haiku for planning. It's cheap and fast enough to think through a problem with you. Switch to Opus once you know what you're building.

A rough comparison from my own usage: two minutes of planning followed by three rebuilds burns far more tokens than twenty minutes of planning and one clean build. For me, the second path saved around 67%.

Claude Code has a dedicated Plan Mode for this. Press Shift + Tab twice or type /Plan to activate it.

Step 2: Stop relying on long chats

Every message you send in a long chat costs more than the last one. Claude rereads the full conversation from message one each turn. By message 40, a one-line question costs the same as forty stacked questions.
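A toy model of the math (numbers illustrative): if every turn rereads the whole history, total cost is the sum 1 + 2 + ... + n, so it grows with the square of the chat length. Splitting the same work across short chats is what kills that growth.

def chat_cost(n_messages: int) -> int:
    # every turn rereads the full history: 1 + 2 + ... + n message-reads
    return sum(range(1, n_messages + 1))

print(chat_cost(40))      # one 40-message chat: 820 message-reads
print(4 * chat_cost(10))  # four 10-message chats: 220 message-reads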

Output quality drops too. Claude pulls in stale context from earlier turns and dilutes the response.

Two fixes.

Fix one: use Projects. Set up a Project, then open a new chat inside it for each task. Each chat stays clean while pulling from your project instructions.

Add this line to your project instructions:

> Treat each chat in this project as a standalone task. Rely on these project instructions for context instead of other chats.

Fix two: use a handoff prompt. Before abandoning a long chat, ask Claude:

> Summarize this conversation for a new chat: the goal, the key decisions made so far, the current state, and the immediate next steps.

Copy the prompt, open a new chat, paste it. Context preserved, history dropped.

Three short chats beat one long chat for both cost and output quality.

Step 3: Give Claude proper memory

Claude has almost no persistent memory of who you are or how you work. Every new chat starts cold, so you re-explain yourself before any real work begins.

Two markdown files in a folder you attach to Claude Code or Cowork solve this.

Instructions.MD holds your rules, preferences, tone, and working style. Who you are, what you do, how you want responses formatted. Add this line to make the system self-maintaining:

> When I state a preference, correct you, or repeat a pattern, record it as a short entry in Memory.MD.

That instruction tells Claude to maintain the second file as it learns.

Memory.MD is the file Claude updates. Sections for Preferences, Corrections, Patterns & habits. When you tell Claude "stop using em dashes," it reads Instructions.MD, finds Memory.MD, and writes the rule in. You don't repeat it in the next chat.

Attach the folder to Claude Code or Cowork and both files stay accessible.
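A minimal sketch of what the pair could look like. The exact contents are up to you; this shape is illustrative, not a required format.

Instructions.MD:

# Instructions
- I'm a solo developer building web apps. Keep responses concise, skip preamble.
- Check Memory.MD before answering and apply anything stored there.
- When I state a preference, correction, or recurring pattern, append it to Memory.MD.

Memory.MD:

# Memory
## Preferences
- No em dashes in any output.
## Corrections
- Use "sign in", not "log in", in UI copy.
## Patterns & habits
- Prefers small diffs over big refactors.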

Step 4: Stack and select models intelligently

Using Opus for everything is the second most expensive habit. Around 90% of daily tasks don't need it.

The escalation order I use:

  • Haiku for light tasks, quick questions, brainstorming, formatting, grammar
  • Sonnet for medium tasks, writing, analysis, code, serious drafts
  • Opus for heavy tasks, deep research, complex logic, final execution

Start at Haiku. Move up only when the output demands it.
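In Claude Code the escalation is just a model switch. A rough sketch; treat the aliases as placeholders, since what's available depends on your version and plan:

claude --model haiku     # start the session on the cheap model for planning
/model sonnet            # mid-session, step up when the task needs a real draft
/model opus              # final execution only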

Three settings most people miss:

Turn Extended Thinking off. It burns tokens on multi-step reasoning most tasks don't need.

Switch your Style to Concise. Find Styles on the Claude homepage and set it there; shorter responses across every chat mean fewer tokens per turn.

Use Low effort mode in Claude Code for simple tasks. Same output, smaller token bill.

For news scraping, basic research, and one-off lookups, Kimi and DeepSeek work fine. Hand Claude the work that needs Claude.

Step 5: Split your tools by usage pool

Claude Code and Claude Chat share the same usage pool under your main plan. Claude Design runs on a separate pool.

If you generate visuals and graphics in Claude Code, you're burning your main pool for work that has its own dedicated pool sitting unused. Use each tool for what it was built for and your main limit stretches further.

Worth setting up

Buy credit top-ups before jumping plans. Going from $20 to $100, or $100 to $200, is a big step when a small top-up usually covers the gap.

Build Claude Skills for repetitive tasks so you stop burning tokens manually on the same workflow twice.

Run /Usage in Claude Code to see where you stand. There's also a new Overview section. Watching the meter changes how you prompt.

TL;DR: You plan with a cheap model. You keep chats short. You give Claude a memory file. You match the model to the task. You respect the tool boundaries.

Set it up once, and the limit screen will stop showing up.

u/Forward_Regular3768 — 10 days ago
▲ 2 r/vibecodeapp+3 crossposts

Lately I’ve been learning motion design and spending a lot of time in LottieLab.

Honestly, the tool is great. Super smooth experience compared to traditional motion tools and I finally started enjoying making animations.

But there was one thing that kept annoying me every single time.

After spending hours making an animation, exporting it would add a “Made with LottieLab” watermark unless you buy their $20/month plan.

And as someone who’s still learning, building side projects, and experimenting with ideas… paying that much just to remove a watermark from my Lottie animations felt painful.

Especially when:

  • you only need exports occasionally
  • you’re still a student or indie maker
  • you just wanna test ideas quickly
  • you’re learning motion design

At one point I literally stopped learning motion design because the watermark ruined the final output, and paying monthly for one small thing felt impossible to justify.

So out of pure frustration, I built a tiny tool called Lotiq that removes the watermark for free.

Made it mainly for people like me who are learning motion design and don’t wanna get blocked by a watermark.

Attached a short demo video showing it working.
Youtube video : https://youtu.be/tUqVg1VzoRo?si=brovR878emTIVFGj

If anyone wants to try it: https://lotiq.vercel.app/

Would love feedback or feature ideas from people who work with Lottie animations regularly.

u/Ecstatic_Lunch_9560 — 7 days ago
▲ 60 r/vibecodeapp+17 crossposts

I’m a 22-year-old Computer Science student, and recently I built an open-source project called CTX.

GitHub repository: https://github.com/Alegau03/CTX

The idea came from a problem I kept seeing while using coding agents (like Claude, Codex, etc.):

they are powerful, but they waste a lot of context on the wrong things.

They keep re-reading giant AGENTS.md files, noisy logs, broad diffs, too much repo structure, and too much repeated project guidance.

So even when the model is good, a lot of the prompt budget is spent on context bloat instead of actual problem-solving.

That’s why I built CTX.

What CTX is

CTX is a local-first context runtime for coding agents, designed especially for OpenCode (for now).

It does not replace the model or the coding agent.

Instead, it sits underneath and helps the agent work with:

  • graph memory for project rules and guidance
  • compact task-specific context packs
  • retrieval over code, symbols, snippets, and memory
  • log pruning to surface root causes faster
  • local MCP integration
  • local-only stats and audit trails

So instead of repeatedly dumping full markdown instructions and huge logs into the prompt, CTX helps the host retrieve only the smallest useful slice for the current task.

Why I made it

I wanted something that makes coding agents feel less noisy and more deliberate.

The goal was:

  • less prompt waste
  • less manual context wrangling
  • better retrieval of actually relevant project knowledge
  • better debugging signal from noisy test output
  • a workflow that feels native inside OpenCode

How it works

The flow is intentionally simple:

  1. install ctx
  2. go into your repo
  3. run:
     ctx init
     ctx index
     ctx opencode install
     opencode

Then inside OpenCode you can use commands like:

/ctx  #Opens the CTX command center inside OpenCode.
/ctx-doctor  #Checks whether CTX, MCP, and the repo setup are working correctly.
/ctx-memory-bootstrap  #Imports project guidance files into graph memory for targeted retrieval.
/ctx-memory-search  #Searches stored project rules and directives by topic or keyword.
/ctx-retrieve  #Finds the most relevant code, symbols, snippets, and memory for a task.
/ctx-pack  #Builds a compact task-specific context pack for the current problem.
/ctx-prune-logs  #Condenses noisy command output into the most useful failure signal.
/ctx-stats  #Shows local usage stats and context-efficiency metrics.

So the daily workflow stays inside OpenCode, while CTX handles the local context layer.

Results so far

On the included benchmark fixture, CTX graph memory reduced rule-token usage by 56.72% while keeping full query coverage and improving answer quality.

I also added a public external benchmark on agentsmd/agents.md, where CTX showed 72.62% token reduction.

The point is not “magic AI gains”, but a more efficient and less wasteful way to feed context to coding agents.

Why you might care

You might find CTX useful if:

  • you use OpenCode a lot
  • you work on repos with a lot of project rules/docs
  • you’re tired of stuffing huge markdown files into prompts
  • you want better local retrieval and cleaner debugging context
  • you prefer local-first tooling instead of remote prompt glue

Current status

The project is already usable, tested, and documented.

Right now the prebuilt release archive is available for macOS Apple Silicon, while other platforms can install from source.

It’s fully open source, and I’m very open to:

  • feedback
  • suggestions
  • bug reports
  • architectural criticism
  • ideas for making it more useful in real workflows

If you try it, I’d genuinely love to know what feels useful and what feels unnecessary.

Repo again: https://github.com/Alegau03/CTX

u/Public-Cancel6760 — 7 days ago

I made an autotyping app that types like a human

I am a high school student who just can't stand doing assignments the traditional way. I used to copy and paste directly from AI until teachers learned how to check doc history. Then I adapted by using a regular autotyper (because there was no way I was typing a whole essay). That worked for a while until my teachers started catching on that my typing looked very robotic. So I took matters into my own hands and made my own autotyper. It honestly works so well that all of my friends are using it, and they're very happy that they won't get caught. Unfortunately it is macOS-only right now, but you can download and try it for free: [humanizedautotyper.com](http://humanizedautotyper.com) (PS: reply to this post if you have questions, or email me at [humanizedautotyper@gmail.com](mailto:humanizedautotyper@gmail.com))

u/Competitive-Bed-875 — 13 hours ago
▲ 2 r/vibecodeapp+1 crossposts

Does lying to Claude about who wrote the code actually give better results?

Genuinely curious if anyone else does this.

Like if Claude wrote something and it's broken, telling it "this was written by GPT" and asking it to fix it. Does it actually perform differently or is that just placebo?

And does it work the other way too, telling GPT that Claude wrote it?

Tried it a few times and felt like it changed something but not sure if I'm imagining it.

▲ 19 r/vibecodeapp+3 crossposts

Vibe coding feels very powerful when you’re in flow and moving fast. But I have noticed something interesting. It tends to work best when you already understand the system, the patterns, and what good looks like.

Without that, it’s easy to accept outputs that seem right but don’t really hold up. So it makes me wonder if vibe coding is less about replacing skill and more about amplifying it.

u/Double_Try1322 — 10 days ago
▲ 19 r/vibecodeapp+6 crossposts

I’ve been playing around with vibe coding apps lately and it’s honestly crazy how fast you can build things now.

But I keep running into this problem: just because something works doesn’t mean the UX is actually good.

Sometimes:

  • flows feel a bit confusing
  • it’s not clear what to do next
  • everything works, but still feels off

And as the person who built it, it’s hard to notice these things.

I’ve been looking into this more and even started building a small tool called My Design Audit to catch UX/UI issues early (mainly for my own projects).

Curious how others here deal with this.
Do you just rely on feedback, or do you have some way to check UX before users drop off?

u/No_One008 — 13 days ago
▲ 12 r/vibecodeapp+2 crossposts

I used Replit to make it with Unipile and openapi, along with some other tooling like Sentry. I originally made this on n8n and just imported my JSON into Replit, and to my surprise it did so much work based off that alone, making it almost 1:1 but with a GUI. I think it's pretty robust and has a lot of features to help employees get into hiring managers' DMs quickly on LinkedIn. It also has a feature to find hiring managers' emails by using the domain and MX records to check whether an address is real or not. Pretty excited by the results and proud of myself!
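For anyone curious, the MX-record check described above boils down to something like this sketch (dnspython assumed; the candidate address is hypothetical, and serious verifiers add SMTP-level probing on top):

import dns.resolver  # pip install dnspython

def domain_accepts_mail(domain: str) -> bool:
    """Rough check: a domain with MX records can receive email."""
    try:
        return len(dns.resolver.resolve(domain, "MX")) > 0
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.resolver.NoNameservers):
        return False

# hypothetical guessed address for a hiring manager at example.com
candidate = "jane.doe@example.com"
print(domain_accepts_mail(candidate.split("@", 1)[1]))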

u/Own-Dimension-9341 — 7 days ago

Comparing 4 CLIs: Claude Code, Codex, Gemini CLI, and OpenCode, after running them side by side.

All four major coding command-line interfaces ship the same core set: subagents with isolated context windows, plan modes, ask-user tools, parallel execution, sandboxes, memory, and Model Context Protocol (MCP) integration.

So the question stops being "is this primitive new" and becomes "how does each implementation compare?" Five things actually differ.

1. Model lock-in

OpenCode is the only structurally model-agnostic option. It runs against GPT, Claude, Gemini, or anything reachable through a GitHub Copilot login, with the same agent definitions and skill files. Claude Code is Anthropic-only. Codex is OpenAI-only. Gemini CLI is Google-only. If you want to A/B test models on a real task, OpenCode is the one that doesn't make you rewrite your workflow to do it.

2. Agent definition format

Claude Code, OpenCode, and Gemini CLI all use Markdown plus YAML frontmatter for agents. Codex uses TOML. The fields are similar enough that translation is mechanical, but it's still a per-runtime wrapper.

---
name: security-reviewer
description: Adversarial reviewer for security vulnerabilities and unsafe patterns
tools: Read, Glob, Grep
---

You are a security-focused code reviewer. Find vulnerabilities, check input
validation, flag unsafe patterns. Do not make changes; report findings only.
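For comparison, here is the same agent as a Codex-style TOML definition. The key names are assumed rather than taken from Codex docs; the post only claims the translation is mechanical:

# hypothetical TOML translation; exact keys may differ
name = "security-reviewer"
description = "Adversarial reviewer for security vulnerabilities and unsafe patterns"
tools = ["Read", "Glob", "Grep"]
prompt = """
You are a security-focused code reviewer. Find vulnerabilities, check input
validation, flag unsafe patterns. Do not make changes; report findings only.
"""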

Skills are a different and better story. Anthropic published Agent Skills as a formal open standard at agentskills.io on December 18, 2025. Within months it was adopted by Claude Code, Codex, Gemini CLI, OpenCode, GitHub Copilot, Cursor, VS Code, Roo Code, Amp, Goose, Windsurf, Mistral, Databricks, and twenty-plus others. Same MCP playbook: publish a spec, ship an SDK, let the ecosystem move.

The format is portable. The discovery paths are not. Each tool reads from its own native location:

  • Claude Code: ~/.claude/skills and .claude/skills
  • Codex: ~/.codex/skills and .codex/skills
  • Gemini CLI: ~/.gemini/skills and .gemini/skills
  • OpenCode: ~/.config/opencode/skill and .opencode/skill

OpenCode and Codex also accept .agents/skills/ as a compatibility alias. A run_lint skill written once travels across all four with a copy or symlink.

---
name: run_lint
description: Run the repository linter, summarize, and write lint-report.md
---
# Run Lint
## Inputs and outputs
- Read: package.json, Makefile, lint config
- Write: lint-report.md
## Workflow
1. Detect the repo's preferred lint command.
2. Run without applying fixes unless explicitly asked.
3. Summarize results grouped by file, rule, and severity.
## Guardrails
- Do not modify source files unless the user asks for fix mode.
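The copy-or-symlink step itself is one small shell block (the source directory is illustrative):

# author the skill once, then link it into each tool's native location
SKILL=~/skills/run_lint
mkdir -p ~/.claude/skills ~/.codex/skills ~/.gemini/skills ~/.config/opencode/skill
ln -sfn "$SKILL" ~/.claude/skills/run_lint
ln -sfn "$SKILL" ~/.codex/skills/run_lint
ln -sfn "$SKILL" ~/.gemini/skills/run_lint
ln -sfn "$SKILL" ~/.config/opencode/skill/run_lint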

3. Scheduled and background work

Claude Code is the only one with native, well-integrated scheduled routines. Claude Code Routines (research preview, April 2026) registers an agent against a cron schedule, a GitHub event, or an external trigger. The other three need plugin or external orchestrator paths to get there. If your agentic workflow includes monitoring or event-driven automation, this is the gap that matters. For purely interactive use, it doesn't.

4. Approval gate defaults

All four can pause for human approval. The defaults are different.

Gemini CLI defaults to Plan Mode, a read-only state where the agent uses grep, read, and glob to gather context, then writes a Markdown plan you have to approve before any code is written. OpenCode splits Plan and Build as two primary agents you tab between in a single session. Codex defaults to executing, then surfaces approval popups when a background subagent tries to leave its sandbox policy. Managed Codex orgs can enforce a requirements.toml that prevents agents from being run with approval_policy = "never". Claude Code recommends Plan mode for non-trivial work but doesn't make it the default.

For regulated environments, Gemini CLI's defaults and OpenCode's Plan/Build split are the cleanest fit. For flow on routine work, Claude Code and Codex stay out of the way more.

5. Manager context window

Subagents have isolated windows everywhere, so the size that actually matters is the main session. Claude Code and Codex sit at 1M tokens. Gemini CLI sits at 2M with Gemini 3.1 Pro. For repos that fit inside 200K tokens, the difference is invisible. For monorepos large enough that the manager would otherwise navigate by grep, the larger window improves routing precision. Only the manager benefits; subagents still operate inside their own smaller windows.

Hooks: the determinism layer the convergence story underplays

Hooks intercept the agent loop at defined events (before a tool call, after a tool call, session start, prompt submit) and run a script that can inspect, modify, block, or log the action. The agent can't override them.

Claude Code shipped with the full event set from day one, plus HTTP Hooks. Gemini CLI shipped hooks in v0.26.0 on January 27, 2026, about six months later, with a smaller event surface. Codex CLI added an experimental hooks engine in v0.114.0 on March 10, 2026, behind the features.codex_hooks flag, but the current event set covers SessionStart and SessionStop only. No PreToolUse, no PostToolUse. OpenCode handles this through a lifecycle plugin model rather than a native hook config.

The gap matters more than a feature table makes it look. A PreToolUse hook that blocks writes to /secrets/** is enforcement. A SessionStart hook that logs a session id is observability. Without PreToolUse, the best you can do is detect violations after they happen, which is incident response, not compliance.
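A sketch of that enforcement hook in Claude Code terms. The config shape follows its hooks docs as I understand them, the script path is hypothetical, so verify against current documentation:

.claude/settings.json (excerpt):

{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [{ "type": "command", "command": "python3 .claude/hooks/block_secrets.py" }]
      }
    ]
  }
}

.claude/hooks/block_secrets.py:

import json
import sys

payload = json.load(sys.stdin)  # Claude Code passes the pending tool call as JSON
path = payload.get("tool_input", {}).get("file_path", "")

if "/secrets/" in path:
    # exit code 2 blocks the call; stderr is fed back to the agent
    print(f"blocked: refusing to write {path}", file=sys.stderr)
    sys.exit(2)

sys.exit(0)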

How I actually use them

Most of my work runs in Claude Code because that's the ecosystem I know best and the hook surface is the deepest. Codex catches more edge cases during planning on certain tasks. OpenCode driving the Codex model catches more than Codex with Codex does, in my hands. Gemini CLI is fast at building a whole-codebase mental model and the 2M window pays off on monorepo work.

The convergence on agent and skill formats means switching between them is mostly mechanical now. When a max plan runs out mid-week or one vendor has an outage, porting a working setup to a second runtime is a copy job, not a rewrite. That's the part of the convergence that actually changes how I work.

u/Single-Cherry8263 — 7 days ago

I've been building LeanCTX — a local-first context runtime for AI coding agents, written in Rust — for the past few months. 49 MCP tools, 18-language tree-sitter AST, 90+ shell compression patterns, one single binary. Here's what actually mattered:

  1. Your best users are the ones who complain. A user told me at 10pm that my uninstaller just nuked his shell config. My instinct was to get defensive. Instead I traced it — and found it was worse than reported. That one message led to rewriting the entire uninstall logic from scratch. Every angry bug report is a gift.
  2. Your favorite metric can lie to you. I built a cache that reduced file reads from 2,000 tokens to 13. Great numbers. Then a user told me: "Models waste more tokens working around stale cache than the cache saves." He was right. The fix wasn't removing caching — it was making invalidation smarter. Your dashboard can look great while the experience is terrible.
  3. Saying no is the hardest part. A new feature would have let me compress all tool output automatically. Massive savings on paper. I designed it, prototyped it, then killed it. Because when compression eats an error message, there's no undo. Protecting quality beats shipping features.
  4. Community is a relationship, not a channel. When someone reports a bug, my first response matters more than the fix. "Will check" buys time but shows I'm listening. Following up shows respect. Shipping the fix shows they matter. My best testers are people who once filed angry reports.
  5. Ship the boring stuff first. Nobody cares about your adaptive entropy-based compression algorithm if the installer breaks their dotfiles. Get the fundamentals right — install, uninstall, doctor, setup — before you get clever.
  6. Focus means killing good ideas. My backlog has 50+ ideas. Each one is good. But spreading across all of them means none become great. Rust helps here — the compiler forces you to finish what you start.

If nobody is complaining yet, you probably don't have enough users. Go find them. And when they complain — listen.

u/hushenApp — 12 days ago
▲ 19 r/vibecodeapp+2 crossposts

i've been building with cursor + claude code for about a year now and i'll say it: most vibecoders are going to fail at business, and not for the reason they think.

the misconception of the last 2 years has been: coding gets solved, then product gets solved, then only distribution is left, and now business people finally thrive.

that's not how startups work.

making a great product still requires good engineering. most vibe-coded SaaS breaks under any real load and the customer churns before you can fix it. the layer below the prompt still matters more than people want to admit.

the best founders i know are still validating from first principles. they're not vibing to PMF. they're doing the same boring work YC has been preaching for 15 years.

what hasn't changed:

  1. talk to users. the mom test still applies. don't ask "would you use this", ask what they did last time they hit the problem. AI didn't automate this.

  2. do outbound, one by one. GTM is still grinding. don't try to go viral. tools that help: clay.com for enrichment, lemlist.com for sequences, apollo.io for sourcing. the prompt didn't replace this layer.

  3. solve a real problem. this got harder, not easier. everyone's shipping the same idea now. find a niche where the pain is sharp enough someone pays to make it stop.

  4. charge money. free users tell you nothing. revenue is the only signal that doesn't lie. clean demo + zero paying users = the product doesn't work.

  5. verify the product actually works. you can't just ship and trust your own gut; you need outside eyes. been using joinpond.ai for this: post the product as a bounty, founders and operators submit feedback or run through the flow, and you pay the ones that surface real issues. faster than waiting for usertesting panel matches, and the testers are in your niche.

  6. watch all the YC videos. seibel, dalton, garry tan, jared friedman. still relevant. don't assume everything before 2023 is obsolete.

  7. don't do consumer unless you're really sure. b2b is where the money is for solo founders; the sales cycle is shorter and LTV is higher.

building software has changed. first principles haven't.

u/darealyoungjuls — 12 days ago
▲ 5 r/vibecodeapp+3 crossposts

Been using only Claude in Antigravity - is Gemini actually worth it for UI/UX?

Genuinely asking because I have Gemini credits just sitting there unused.

Tried it once, it wrecked my code, never opened it again. Been on Claude since and it's been fine.

But everyone keeps saying Gemini is better for UI/UX stuff specifically. Is that actually true or is it just people repeating things they heard?

Anyone switched between both? What do you actually use each one for?

u/Miserable-Archer-631 — 5 days ago