r/vibecoding

Anthropic Just Pulled the Plug on Third-Party Harnesses. Your $200 Subscription Now Buys You Less.
🔥 Hot ▲ 695 r/ClaudeCode+2 crossposts

Starting April 4 at 12pm PT, tools like OpenClaw will no longer draw from your Claude subscription limits. Your Pro plan. Your Max plan. The one you're paying $20 or $200 a month for. Doesn't matter. If the tool isn't Claude Code or Claude.ai, you're getting cut off.

This is wild!

Peter Steinberger writes: "woke up and my mentions are full of these

Both me and Dave Morin tried to talk sense into Anthropic, best we managed was delaying this for a week.

Funny how timings match up, first they copy some popular features into their closed harness, then they lock out open source."

Full Detail: https://www.ccleaks.com/news/anthropic-kills-third-party-harnesses

u/abhi9889420 — 15 hours ago
Fixed my ASO & went from Invisible to Getting Downloads.
▲ 6 r/vibecoding+1 crossposts

here's what i changed. my progress & downloads only became visible after 2 months. it didn't change overnight after making the changes.

i put the actual keyword in the title

my original title was just the app name. clean, brandable, completely useless to the algorithm. apple weights the title higher than any other metadata field and i was using it for branding instead of ranking.

i changed it to App Name - Primary Keyword. the keyword after the dash is the exact phrase users type when searching for an app like mine. 30 characters total. once i made that change, rankings moved within two weeks.

i stopped wasting the subtitle

i had a feature description in the subtitle. something like "the fastest way to do X." no one searches for that. i rewrote it with my second and third priority keywords in natural language. the subtitle is the second most indexed field. treating it like ad copy instead of a keyword field was costing me rankings.

i audited the keyword field properly

100 characters. i'd been repeating words already in my title and subtitle, which does nothing since apple already indexes those. i stripped every duplicate and filled the field with unique terms only.

the research method that actually worked: app store autocomplete. type your core category into the search bar and read the suggestions. those are real searches from real users. i found terms i hadn't considered and added the ones not already covered in my title and subtitle.
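
to make the audit concrete, here's a rough sketch of the dedupe logic in typescript. the title, subtitle, and candidate terms below are placeholders, not my actual metadata:

```typescript
// sketch: build the 100-char keyword field from candidate terms,
// dropping anything apple already indexes from the title/subtitle.
// all term lists here are placeholders.
const TITLE = "App Name - Primary Keyword";
const SUBTITLE = "second keyword phrase in natural language";
const CANDIDATES = ["budget", "tracker", "expense", "primary", "keyword"];

function buildKeywordField(candidates: string[], maxLen = 100): string {
  const indexed = new Set(
    `${TITLE} ${SUBTITLE}`.toLowerCase().split(/[^a-z0-9]+/).filter(Boolean)
  );
  const unique: string[] = [];
  for (const term of candidates.map((t) => t.toLowerCase())) {
    if (indexed.has(term) || unique.includes(term)) continue; // already indexed or duplicate
    const next = [...unique, term].join(",");
    if (next.length > maxLen) break; // stay under the 100-character cap
    unique.push(term);
  }
  return unique.join(",");
}

console.log(buildKeywordField(CANDIDATES)); // "budget,tracker,expense"
```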

i redesigned screenshot one

i had a ui screenshot first. looked fine, showed the app, converted nobody. users see the first two screenshots in search results before they tap. it's the first impression before they've read a word.

i redesigned it to show the result state, what the user's situation looks like after using the app, with a single outcome headline overlaid. one idea, one frame, immediately obvious. conversion improved noticeably within the first week.

i moved the review prompt

my rating was sitting at 3.9. i had a prompt firing after 5 sessions. session count tells you nothing about whether the user is happy right now.

i moved it to trigger after the user completed a specific positive action — the moment they'd just gotten value. rating went from 3.9 to 4.6 over about 90 days. apple factors ratings into ranking, so that lift improved everything else downstream.
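
for anyone wiring this up in an expo app like mine, the shape of it is roughly this. the trigger function and storage key are made up, and i'm assuming the expo-store-review and async-storage packages; adapt to whatever your stack uses:

```typescript
// sketch: request a rating right after a value moment, not after N sessions.
// onValueMomentCompleted is a placeholder for your app's positive action.
import * as StoreReview from "expo-store-review";
import AsyncStorage from "@react-native-async-storage/async-storage";

const PROMPT_KEY = "lastReviewPromptAt";
const MIN_DAYS_BETWEEN_PROMPTS = 90;

export async function onValueMomentCompleted(): Promise<void> {
  const last = await AsyncStorage.getItem(PROMPT_KEY);
  const daysSince = last ? (Date.now() - Number(last)) / 86_400_000 : Infinity;
  // only ask when the native prompt is available and we haven't asked recently
  if (daysSince > MIN_DAYS_BETWEEN_PROMPTS && (await StoreReview.isAvailableAsync())) {
    await StoreReview.requestReview();
    await AsyncStorage.setItem(PROMPT_KEY, String(Date.now()));
  }
}
```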

i stopped doing it manually

the reason i'd never iterated on aso before was the friction. updating screenshots across every device size, touching metadata, resubmitting builds. it was tedious enough to avoid.

i set up fastlane. it's open source, free, and handles screenshot generation across device sizes and locales, metadata updates, submission, provisioning profiles, and build uploads. once your lanes are configured, the whole update is a single command.

for submission and build management i switched to asc cli, an open source tool for driving app store connect from the terminal, no web interface. builds, testflight, metadata, all handled without leaving the command line.

the app was built with VibecodeApp, which scaffolds the expo project with localization and build config already set up, so aso iteration was baked in from day one.

what i'd do first if starting over

  1. move the primary keyword into the title
  2. rewrite the subtitle with keyword intent, not feature copy
  3. audit the keyword field, strip duplicates, fill with unique terms
  4. redesign screenshot one as a conversion asset
  5. fix the review prompt trigger
  6. set up fastlane so iteration isn't painful
u/Veronildo — 17 minutes ago
🔥 Hot ▲ 473 r/vibecoding

The real cost of vibe coding isn’t the subscription. It’s what happens at month 3.

I talk to non-technical founders every week who built apps with Lovable, Cursor, Bolt, Replit, etc. The story is almost always the same.

Month 1: This is incredible. You go from idea to working product in days. You feel like you just unlocked a cheat code. You’re mass texting friends and family the link.

Month 2: You want to add features or fix something and the AI starts fighting you. You’re re-prompting the same thing over and over. Stuff that used to take 5 minutes now takes an afternoon. You start copy pasting errors into ChatGPT and pasting whatever it says back in.

Month 3: The app is live. Maybe people are paying. Maybe you got some press or a good Reddit post. And now you’re terrified to touch anything because you don’t fully understand what’s holding it all together. You’re not building anymore, you’re just trying not to break things.

Nobody talks about month 3. Everyone’s posting their launch wins and download milestones but the quiet majority is sitting there with a working app they’re scared to change.

The thing is, this isn’t a vibe coding problem. It’s a “you need a developer at some point” problem. The AI got you 80% of the way there and that’s genuinely amazing. But that last 20%, the maintainability, the error handling, the “what happens when this thing needs to scale”, that still takes someone who can actually read the code.

Vibe coding isn’t the end of developers. It’s the beginning of a new kind of founder who needs a different kind of developer. One who doesn’t rebuild your app from scratch but just comes in, cleans things up, and makes sure it doesn’t fall apart.

If you’re in month 3 right now, you’re not doing it wrong. You just got further than most people ever do. The next step isn’t learning to code, it’s finding the right person to hand the technical side to so you can get back to doing what you’re actually good at.

Curious how many people here are in this spot right now.

u/vibecodejanitors — 23 hours ago
🔥 Hot ▲ 68 r/vibecoding

I’m wrong! I thought I could vibe code for the rest of my life! - said by my client who threw their slop code at me to fix

I’m seeing this new wave of people bringing in slop code and asking professionals to fix it.

Well, it’s not even fixable, it needs to be rewritten and rearchitected.

These people want it done for under a few hundred dollars and within the same day.

These cheap AI models and vibe coding platforms are not meant for production apps, my friends! Please understand. Thank you.

u/conquer_bad_wid_good — 9 hours ago
▲ 45 r/startups+1 crossposts

Feels like half the AI startup scene is just people roleplaying as founders [i will not promote]

Every day it’s “vibe code your startup”, “AI will change everything”, “one prompt = business”

but when you ask what people are actually using daily, it gets very quiet

feels like a lot of people are just funding LLM companies by burning tokens and calling it innovation

of course there are some genuinely useful things being built

  1. one guy vibe coded a panic button app for trading, press one button and it exits all his stock positions instantly

  2. another guy used AI to research treatment options for his dog’s cancer

that’s the kind of stuff that actually feels meaningful

real problem
real stakes
real usefulness

half this stuff feels like people using AI to avoid doing the actual hard part

the useful AI stuff is usually just one real problem, one useful fix, done

what’s the most genuinely useful AI thing you’ve seen built?

u/MotorRequirement7617 — 9 hours ago
▲ 37 r/ClaudeAI+1 crossposts

Sonnet rate limits are forcing me to rethink my whole workflow

I live in Claude Code with Sonnet on Middle Effort. Works great until Thursday or Friday hits and I slam the rate limit, then I'm stuck switching to Opus for things that don't need it. It's annoying enough that I'm actually thinking about how to design my work differently.

The frustrating part isn't that limits exist - it's that Anthropic clearly knows Sonnet is the workhorse model and set the ceiling knowing that. I get why from their side, but as someone who uses this daily for refactoring and architecture work, it forces me into these awkward moments where I have to decide: do I wait, or do I burn Opus tokens on something that would've been fine with Sonnet?

I'm genuinely curious how others handle this. Are you batching work differently? Switching models strategically? Or do you just accept the friction and use Opus when you need it? The ideal would be some way to know in advance what actually needs Opus intelligence versus what Sonnet can handle, but that's basically asking the model to rate its own capability.

u/Temporary_Layer7988 — 1 day ago
▲ 33 r/vibecoding+1 crossposts

Spent months on autonomous bots - they never shipped. LLMs are text/code tools, period.

I tested Figma's official AI skills last month. Components fall apart randomly, tokens get misused no matter how strict your constraints are - the model just hallucinates. And here's what I realized: current LLMs are built for text and code. Graphics tasks are still way too raw.

This connects to something bigger I've been thinking about. I spent months trying to set up autonomous bots that would just... work. Make decisions, take initiative, run themselves. It never happened. The hype around "make a billion per second with AI bots" is noise from people who don't actually do this work.

The gap between what LLMs are good at (writing, coding) and what people pitch them as (autonomous agents, design systems, full-stack reasoning) is massive. I've stopped trying to force them into roles they're not built for.

What actually works: spec first, then code. Tell Claude exactly what you want, get production-ready output in one pass. That's the real workflow. Not autonomous loops, not agents with "initiative" - just clear input, reliable output.

Anyone else spent time chasing the autonomous AI dream before realizing the tool is better as a collaborator than a replacement?

u/Temporary_Layer7988 — 2 days ago
Wrapped a ChatGPT bedtime story habit into an actual app. First thing I've ever shipped.

Background: IT project manager, never really built anything. Started using ChatGPT to generate personalized stories for my son at night. He loved it, I kept doing it, and at some point I thought — why not just wrap this into a proper app.

Grabbed Cursor, started describing what I wanted, and kind of never stopped. You know how it is. "Just one more feature." Look up, it's 1am. The loop is genuinely addictive — part sandbox, part dopamine machine. There's something almost magical about describing a thing and watching it exist minutes later.

App is called Oli Stories. Expo + Supabase + OpenAI + ElevenLabs for the voice narration. Most of the stack was scaffolded through conversations with Claude — I barely wrote code, I described it. Debugging was the hardest part when you have no real instinct for why something breaks.

Live on Android, iOS coming soon (but with the iPhone at home it's more difficult to make progress on :D).

Would be cool if it makes some $, but honestly the journey was the fun part. First thing I've ever published on a store, as someone who spent 10 years managing devs without ever being one.

here's the link on the play store for those curious, happy to receive a few ratings while the listing is brand new in production: Oli app.

and now I'm already building the next thing....

u/LevelGold4909 — 40 minutes ago
🔥 Hot ▲ 104 r/vibecoding

is anyone vibe coding stuff that isn't utility software?

every time i see a vibe coding showcase it's a saas tool, a dashboard, a landing page, a crud app. which is fine. but it made me wonder if we're collectively sleeping on the other half of what software can be.

historically some of the most interesting software ever written was never meant to be useful. the demoscene was code as visual art. esoteric languages were code as philosophy. games and interactive fiction were code as storytelling. bitcoin's genesis block had a newspaper headline embedded in it as a political statement.

software has always been a medium for expression, not just function. the difference is that expression used to require serious technical skill. now it doesn't.

so i'm genuinely asking: is anyone here building weird, expressive, non-utility stuff with vibe coding? interactive art, games, experimental fiction, protest software, things that exist purely because the idea deserved to exist?

or is the ecosystem naturally pulling everyone toward "practical" projects? and if so, is that a problem or just the natural order of things?

u/ConstantContext — 17 hours ago
▲ 5 r/vibecoding+1 crossposts

I built a 17-stage pipeline that compiles an 8-minute short film from a single JSON schema — no cameras, no crew, no manual editing

The movie is no longer the final video file. The movie is the code that generates it.

The result: The Lone Crab — an 8-minute AI-generated short film about a solitary crab navigating a vast ocean floor. Every shot, every sound effect, every second of silence was governed by a master JSON schema and executed by autonomous AI models.

The idea: I wanted to treat filmmaking the way software engineers treat compilation. You write source code (a structured schema defining story beats, character traits, cinematic specs, director rules), you run a compiler (a 17-phase pipeline of specialized AI "skills"), and out comes a binary (a finished film). If the output fails QA — a shot is too short, the runtime falls below the floor, narration bleeds into a silence zone — the pipeline rejects the compile and regenerates.

How it works:

The master schema defines everything:

  • Story structure: 7 beats mapped across 480 seconds with an emotional tension curve. Beat 1 (0–60s) is "The Vast and Empty Floor" — wonder/setup. Beat 6 (370–430s) is "The Crevice" — climax of shelter. Each beat has a target duration range and an emotional register.
  • Character locking: The crab's identity is maintained across all 48 shots without a 3D rig. Exact string fragments — "mottled grey-brown-ochre carapace", "compound eyes on mobile eyestalks", "asymmetric claws", "worn larger claw tip" — are injected into every prompt at weight 1.0. A minimum similarity score of 0.85 enforces frame-to-frame coherence.
  • Cinematic spec: Each shot carries a JSON object specifying shot type (EWS, macro, medium), camera angle, focal length in mm, aperture, and camera movement. Example: { "shotType": "EWS", "cameraAngle": "high_angle", "focalLengthMm": 18, "aperture": 5.6, "cameraMovement": "static" } — which translates to extreme wide framing, overhead inverted macro perspective, ultra-wide spatial distortion, infinite deep focus, and absolute locked-off stillness.
  • Director rules: A config encoding the auteur's voice. Must-avoid list: anthropomorphism, visible sky/surface, musical crescendos, handheld camera shake. Camera language: static or slow-dolly; macro for intimacy (2–5 cm above floor), extreme wide for existential scale. Performance direction for voiceover: unhurried warm tenor, pauses earn more than emphasis, max 135 WPM.
  • Automated rule enforcement: Raw AI outputs pass through three gates before approval. (1) Pacing Filter — rejects cuts shorter than 2.0s or holds longer than 75.0s. (2) Runtime Floor — rejects any compile falling below 432s. (3) The Silence Protocol — forces voiceOver.presenceInRange = false during the sand crossing scene. Failures loop back to regeneration.
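
To make the QA loop concrete, here's a minimal sketch of how those three gates could be expressed. The shot and compile shapes are illustrative stand-ins, not the real schema; only the thresholds come from the spec above:

```typescript
// Sketch of the three QA gates. Shot/compile shapes are illustrative;
// the thresholds are the ones described above.
interface Shot { durationSec: number; voiceOverPresent: boolean; silenceZone: boolean; }
interface Compile { shots: Shot[]; }

const MIN_CUT = 2.0;       // pacing filter: no cut shorter than 2.0s
const MAX_HOLD = 75.0;     // pacing filter: no hold longer than 75.0s
const RUNTIME_FLOOR = 432; // runtime floor in seconds

function passesQa(compile: Compile): boolean {
  const runtime = compile.shots.reduce((sum, s) => sum + s.durationSec, 0);
  if (runtime < RUNTIME_FLOOR) return false; // gate 2: runtime floor
  for (const shot of compile.shots) {
    if (shot.durationSec < MIN_CUT || shot.durationSec > MAX_HOLD) return false; // gate 1: pacing
    if (shot.silenceZone && shot.voiceOverPresent) return false; // gate 3: silence protocol
  }
  return true; // approved; otherwise the pipeline loops back to regeneration
}
```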

The generation stack:

  • Video: Runway (s14-vidgen), dispatched via a prompt assembly engine (s15-prompt-composer) that concatenates environment base + character traits + cinematic spec + action context + director's rules into a single optimized string (a rough sketch of this assembly step follows the list below).
  • Voice over: ElevenLabs — observational tenor parsed into precise script segments, capped at 135 WPM.
  • Score: Procedural drone tones and processed ocean harmonics. No melodies, no percussion. Target loudness: −22 LUFS for score, −14 LUFS for final master.
  • SFX/Foley: 33 audio assets ranging from "Fish School Pass — Water Displacement" to "Crab Claw Touch — Coral Contact" to "Trench Organism Bioluminescent Pulse". Each tagged with emotional descriptors (indifferent, fluid, eerie, alien, tentative, wonder).
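
Here's roughly what that prompt assembly step looks like as code. This is a sketch with placeholder values, not the production composer:

```typescript
// Sketch of the s15-prompt-composer concatenation: environment base
// + locked character traits + cinematic spec + action + director rules.
// All literal values below are illustrative.
interface CinematicSpec {
  shotType: string; cameraAngle: string; focalLengthMm: number;
  aperture: number; cameraMovement: string;
}

const CHARACTER_TRAITS = [
  "mottled grey-brown-ochre carapace",
  "compound eyes on mobile eyestalks",
  "asymmetric claws",
  "worn larger claw tip",
]; // injected into every prompt at weight 1.0

const DIRECTOR_RULES =
  "no anthropomorphism, no visible sky or surface, static or slow-dolly camera only";

function composePrompt(environment: string, spec: CinematicSpec, action: string): string {
  const cinematic =
    `${spec.shotType}, ${spec.cameraAngle}, ${spec.focalLengthMm}mm, f/${spec.aperture}, ${spec.cameraMovement}`;
  return [
    environment,
    CHARACTER_TRAITS.map((t) => `(${t}:1.0)`).join(", "),
    cinematic,
    action,
    DIRECTOR_RULES,
  ].join(". ");
}

console.log(composePrompt(
  "vast empty ocean floor, desaturated aquamarine, true blacks",
  { shotType: "EWS", cameraAngle: "high_angle", focalLengthMm: 18, aperture: 5.6, cameraMovement: "static" },
  "the crab crosses an open stretch of sand",
));
```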

The color system:

Three zones tied to narrative arc:

  • Zone 1 (Scenes 001–003, The Kelp Forest): desaturated blue-grey with green-gold kelp accents, true blacks. Palette: desaturated aquamarine.
  • Zone 2 (Scenes 004–006, The Dark Trench): near-monochrome blue-black, grain and noise embraced, crushed shadows. Palette: near-monochrome deep blue-black.
  • Zone 3 (Scenes 007–008, The Coral Crevice): rich bioluminescent violet-cyan-amber, lifted blacks, first unmistakable appearance of warmth. Palette: bioluminescent jewel-toned.

Pipeline stats:

828.5k tokens consumed. 594.6k in, 233.9k out. 17 skills executed. 139.7 minutes of compute time. 48 shots generated. 33 audio assets. 70 reference images. Target runtime: 8:00 (480s ± 48s tolerance).

Deliverable specs: 1080p, 24fps, sRGB color space, −14 LUFS (optimized for YouTube playback), minimum consistency score 0.85.

The entire thing is deterministic in intent but non-deterministic in execution — every re-compile produces a different film that still obeys the same structural rules. The schema is the movie. The video is just one rendering of it.

I'm happy to answer questions about the schema design, the prompt assembly logic, the QA loop, or anything else. The deck with all the architecture diagrams is in the video description.

----
Youtube - The Lone Crab -> https://youtu.be/da_HKDNIlqA

Youtube - The concept I am building -> https://youtu.be/qDVnLq4027w

u/pedroanisio — 2 hours ago
Tested Gemma 4 as a local coding agent on M5 Pro. It failed. Then I found what actually works.

I spent a few hours testing Gemma 4 locally as a coding assistant on my MacBook Pro M5 Pro (48GB). Here's what actually happened.

Google just released Gemma 4 under Apache 2.0. I pulled the 26B MoE model via Ollama (17GB download). Direct chat through `ollama run gemma4:26b` was fast. Text generation, code snippets, explanations, all snappy. The model runs great on consumer hardware.

Then I tried using it as an actual coding agent.

I tested it through Claude Code, OpenAI Codex, Continue.dev (VS Code extension), and Pi (open source agent CLI by Mario Zechner). With Gemma 4 (both 26B and E4B), every single one was either unusable or broken.

Claude Code and Codex: A simple "what is my app about" was still spinning after 5 minutes. I had to kill it. The problem is these tools send massive system prompts, file contents, tool definitions, and planning context before the model even starts generating. Datacenter GPUs handle that easily. Your laptop does not.

Continue.dev: Chat worked fine but agent mode couldn't create files. Kept throwing "Could not resolve filepath" errors.

Pi + Gemma 4: Same issue. The model was too slow and couldn't reliably produce the structured tool calls Pi needs to write files and run commands.

At this point I was ready to write the whole thing off. But then I switched models.

Pulled qwen3-coder via Ollama and pointed Pi at it. Night and day. Created files, ran commands, handled multi-step tasks. Actually usable as a local coding assistant. No cloud, no API costs, no sending proprietary code anywhere.

So the issue was never really the agent tools. It was the model. Gemma 4 is a great general-purpose model but it doesn't reliably produce the structured tool-calling output these agents depend on. qwen3-coder is specifically trained for that.
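
If you want to sanity-check a local model's tool calling before wiring it into an agent, a quick probe against Ollama's chat endpoint looks roughly like this. This assumes the standard /api/chat tool-calling interface, and the write_file tool here is just a dummy:

```typescript
// Quick probe: does this local model emit a structured tool call?
// Assumes Ollama is running locally on its default port.
async function probeToolCalling(model: string): Promise<boolean> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      stream: false,
      messages: [{ role: "user", content: "Create an empty file named notes.txt" }],
      tools: [{
        type: "function",
        function: {
          name: "write_file",
          description: "Create or overwrite a file",
          parameters: {
            type: "object",
            properties: { path: { type: "string" }, content: { type: "string" } },
            required: ["path"],
          },
        },
      }],
    }),
  });
  const data = await res.json();
  // Models trained for tool use return message.tool_calls; others answer in prose.
  return Array.isArray(data?.message?.tool_calls) && data.message.tool_calls.length > 0;
}

// probeToolCalling("qwen3-coder").then(console.log);
// probeToolCalling("gemma4:26b").then(console.log);
```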

My setup now:

- Ollama running qwen3-coder (and gemma4:26b for general chat)

- Pi as the agent layer (lightweight, open source, supports Ollama natively)

- Claude Code with Anthropic's cloud models for anything complex

To be clear, this is still experimental. Cloud models are far ahead for anything meaningful. But for simple tasks, scaffolding, or working on code I'd rather keep private, having a local agent that actually works is a nice option.

  1. Hardware: MacBook Pro M5 Pro, 48GB unified memory, 1TB
  2. Models tested: gemma4:26b, gemma4:e4b, qwen3-coder
  3. Tools tested: Claude Code, OpenAI Codex, Continue.dev, Pi

Happy to answer questions if anyone wants to try a similar setup.

https://preview.redd.it/xt8bqfoed6tg1.png?width=1710&format=png&auto=webp&s=2b378670f3a22248f0f81eef1ec1d881d4f11ff0

u/terdia — 2 hours ago
🔥 Hot ▲ 109 r/vibecoding

I'm a security engineer, I'll try to hack your vibe-coded app for free (10 picks)

I've spent 3+ years as a security engineer at Big Tech and have a bug bounty track record. I've been watching how many vibe-coded apps ship with the same critical security gaps.

I'm offering 10 free manual pentests for apps built with Lovable, Bolt, Cursor, or Replit.

What you get:

  • Manual security assessment (not just running scanners). I try to break your app the way a real attacker would, and verify whether each finding actually matters.
  • 2-3 hour assessment of your live app
  • Written report with every finding, its severity rating, impact, and why it matters

What I get:

  • Permission to write about the findings (anonymized, no app names)
  • An honest testimonial if you found it valuable

What I'm looking for:

  • Deployed apps built with Lovable, Cursor, Bolt, Replit Agent, v0, or similar
  • Bonus points if you have real users or are about to launch (higher stakes = more interesting findings)
  • Your permission to test

Drop a comment with what you've built and what tools you've used (a live link would be very helpful too) and whatever other info you would like to share. I'll pick 10 and DM you.

Note: I'm not selling anything. I'm exploring this niche and need real-world data. If you want help fixing what I find after, we can talk about that separately. You walk away with a full report regardless.

Edit: I have gotten a lot of DMs and way more interest than I expected. I'm going to keep this open for a few more days and will likely take on more than 10. Keep dropping your projects in the comments. You could also DM me if you'd prefer to keep the project private.

u/blueguy008 — 21 hours ago
🔥 Hot ▲ 80 r/vibecoding+1 crossposts

Farm Sim 100% made with AI - 6h build so far

Hello everyone,

I posted my Diablo 2 build yesterday, and thought I'd share some more games I'm trying to build (with the correct flair this time).

This is a farm simulator where the goal is to survive 10 nights, building up your farm with plants, animals, and food. I started this morning and this is how far I've gotten so far.

Happy to share some prompts that got me started! (I'll post an update later on my Diablo 2 ARPG progress)

u/sharkymcstevenson2 — 1 day ago
Day 9 — Building in Public: Mobile First 📱
▲ 6 r/IndieDev+4 crossposts

I connected my project to Vercel via CLI, clicked the “Enable Analytics” button…

and instantly got real user data.

Where users came from, mobile vs desktop usage, and bounce rates.

No complex setup. No extra code.

That’s when I realized: 69% of my users are on mobile (almost 2x desktop).

It made sense.

Most traffic came from Threads, Reddit, and X — platforms where people mostly browse on mobile.

So today, I focused on mobile optimization.

A few takeaways:

• You can’t fit everything the way you can on desktop → break it into steps

• Reduce visual noise (smaller icons, fewer labels)

• On desktop, cursor changes guide users → on mobile, I had to add instructions like “Tap where you want to place the marker”

AI-assisted coding made this insanely fast. What used to take days now takes hours.

We can now ship, learn, and adapt much faster.

That’s why I believe in building in public.

Don’t build alone. I’m creating a virtual space called Build In Live, where builders can collaborate, share inspiration, and give real-time feedback together. If you want a space like this, support my journey!

#buildinpublic #buildinlive

u/Chemical_Emu_6555 — 7 hours ago

Vibe coding is fun until your own code becomes a black box

I've been vibe coding for about 6 months now. Built a side project, a small SaaS, even helped a friend's startup ship an MVP in a weekend. It's incredible.

But here's what nobody talks about: three months later, when I need to add a feature or fix a bug in something I "wrote" — I have no idea how my own code works.

I prompted my way through it. The AI made architectural decisions I didn't review. Now I'm staring at files I technically created but can't explain to a teammate. I'm essentially a tourist in my own codebase.

The worst part? When something breaks, I can't debug it. I don't know why the auth middleware calls the refresh token endpoint twice. I didn't write that logic. I just said "add token refresh" and moved on.

So I started doing something different: after I vibe code a feature, I go back and actually learn what was generated. Not line by line — that's soul-crushing. More like: what's the flow, what are the key functions, what are the gotchas.

I built a small tool to help with this. It uses Claude Code to walk you through a codebase like a senior dev would — asks your background first, then adapts the explanations, tracks what you've actually understood vs. what you skimmed. It's called Luojz/study-code, on my github. But even without a tool, I think the practice of "post-vibe review" is something we should be talking about more.

Vibe coding without understanding is just accumulating debt you'll pay in panic later.

Anyone else feeling this? How do you handle it — just keep prompting and hope for the best?

u/Narrow_Fun_8404 — 22 hours ago
▲ 3 r/vibecoding+1 crossposts

I built a minimalist time-blocking tool for my own daily use. no data risk, data stays in your browser.

Why I built this:

I built a time-blocking/time-boxing website for my own personal use which is heavily inspired by timebox.so.

The Privacy benefits:

  • Zero Data Risk: Your data never leaves your machine. Everything is stored in your browser.
  • Export/Import: Since it's local-only, I added a feature to export your data to a file so you can move it or back it up manually.
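
Roughly how the local-only storage plus export/import works, as a sketch. The key name and data shape are illustrative, not the actual code:

```typescript
// Sketch of local-only persistence with manual export/import.
const STORAGE_KEY = "timebox.blocks";

interface TimeBlock { start: string; end: string; label: string; }

function saveBlocks(blocks: TimeBlock[]): void {
  localStorage.setItem(STORAGE_KEY, JSON.stringify(blocks)); // never leaves the browser
}

function exportBlocks(): void {
  const data = localStorage.getItem(STORAGE_KEY) ?? "[]";
  const url = URL.createObjectURL(new Blob([data], { type: "application/json" }));
  const a = document.createElement("a");
  a.href = url;
  a.download = "timebox-backup.json";
  a.click();
  URL.revokeObjectURL(url);
}

async function importBlocks(file: File): Promise<void> {
  const blocks = JSON.parse(await file.text()) as TimeBlock[];
  saveBlocks(blocks); // restore straight back into localStorage
}
```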

Link: https://nitish-17.github.io/Timebox/

Source: GitHub Link

u/EitherComfortable265 — 7 hours ago
Week