
r/vibecoding
Started this in November. Just shipped to the iOS App Store this week (Android in closed testing). Wanted to share an honest retrospective with this community since most "AI built my app" posts skip the hard parts.
What is it: Cost Share — a Splitwise alternative with AI receipt scanning. Got tired of Splitwise capping me at 3 expenses per day on the free tier and paywalling receipt scanning. Built what I actually wanted to use.
scan receipt → assign items to people → review & confirm
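For context, the item-level split at the "assign items to people" step boils down to something like this (hypothetical shapes, not the actual app code):

```typescript
// Hypothetical data shapes for the scan → assign → confirm flow.
interface ReceiptItem {
  name: string;
  price: number;        // minor units (cents) to avoid float drift
  assignedTo: string[]; // user IDs sharing this item
}

interface ScannedReceipt {
  merchant: string;
  total: number;
  items: ReceiptItem[];
  confirmed: boolean;
}

// Split each item's price evenly among its assignees and
// accumulate a per-user total for the whole receipt.
function computeShares(receipt: ScannedReceipt): Map<string, number> {
  const shares = new Map<string, number>();
  for (const item of receipt.items) {
    const perPerson = item.price / item.assignedTo.length;
    for (const userId of item.assignedTo) {
      shares.set(userId, (shares.get(userId) ?? 0) + perPerson);
    }
  }
  return shares;
}
```

The "review & confirm" screen is then just rendering that map back to the user before anything is written to the DB.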
The honest split:
- AI (Claude) wrote ~70% of the code
- I made every architectural decision, every cutover call, every debugging diagnosis
- AI is a force multiplier, not a replacement for understanding what you're shipping
Stack:
- React Native + Expo (manual builds)
- Supabase (Postgres + RLS + Edge Functions)
- Gemini AI for receipt OCR
- RevenueCat for subscriptions
Where AI actually shined:
- Boilerplate React Native screens (saved weeks)
- SQL migrations and RLS policy drafts
- TypeScript refactors across 50+ files in minutes
- Reading stack traces and proposing fixes
- Edge function logic
Where AI confidently led me astray:
- Performance assumptions that didn't hold up. AI repeatedly told me certain backend patterns were fine. They weren't — when load actually hit, things broke in ways that took real debugging to figure out. Lesson: AI optimizes for "this should work" not "this will scale."
- "It's deployed, it's working" that wasn't. Shipped what I thought were live updates to users for weeks. They never actually reached devices. Took a careful audit to realize the deploy pipeline was misconfigured — AI had set things up confidently without verifying end-to-end delivery.
- Bypassed my own architecture rules. I had explicit project rules documented. AI still drifted from them on certain features. Caught via a code review pass with a different AI tool.
- Half-finished tasks across platforms. Updates that needed to land in multiple files often got applied to one platform and missed another. Caused store submission failures more than once. Now I keep checklists in CLAUDE.md for any cross-platform task.
Lessons after 6 months:
- AI is great at implementing, bad at verifying. Always test in production-like environments.
- Document architecture rules in CLAUDE.md — AI follows them when reminded
- Keep humans in the loop for irreversible stuff (prod DB migrations, version bumps, store submissions)
- Don't trust "this should work" — verify actual behavior end-to-end
- AI-assisted dev is sustainable for serious projects, but only if you're the one driving
What 6 months looks like:
- 30+ DB migrations (RLS, indexes, RPCs)
- AI receipt scanning with item-level assignment
- Push notifications with idempotency guards
- 30 currencies with live exchange rates
- Recurring expenses with timezone-correct scheduling
- Apple + Google signup
- Closed testing → App Store approval
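The idempotency guard for push notifications is conceptually just dedup on an (event, device) key. A minimal sketch (in-memory here for illustration; the real thing presumably keys on a Postgres table so it survives restarts):

```typescript
// Minimal idempotency guard sketch: never deliver the same
// notification event to the same device twice.
const sent = new Set<string>();

// Returns true if this (event, device) pair has not been sent yet,
// and records it; returns false for duplicates.
function shouldSend(eventId: string, deviceId: string): boolean {
  const key = `${eventId}:${deviceId}`;
  if (sent.has(key)) return false;
  sent.add(key);
  return true;
}
```

In production you'd want the check-and-record to be atomic (e.g. an `INSERT ... ON CONFLICT DO NOTHING` and branching on the row count), since two concurrent deliveries can race past an in-memory set.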
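"Timezone-correct" for recurring expenses mostly means asking "what day is it" in the user's IANA timezone rather than the server's. A sketch of the idea (function name is mine, not the app's):

```typescript
// Is a monthly recurring expense due "today" from the user's point
// of view? Evaluate the calendar day in their IANA timezone, not UTC.
function isDueToday(
  dueDayOfMonth: number,
  tz: string,
  now: Date = new Date()
): boolean {
  const localDay = Number(
    new Intl.DateTimeFormat("en-US", { timeZone: tz, day: "numeric" }).format(now)
  );
  return localDay === dueDayOfMonth;
}
```

At 12:00 UTC on Jan 15 it's already Jan 16 in Auckland (UTC+13 in summer), so a naive UTC check would fire the expense a day late for that user.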
If you're building something serious with AI, happy to answer questions about how I structured the workflow.
Website: cost-share.app
u/Big-Walrus-4479 — 7 days ago