u/AwareNetJake


Hey all,

My name is Jake. I wrote this post because I see a ton of issues with vibe-coded tools all the time, and there are plenty of them here.

I think I'm going to start sharing some build-in-public notes from working on my second tool, mostly because I’ve learned a lot from vibe coding and I keep seeing people run into the same traps. 

My goal with this post (and the series) is to help people who come across it realise that their projects might have issues they don't even know exist.

Quick background: I’m a software engineer with years of work in cybersecurity. I used to work at a very large company building massive security tools with some insanely good engineers. We had to have our tools work fast and scale fast (both vertically and horizontally as we added features).

Then I left to build my main startup in public safety / emergency response.

That startup is still my main thing. But public safety is not really where I wanted to test AI-written code. If something goes wrong there, it is not just “oops, the button is broken.” It matters a bit more. 

So when I started running into the normal founder problem of wearing 19 hats at once, I decided to build a side tool to solve one of my own problems.

Every founder needs a little side action here and there

Basically, I needed faster prospect research for sales and recruiting (still looking for a non-technical co-founder for my main startup if anyone is interested and in the US) and better personalized outreach for sales.

That became SignalReach. (Just posting the link here to show what I've been working on and that what is below here is real)

And because it was not my main company, I figured it was the perfect playground to see how far I could push AI coding.

I’m about 5-6 weeks in now. AI has probably written 80%+ of the app.

My take so far:

AI coding is really cool. So cool, in fact, that AI will build something that looks production-ready while being held together by hopes and dreams. And there are many "founders" who don't even realise they should be praying their tool keeps working as expected lol.

Google AI Studio is honestly pretty sick

For a first version or POC, Google AI Studio has been fairly awesome.

If you give it a good PRD, user flows, product goals, rough design direction, and a clear idea of what the app should do, it can get you to a working prototype pretty fast.

The app will probably look like every other AI-coded app with Lucide icons everywhere, beautiful cards, gradient buttons and the "AI Vibe Coded Standard" dashboard that's on every tool now.

It's pretty clear which tools were built with AI. But Lucide icons are sexy and this was a playground. I’m keeping them.

The free credits / free tier situation has also been very founder-friendly so far. Google clearly wants people building AI apps on their stack, and I am happy to let them subsidize my bad decisions and testing. The amount of money Google is ready to give people to just build whatever is pretty crazy. Take advantage of that.

I consider myself an AWS fanboy. But I blew through my AWS credits on my main startup, so that one is now hitting my credit card. Really happy that I can use Google credits on my second project.

But Firebase is where things get interesting

Firebase is great for moving fast.

Auth is easy. Firestore is easy. All the logic being in frontend code is easy (lmao). 

You can get users signing up, saving data, and using your app very quickly.

That ease is also the trap.

The pattern I keep seeing is:

>“Users can read and write their own data once authenticated.”

That sounds reasonable enough.

And for some data, it is. Although honestly, I'd recommend it for no data. Client-side logic and write access will be the death of many of these "startups".

Especially when people are storing things like this in user-owned docs:

  • plan
  • role
  • creditsRemaining
  • monthlyUsage
  • subscriptionStatus
  • aiStatus
  • aiLimit
  • isAdmin

And suddenly the user can write to fields that decide what they are allowed to do.

That is where things get sketchy.
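To make that concrete, here is a sketch of what tightened Firestore rules for a user doc could look like, using the field names from the list above and assuming a `users/{userId}` layout (this is an illustration, not SignalReach's actual rules). The idea: the client can read and update its own doc, but any write that touches a privileged field is rejected, and only the backend (the Admin SDK bypasses rules entirely) sets those fields:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // The common "feels secure" pattern the quote above describes:
    //   allow read, write: if request.auth != null && request.auth.uid == userId;
    //
    // Tighter version: client updates may not touch privileged fields.
    match /users/{userId} {
      allow read: if request.auth != null && request.auth.uid == userId;
      allow update: if request.auth != null
        && request.auth.uid == userId
        && !request.resource.data.diff(resource.data).affectedKeys().hasAny(
             ['plan', 'role', 'creditsRemaining', 'monthlyUsage',
              'subscriptionStatus', 'aiStatus', 'aiLimit', 'isAdmin']);
      allow create, delete: if false; // backend only
    }
  }
}
```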

The problem is not always “can Bob read Alice’s data?”

Most people figure that part out.

The problem is:

  • Can Bob update Bob’s own credits?
  • Can Bob mark himself as paid?
  • Can Bob reset his usage?
  • Can Bob create a bunch of AI jobs directly from the browser?
  • Can Bob change a job status and trick your UI into trusting it?

That is the stuff that gets missed. And it is missed ALL THE TIME.

Security by obscurity only goes so far when this is the first thing I look for in any new tool.

I’ve now seen a bunch of small projects and a few real startups fall into some version of this. Not because they are dumb. Because Firebase makes it really easy to build something that feels secure enough when it is not.

Client-side write access, even if it's as locked down as it can be, can ruin your company.

My rule now for Firebase

The frontend can ask.

The backend decides.

For anything involving billing, credits, usage, roles, AI jobs, exports, subscriptions, or limits, the client should not be the source of truth. And that should extend to literally any mutation of data. 

My safer flow is basically:

client requests action
client does basic validation (UX only, not trusted)
backend checks auth
backend checks ownership
backend checks plan
backend checks quota
backend checks rate limits
backend does sanitization
backend does validation
backend creates the job
backend queues the job
backend updates usage
backend updates the result
client reads the result (I'm letting getter functions slide for now)
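The decision steps in that flow can be sketched as pure TypeScript, independent of any Firebase SDK. All names here (`authorizeJob`, `PLAN_LIMITS`, the specific limit values) are illustrative assumptions, not SignalReach's actual code:

```typescript
// Pure "backend decides" logic: auth -> ownership -> quota -> rate limit
// -> sanitization -> validation. The caller would run this inside a
// Cloud Function after verifying the ID token.

interface UserRecord {
  uid: string;
  plan: "free" | "pro";
  creditsRemaining: number;
  requestsThisMinute: number;
}

interface JobRequest {
  ownerUid: string; // owner of the resource the job touches
  prompt: string;
}

const PLAN_LIMITS = { free: 5, pro: 60 }; // requests/minute, illustrative

function authorizeJob(
  user: UserRecord | null,
  req: JobRequest
): { ok: boolean; reason?: string } {
  if (!user) return { ok: false, reason: "unauthenticated" };        // auth
  if (req.ownerUid !== user.uid)
    return { ok: false, reason: "not owner" };                       // ownership
  if (user.creditsRemaining <= 0)
    return { ok: false, reason: "no credits" };                      // quota
  if (user.requestsThisMinute >= PLAN_LIMITS[user.plan])
    return { ok: false, reason: "rate limited" };                    // rate limit
  const prompt = req.prompt.trim();                                  // sanitization
  if (prompt.length === 0 || prompt.length > 4000)
    return { ok: false, reason: "invalid prompt" };                  // validation
  return { ok: true };
}
```

Note the client's claimed plan or credits never appear as inputs; everything comes from the server-side user record.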

Not as sexy as typing "build me an app that uses AI to find reddit users" (like the 100 of you today who "built" that) and having it work right out of the gate. But it is very necessary.

Backend processing of data is the difference between a real tool and giving me access to your credit card.

Stuff I now watch for

Here are the big Firebase / AI Studio traps I’d check before shipping anything real:

  • Firestore rules left in test mode
  • rules that say any logged-in user can write
  • users being able to update their own plan, role, credits, or usage
  • frontend code calling AI APIs directly with exposed keys
  • no rate limits on expensive AI actions
  • AI prompts being stored client side
  • users being able to create AI job docs directly
  • no idempotency, so double-clicks or retries run jobs twice
  • backend functions that trust whatever the client sends
  • model output getting saved without schema validation
  • exports sitting in public storage forever
  • demo mode accidentally using real paid AI calls
  • I'm sure there's more, but I'm really just blabbing at this point
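The idempotency bullet is the one I see skipped most. A minimal sketch of an idempotency key, with an in-memory map standing in for what would really be a Firestore doc keyed by the client-supplied key (all names illustrative):

```typescript
// Idempotency sketch: a retry or double-click with the same key returns
// the original job id instead of running the (expensive) job again.

const completedJobs = new Map<string, string>(); // idempotencyKey -> jobId

function createJobOnce(idempotencyKey: string, runJob: () => string): string {
  const existing = completedJobs.get(idempotencyKey);
  if (existing !== undefined) return existing; // already ran: reuse result
  const jobId = runJob();                      // only executes the first time
  completedJobs.set(idempotencyKey, jobId);
  return jobId;
}
```

In a real backend the "check then record" step would be a transaction, so two concurrent retries can't both pass the check.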

The main lesson:

Vibe coding can absolutely get you to a real app faster than anything I’ve used before.

But it can also make the app look way more finished than it actually is.

The UI can be polished. The auth can work. The dashboard can look gorgeous. The AI can generate cool stuff.

And the whole thing can still be quietly trusting the user way too much.

AI can write the code, but it does not own the consequences. You do.

There's a reason developers took months to build things before AI, and a reason we went to school for this stuff. AI vibe coding builds the tip of an iceberg. And it's a really sexy, cool-looking tip. But there's still so much more that needs to be built and thought through before anyone should launch a tool.

Anyway, this is post 1 of however many of these I end up writing while building SignalReach. This was meant to be very high level, and I plan to go deeper in the next posts.

Probably will talk about how to build better and more securely next, as well as how to properly implement your AI features so that you're not giving me access to run my AI jobs on your credit card...

Curious if anyone else building with Firebase / AI Studio has seen similar issues, or if you built with Firebase and didn't realise your tool has them.

Also happy to share more on any part since this was so high level, especially auth, Firestore rules, AI job queues, credits, etc.

Some future topics I plan to go into:

  • "Codex (the good, the bad, & the ugly)"
  • "AI prompts and how your customers should get access"
  • "Validation, sanitization, AI hijacking, and other security vulnerabilities"
  • "Getters, setters, data organization, access controls, and more"
  • "The benefits of PRD-driven development for AI coding"

Any other topics you want, lmk, and I can see what I can do.

u/AwareNetJake — 15 days ago