u/markyonolan

I got so tired of AWS IAM permissions that I decided to compete with a tech giant. Here is my S3 alternative for temporary files
▲ 115 r/indiehackersindia +2 crossposts

Let me start by stating the obvious: trying to compete with Amazon S3 is objectively a terrible idea.

If you need to store terabytes of permanent, archival data, S3 is a miracle. But if you are just trying to route temporary files in a new app or automation workflow, it is a bloated nightmare.

Whenever I build an automation workflow or an app that needs simple storage with a public URL, I get stuck.

The default advice is always "just throw it in S3." But doing that instantly kills my shipping momentum:

  • I have to create a bucket and carefully disable public access blocks
  • I have to write custom JSON IAM policies just to avoid 403 errors
  • I have to configure CORS so the frontend doesn't crash
  • I have to set up a CloudFront distribution
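
For scale, here is roughly what two of those bullets mean in practice: the public-read bucket policy and the CORS rules, sketched as Python dicts (the bucket name and origin are placeholders, not a real setup):

```python
import json

BUCKET = "my-temp-files"  # placeholder bucket name

# The bucket policy you end up hand-writing just to stop the 403s
# on publicly readable objects.
public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}

# The CORS rules so a browser frontend can actually fetch the files.
cors_rules = [{
    "AllowedMethods": ["GET"],
    "AllowedOrigins": ["https://app.example.com"],  # placeholder origin
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3600,
}]

print(json.dumps(public_read_policy, indent=2))
```

And that is before CloudFront even enters the picture.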

The "Lazy" Alternative

I got so tired of wasting time on cloud infrastructure instead of building product features that I built a bypass called Upload to URL.

It does significantly less than AWS, but it does exactly what some devs actually need:

  1. Simple input: Send a POST request (or use the native Zapier/Make/n8n nodes).
  2. Instant delivery: Get a clean, 100% public CDN link back in under 2 seconds.
  3. Auto-cleanup: Set an expiry (1, 7, or 30 days). Once the time is up, the file self-destructs automatically.

You just generate the link, pass it to your API, and let it delete itself tomorrow.
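
For devs wiring this by hand instead of using the no-code nodes, the flow might look like this in Python. The endpoint URL, query parameters, and response field below are my guesses at how an API like this could look, not the actual spec:

```python
import json
import urllib.request

ALLOWED_EXPIRY_DAYS = (1, 7, 30)  # the expiry options mentioned above

def upload_to_url(data, filename, expiry_days=1,
                  endpoint="https://example.com/api/upload"):
    """POST raw bytes, get back a temporary public URL.

    The endpoint, query params, and response shape are hypothetical
    placeholders; check the real API docs before relying on them.
    """
    if expiry_days not in ALLOWED_EXPIRY_DAYS:
        raise ValueError(f"expiry_days must be one of {ALLOWED_EXPIRY_DAYS}")
    req = urllib.request.Request(
        f"{endpoint}?filename={filename}&expiry_days={expiry_days}",
        data=data,
        method="POST",
        headers={"Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["url"]
```

One POST in, one public link out, and the cleanup is already scheduled.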

I know building a micro-SaaS to take on Amazon sounds ridiculous, but I refuse to believe I'm the only founder who despises configuring infrastructure just to route temporary files.

Tell me in the comments if this is a worthy tool that I should be building.

u/markyonolan — 19 hours ago
▲ 6 r/nocode

I built a simple S3 alternative for no-code devs who refuse to learn AWS just to get a public file URL

Let me know if this sounds familiar:

You’re building a workflow in Make, Zapier, or n8n. Everything is connecting perfectly until you try to pass an image to an AI module, or push a file to a CRM.

Suddenly, it throws an error. It doesn't want the actual file - it strictly wants a Public URL.

  • Drive and Dropbox links get rejected (they aren't true raw public URLs)
  • The forums tell you to "just use S3."
  • Two hours later, you’re drowning in AWS Bucket Policies and CORS configs just to host a single JPEG.

And even if you get it working, you have to build annoying "delay and delete" loops in your automation, or your storage slowly fills up with thousands of useless workflow files.

The "Lazy" Fix

I got so frustrated with this bottleneck that I built a bypass called Upload to URL.

It’s an S3 alternative that requires zero cloud infrastructure knowledge. It just does the one thing we actually need:

  1. Drop your file into the native Make, Zapier, Pipedream, or n8n module.
  2. Get a clean, raw, 100% public CDN link in under 2 seconds.
  3. Auto-cleanup: You set an expiry (1, 7, or 30 days). Once time is up, the file self-destructs.

How are the rest of you handling temporary file routing right now?

u/markyonolan — 21 hours ago

I never thought a single custom node would save me from so many cloud storage headaches :)

Ok here is the backstory...

I build a lot of automations, especially multi-agent setups and workflows that require passing files to AI vision models (like OpenAI) or sending drafts for client review.

But here are the two major problems I kept hitting whenever I had to handle binary data:

  1. Permanent storage trap. To pass a file via API, you usually need a public URL. This meant I was constantly setting up AWS S3 buckets, dealing with annoying IAM permissions, or cluttering up my Google Drive. It was complete overkill just to hold a file for 5 minutes during a workflow execution, and I constantly had to do manual "garbage collection" to clean up my storage.
  2. The 1-hour expiry bottleneck. I tried using free temporary hosts (like tmpfiles or ImgBB) to bypass S3. But ImgBB only does images, and tools like tmpfiles force a hard 1-hour expiry. If my workflow had a delay, or if I needed a human to approve an AI-generated asset in Slack, the link was dead by the time they clicked it, and the workflow failed.

So I sat down and asked myself what simple thing I could build to solve both issues in one shot.

I built a lazy, frictionless community node (now verified) called Upload to URL. It simply takes any binary data in your n8n workflow and instantly converts it into a public CDN link.

https://preview.redd.it/ll25mg8riuwg1.png?width=2906&format=png&auto=webp&s=712514a2c84799d8480bd54b12b1f0e6c028c1af

But the best part is the garbage collection is handled for you: you can set the URL to self-destruct after 1 day, 7 days, or 30 days.

And trust me, things improved a lot. My workflows are incredibly clean now, without the AWS permission headaches.

I’m genuinely happy that I’m using my skills to fix my own systems too.

For people who prefer simplicity over complexity, you can try the 'Upload to URL' verified node.

You might suggest I just use the standard Google Drive node and delete the file at the end of the workflow. But if a workflow fails halfway through, that delete node never triggers, and you are left with junk data forever. Expiring links are a simple, frictionless pattern that keeps things clean no matter how the run ends.
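
To make that failure mode concrete, here is a toy in-memory sketch in Python (the store class is purely illustrative, not any real service). A delete-at-the-end step is skipped the moment anything upstream throws, while a TTL set at upload time survives the crash:

```python
import time

class ToyStore:
    """Illustrative in-memory store: file id -> expiry time (None = keep forever)."""
    def __init__(self):
        self.files = {}
        self._next_id = 0

    def upload(self, data, ttl_seconds=None):
        fid = self._next_id
        self._next_id += 1
        self.files[fid] = None if ttl_seconds is None else time.time() + ttl_seconds
        return fid

    def delete(self, fid):
        self.files.pop(fid, None)

    def purge_expired(self, now=None):
        now = time.time() if now is None else now
        self.files = {f: exp for f, exp in self.files.items()
                      if exp is None or exp > now}

def flaky_step(fid):
    raise RuntimeError("workflow died halfway through")

store = ToyStore()

# Drive-style pattern: upload, process, delete at the end.
fid = store.upload(b"draft")
try:
    flaky_step(fid)
    store.delete(fid)            # never reached: the exception skips it
except RuntimeError:
    pass

# TTL pattern: the store cleans up by itself, crash or no crash.
fid2 = store.upload(b"draft", ttl_seconds=60)
try:
    flaky_step(fid2)
except RuntimeError:
    pass

store.purge_expired(now=time.time() + 3600)  # pretend an hour passes
print(len(store.files))  # the permanent orphan remains; the TTL file is gone
```

The manual-delete pattern leaves one orphan behind; the TTL upload disappears on its own.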

Thanks for reading.

u/markyonolan — 24 hours ago
▲ 1 r/SaaS

Vibe coding gets you to launch, but it doesn't prepare you for the real market test. I had to switch to Claude Code and learn DevOps overnight just to keep my app online

The AI-builder honeymoon phase is real, but it ends the exact second you hit production.

I recently launched a file hosting API. I used the "vibe coding" approach heavily to get the MVP out the door. It was fast, it was fun, and it gave me a false sense of security. I thought the heavy lifting was done.

Then the botnets arrived.

Almost overnight, the app started crashing under malicious traffic. I quickly realized that AI is amazing at generating standard components, but it won't jump in and secure your servers during an active attack. You can't "vibe" your way out of a melting infrastructure.

I had to put my head down, switch to using Claude Code strictly for deep infrastructure refactoring, and teach myself Cloudflare security rules on the fly so I wouldn't lock out my actual users.

The Marketing Accident

The silver lining of this DevOps nightmare was that I had to completely abandon my marketing efforts for 14 days. I didn't send a single outreach message.

To my surprise, those two weeks brought in my highest signup numbers ever, pushing us to 6 paid subscribers (100% organic).

https://preview.redd.it/7hj81n6ccawg1.png?width=1080&format=png&auto=webp&s=2392699ad1f088fdf6dfaf106591e126d20633fd

When I looked at the data, my manual promotion was doing nothing. The users actually converting were finding me through Search and ChatGPT recommending my tool for their automation workflows.

If you are building an MVP with AI right now, take the head start.

It’s an incredible advantage. But be prepared for the moment when the "vibe" stops and the actual operations begin. You still have to run the servers, and you still have to figure out where your real users are coming from.

u/markyonolan — 4 days ago

Vibe coding gets you to launch, but it doesn't prepare you for the real market test. I had to switch to Claude Code and learn DevOps overnight just to keep my app online

People talk a lot about shipping apps in a weekend with AI.

Launching is the easy part. No one warns you about what happens when real traffic actually hits.

I built my Micro-SaaS mostly by vibe coding.

Honestly - when I launched it - I felt like a Genius :)

But, my feeling was short-lived when reality hit.

The Bot Attacks & Reality Check

As soon as the app got some traction, botnets hammered it. The whole thing went down completely, multiple times.

You can't just prompt your way out of a server attack. Vibe coding is great for building UI and standard features, but it falls apart when you need to handle server security under fire. I had to pivot hard:

  • I stopped basic prompting and switched to Claude Code as a dedicated coding agent to handle the heavy refactoring.
  • I got thrown into an overnight crash course on DevOps. I had to learn Cloudflare rules on the fly to block malicious bots without locking out my actual users or breaking the API.
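
I won't pretend my exact Cloudflare rules are shareable, but the shape of the fix is basically per-IP rate limiting. Here is the idea as a toy token bucket in Python (the thresholds are made up for illustration):

```python
import time

class TokenBucket:
    """Toy per-IP rate limiter: the same idea a WAF rate-limiting rule applies."""
    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec      # tokens refilled per second
        self.burst = burst            # max requests allowed in a burst
        self.buckets = {}             # ip -> (tokens, last_seen_time)

    def allow(self, ip, now=None):
        now = time.time() if now is None else now
        tokens, last = self.buckets.get(ip, (self.burst, now))
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens < 1:
            self.buckets[ip] = (tokens, now)
            return False              # over the limit: block
        self.buckets[ip] = (tokens - 1, now)
        return True                   # under the limit: allow

limiter = TokenBucket(rate_per_sec=1, burst=5)  # made-up thresholds
results = [limiter.allow("203.0.113.9", now=100.0) for _ in range(7)]
print(results)  # five allowed (the burst), then blocked
```

The trick is tuning `rate_per_sec` and `burst` so bots get cut off without real users ever noticing, which is exactly the balancing act with Cloudflare rules too.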

The "No Promo" Accident

While I was busy putting out server fires, I didn't do a single bit of marketing or outreach for two full weeks. I figured my signups would drop significantly.

Instead, I hit my highest weekly signup count ever.

https://preview.redd.it/2ivqti0ks9wg1.png?width=1444&format=png&auto=webp&s=56dfded00281a62e7a6c07905c5490b024c9329b

It turned out my manual promotions weren't moving the needle much. My users were already finding me organically through:

  • Google Search
  • Niche communities
  • ChatGPT/Gemini

Where things stand today

The infrastructure is finally stable enough to handle the volume, free users are highly active, and I have 6 paid subscribers. Every single one of them came in 100% organically.

My takeaways for anyone relying on AI to code:

  • Launch is day zero: AI gets you to the starting line faster, but you still have to run the race.
  • You can't skip the hard stuff: You will eventually have to learn how your infrastructure actually works when bad actors find your app.
  • Test your organic baseline: Stop your marketing for a week and see what happens. You might be wasting hours on promo when your users are already looking for you on search engines and AI tools.

Has anyone else hit this wall moving from "AI is magic" to "There's still so much to be done"?

u/markyonolan — 4 days ago

The hidden cost of shipping a SaaS in 3 days: my app couldn't handle the bot swarm

Everyone is talking about how fast you can build right now.

And it's true - I recently shipped the first version of a new micro-SaaS in basically three days using Claude Code. The speed is crazy. Honestly - I felt like a coding genius ;P

But nobody talks about what happens when you decide to launch on a server.

After a ton of effort, I ended up launching it on a bare AWS EC2 instance.

I didn't get a flood of users. I got a flood of automated vulnerability scanners probing for .env files and open ports. Within hours, my CPU was at 93%.

https://preview.redd.it/lejdrf6mirvg1.png?width=2472&format=png&auto=webp&s=d35e6a4c7a28d38d6a45834a5c85390a5ab28061

When you build at lightspeed, you end up cutting corners on the boring stuff. My front door was wide open, and the bots were eating up my database reads.

The "Ship Fast" checklist they don't tell you about:

  1. Never point DNS to a raw IP. Route it through a proxy immediately.
  2. Hard-lock your SSH access to your home IP in your security groups on day zero.
  3. Your database choice matters the second you get unpredictable traffic spikes (even if it's just bots).
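
For item 2, this is roughly the ingress rule I mean, expressed as the payload that boto3's `authorize_security_group_ingress` accepts (the IP and group ID are placeholders; build the dict and eyeball it before sending anything):

```python
HOME_IP = "203.0.113.7"                      # placeholder: your actual home IP
SECURITY_GROUP_ID = "sg-0123456789abcdef0"   # placeholder group ID

# SSH open to exactly one /32, instead of the 0.0.0.0/0 that scanners feast on.
ssh_ingress = {
    "GroupId": SECURITY_GROUP_ID,
    "IpPermissions": [{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{
            "CidrIp": f"{HOME_IP}/32",
            "Description": "SSH from home only",
        }],
    }],
}

# With boto3 (not run here):
#   import boto3
#   boto3.client("ec2").authorize_security_group_ingress(**ssh_ingress)
print(ssh_ingress["IpPermissions"][0]["IpRanges"][0]["CidrIp"])
```

One narrow rule on day zero, and the vulnerability scanners are knocking on a locked door.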

Shipping fast is great, but repairing a plane while it's nose-diving is incredibly stressful.

Best wishes Vibe Coders :)

u/markyonolan — 6 days ago