
I got so tired of AWS IAM permissions that I decided to compete with a tech giant. Here is my S3 alternative for temporary files
Let me start by stating the obvious: trying to compete with Amazon S3 is objectively a terrible idea.
If you need to store terabytes of permanent, archival data, S3 is a miracle. But if you are just trying to route temporary files in a new app or automation workflow, it is a bloated nightmare.
Whenever I build an automation workflow or an app that needs simple storage with a public URL, I get stuck.
The default advice is always "just throw it in S3." But doing that instantly kills my shipping momentum:
- I have to create a bucket and carefully disable public access blocks
- I have to write custom JSON IAM policies just to avoid 403 errors
- I have to configure CORS so the frontend doesn't crash
- I have to set up a CloudFront distribution
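For a sense of what that second step involves: even the minimal "make objects publicly readable" bucket policy means hand-writing standard AWS policy JSON like this (bucket name is a placeholder) before a single URL works:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
```

And that is before CORS rules and the CloudFront distribution on top of it.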
The "Lazy" Alternative
I got so tired of wasting time on cloud infrastructure instead of building product features that I built a bypass called Upload to URL.
It does significantly less than AWS, but it does exactly what some devs actually need:
- Simple input: Send a POST request (or use the native Zapier/Make/n8n nodes).
- Instant delivery: Get a clean, 100% public CDN link back in under 2 seconds.
- Auto-cleanup: Set an expiry (1, 7, or 30 days). Once the time is up, the file self-destructs automatically.
You just generate the link, pass it to your API, and let it delete itself tomorrow.
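From a script, that flow could look something like this. This is a minimal sketch: the endpoint URL, the `expires_days` parameter, and the JSON response shape are my assumptions for illustration, not the product's documented API.

```python
# Hypothetical sketch: POST a temporary file, get back a public CDN URL.
# Endpoint, query parameter, and response field are assumptions.
import json
import urllib.request

UPLOAD_ENDPOINT = "https://example.com/upload"  # placeholder, not the real URL


def validate_expiry(days: int) -> int:
    """Expiry must be one of the offered tiers: 1, 7, or 30 days."""
    if days not in (1, 7, 30):
        raise ValueError("expiry must be 1, 7, or 30 days")
    return days


def upload_temp_file(data: bytes, expires_days: int = 7) -> str:
    """POST raw bytes, return the public URL from the JSON response."""
    req = urllib.request.Request(
        f"{UPLOAD_ENDPOINT}?expires_days={validate_expiry(expires_days)}",
        data=data,
        method="POST",
        headers={"Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())["url"]  # assumed response field
```

In an automation tool you would use the native node instead, but the point stands either way: the whole integration is one request, no buckets or policies.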
I know building a micro-SaaS to take on Amazon sounds ridiculous, but I refuse to believe I'm the only founder who despises configuring infrastructure just to route temporary files.
Tell me in the comments whether this is a tool worth building.