u/1TripleRice

This is an [update] post, since many members asked me to keep sharing what's happening with the audio visualiser app.

This whole thing started because I was generating tracks with Gemini and wanted to upload them properly, but I kept hitting the same problem: making AI music look finished took more effort than making the track itself.

Most visualizer tools I tried stopped right before being actually useful: they had limits, watermarks, or weak export, or just felt like they were built for quick demos instead of something you would actually publish.

So I built a small visualizer for myself first.

At that point I genuinely had no plan beyond using it locally for my own uploads.

Then I wrote a post here and shared the early version. From that day on, https://shimga.app hasn't been the same.

A few people immediately asked things like:

  • can it compete with existing tools
  • can it support lyrics later
  • why export takes time
  • why mobile layout feels off
  • how it behaves on different devices

So I kept working directly from those comments.

I changed the UI, improved rendering flow, added simple sign-up with Google auth, and later added analytics because at first I had no idea how many people were actually opening it.

What surprised me most was that before analytics, traffic already looked much bigger than expected — probably 700+ unique visitors passed through.

After analytics and sign-up were added, behavior became more interesting: people were staying long enough that it looked like they were either exporting something or seriously exploring.

Now it averages 30+ daily active users, with repeat usage every day, which I honestly did not expect this early.

The project is still free right now, and rendering is still local, so export speed depends a lot on device power.
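For anyone curious what "local rendering" roughly involves, here is a minimal sketch of how a visualizer might turn raw audio samples into per-video-frame spectrum bars. This is a hypothetical illustration, not the app's actual code; the sample rate, FPS, and bar count are assumptions, and a sine wave stands in for a real audio file.

```python
import numpy as np

SAMPLE_RATE = 44100   # audio samples per second (assumed)
FPS = 30              # video frames per second (assumed)
N_BARS = 16           # spectrum bars drawn per frame (assumed)

def spectrum_frames(samples, sample_rate=SAMPLE_RATE, fps=FPS, n_bars=N_BARS):
    """Split audio into one window per video frame and return
    n_bars averaged FFT magnitudes per window."""
    hop = sample_rate // fps                  # samples per video frame
    n_frames = len(samples) // hop
    frames = []
    for i in range(n_frames):
        window = samples[i * hop:(i + 1) * hop]
        # Hann window then real FFT to get the magnitude spectrum.
        mags = np.abs(np.fft.rfft(window * np.hanning(len(window))))
        # Average the spectrum down to n_bars bands for drawing.
        bands = np.array_split(mags, n_bars)
        frames.append([band.mean() for band in bands])
    return frames

# Demo on one second of a 440 Hz sine wave instead of a real file.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
audio = np.sin(2 * np.pi * 440 * t)
frames = spectrum_frames(audio)
print(len(frames))  # one entry per video frame
```

Every window here costs an FFT plus a draw call, once per video frame, which is why export time scales with device power when everything stays on the client.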

A few people mentioned that cloud rendering would make sense eventually, and I agree. If usage keeps growing, that is probably the direction, maybe as a lightweight pay-as-you-go option later, since rendering cost would become real at scale.

What still interests me most is that AI music generation keeps getting easier, but the step after generation — turning it into something publishable fast — still feels strangely underbuilt.

That gap is why this exists.

Did anyone else here hit the same point where generating the music became easy, but packaging it for release still felt slower than it should?

reddit.com · u/1TripleRice — 17 days ago

I was generating tracks with Gemini and needed a clean way to turn them into proper YouTube visuals, but every free tool I tried felt incomplete — watermark, weak export, limited control, outdated UI, or just bad rendering.

There are solid products out there, but most serious options are paid, and the free ones usually stop right before they become actually useful.

So I built one for myself.

Right now it runs fully offline on my own machine — no cloud, no uploads, no dependencies, everything local. Drop audio in, tweak visuals, render, done.

Funny part: I built it because I needed it, and now I’m the main user stress-testing it every day while making music.

Still v1, but already smooth enough that I stopped using other tools.

The interesting part is that AI music generation is becoming easy, but the layer after that — making it publishable — still feels underserved.

I’m thinking of buying a domain and releasing it free or donation-based first.

Feels like this could become useful for more people making AI music.

reddit.com · u/1TripleRice — 25 days ago