This is an [update] post, since many members asked me to keep sharing what's happening with the audio visualizer app.
This whole thing started because I was generating tracks with Gemini music and wanted to upload them properly, but I kept hitting the same problem: making AI music look finished took more effort than making the track itself.
Most visualizer tools I tried stopped just short of being actually useful: they had limits, watermarks, or weak export, or just felt like they were built for quick demos instead of something you would actually publish.
So I built a small visualizer for myself first.
At that point I genuinely had no plan beyond using it locally for my own uploads.
Then I wrote a post here and shared the early version. From that day on, https://shimga.app has not stayed the same.
A few people immediately asked things like:
- can it compete with existing tools
- can it support lyrics later
- why export takes time
- why mobile layout feels off
- how it behaves on different devices
So I kept working directly from those comments.
I changed the UI, improved the rendering flow, added a simple sign-up with Google auth, and later added analytics, because at first I had no idea how many people were actually opening it.
What surprised me most was that even before analytics, traffic already looked much bigger than expected: probably 700+ unique visitors passed through.
After analytics and sign-up were added, behavior became more interesting: people were staying long enough that it looked like they were either exporting something or seriously exploring.
It is now averaging 30+ daily active users, with repeat usage every day, which I honestly did not expect this early.
The project is still free right now, and rendering is still local, so export speed depends a lot on device power.
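For anyone wondering why local export is tied to device power (and why export takes time): a browser visualizer typically draws frames to a canvas and records them in real time with MediaRecorder, so the encode runs on your machine at playback speed. Here is a minimal, hypothetical sketch of that pattern; the function name and the webm mime type are my own illustrative choices, not Shimga's actual code.

```ts
// Hypothetical sketch of client-side export, not Shimga's actual code:
// capture a canvas visualizer plus the track's audio, record with MediaRecorder.
async function exportVisualizer(
  canvas: HTMLCanvasElement,
  audio: HTMLAudioElement,
): Promise<Blob> {
  // Capture the canvas as a 30 fps video stream.
  const videoStream = canvas.captureStream(30);

  // Route the audio element through Web Audio so it can be mixed into the recording.
  // (Connecting only to the stream destination records it silently, without speaker output.)
  const audioCtx = new AudioContext();
  const source = audioCtx.createMediaElementSource(audio);
  const dest = audioCtx.createMediaStreamDestination();
  source.connect(dest);

  const stream = new MediaStream([
    ...videoStream.getVideoTracks(),
    ...dest.stream.getAudioTracks(),
  ]);

  // Browser support for mime types varies; webm is the safest common choice.
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
  const chunks: Blob[] = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);

  return new Promise((resolve) => {
    recorder.onstop = () => resolve(new Blob(chunks, { type: "video/webm" }));
    recorder.start();
    audio.play(); // recording happens in real time, so export lasts about as long as the track
    audio.onended = () => recorder.stop();
  });
}
```

The upside of an approach like this is zero server cost; the downside is that a long track on a weak device means a long export.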
A few people mentioned that cloud rendering would make sense eventually, and I agree: that is probably the direction if usage keeps growing, maybe as a lightweight pay-as-you-go option later, since rendering cost would become real at scale.
What still interests me most is that AI music generation keeps getting easier, but the step after generation — turning it into something publishable fast — still feels strangely underbuilt.
That gap is why this exists.
Did anyone else here hit the same point where generating the music became easy, but packaging it for release still felt slower than it should be?