Udio Remixes don’t actually preserve your song… right?
Does anything like this actually exist yet in Udio or are we still waiting for better updates/models?
I’m trying to figure out if there’s a way to change the instrumentation or genre of a song without heavily changing the melody itself: keeping the same core song identity, vocals, hooks, and structure, but swapping out the production style around them.
From what I’ve noticed, lower variance settings are supposed to preserve more of the original structure and composition, but even then, when I remix something, the melody, vocal phrasing, or overall feel of the song can still shift quite a bit. Sometimes it works really well, but other times the result feels almost like a completely different track.
My main goal here is honestly just to experiment and hear how the same melody would sound across different genres. It might not even sound good in some cases, but that’s kind of the point for me: exploring what happens when you push ideas in different directions.
To be clear, I’m not talking about Extensions, since I already know you can continue a track and shift genre that way.
So I was wondering if there are actually any workflows, settings, or prompting tricks people use to preserve melodies more reliably, or if current AI music models aren’t really capable yet of cleanly separating composition from production in that way.
Basically, is there a proper way to do this already, or are we still waiting for future updates/models to make this more controllable?
One workaround I was considering: would recording my own voice singing the melody and uploading it help? That way I’d be performing the melody myself, giving the model a clearer reference instead of making it infer the melody from text.