u/Glittering-Flow-3203

Genuinely embarrassing situation. I delivered a product video last week, 5 clips total, all supposed to show the same female character holding the product from different angles.

Client messaged me: "why does she look like 3 different people in this?"

I tried the same seed, same negative prompt, same detailed character description for every clip. Still, the face shifts slightly, the hair color is different in two clips, and the skin tone is slightly off in one.

I know this is a known problem with AI video, but I've never found a proper solution for client work. It looks very unprofessional & I can't keep delivering this.

From what I've read, some models handle face stability better than others. But I've never done a proper side-by-side test to confirm which one actually works.

Anyone here who has solved this for real client deliverables? What is your actual process, not just theory?

u/Glittering-Flow-3203 — 11 days ago

I do video content for ecom brands, mostly product ads & UGC-style clips. Been doing this for about a year with AI video tools.

For a long time my process was: pick one model, generate, hope for the best, deliver. If the client didn't like it, regenerate on a different model. Very slow, very expensive.

A few months back I changed my workflow completely. Now I use Vosu AI, mostly because it lets me run 3 generation tasks at the same time across different models. So I can send the same prompt to Kling 3.0 Pro, Veo 3.1 & Seedance 2.0 simultaneously & compare the real outputs side by side before I decide what to send the client.

This is a big change because now the comparison is actual & fair: same prompt, same time, different model results. Before, I was comparing models across different days & sessions, which is not accurate at all.
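
If anyone wants to reproduce the same pattern outside a single tool, the idea is simple: fire one prompt at every model at once and collect the results for review together. Below is only a rough sketch of that pattern; generate_clip() is a made-up placeholder, not Vosu's real API or any model's official SDK:

```python
import asyncio

# Hypothetical sketch only: the real submission call is whatever your tool or
# API exposes. The point is the pattern: one prompt, several models, fired at once.
MODELS = ["kling-3.0-pro", "veo-3.1", "seedance-2.0"]  # names as used above

async def generate_clip(model: str, prompt: str) -> str:
    # placeholder for a real API request; here we just simulate the wait
    await asyncio.sleep(1)
    return f"{model}: clip generated for '{prompt[:40]}...'"

async def compare_models(prompt: str) -> list[str]:
    # launch all generations concurrently and wait for every result
    tasks = [generate_clip(m, prompt) for m in MODELS]
    return await asyncio.gather(*tasks)

if __name__ == "__main__":
    prompt = "Female model holds the product at eye level, soft studio light"
    for result in asyncio.run(compare_models(prompt)):
        print(result)
```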

For product ads, Kling 3.0 Pro is giving me the best motion consistency right now. For a more cinematic look, Veo 3.1's output is in a different league. Seedance 2.0 I use when face or character stability is needed.

Curious if other people are doing similar parallel testing, or do you just stick to one model you trust?

u/Glittering-Flow-3203 — 14 days ago

I am using AI video generation for my brand content & product shoots. The output quality is getting really good now, but prompt writing honestly takes me longer than expected.

Sometimes I spend 20-30 minutes just trying to describe the camera angle, lighting, character movement, background, mood. Then the output still misses what I imagined. Then I rewrite. Then regenerate. This loop takes hours.
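
To be concrete, those are the fields I end up re-typing every time. Something like the sketch below is what I imagine a "system" might look like; the field names and structure are just my own guess, not taken from any tool's documentation:

```python
from dataclasses import dataclass

# Hypothetical prompt template: one way to stop rewriting the same
# fields from scratch for every clip. Field names are my own assumption.
@dataclass
class VideoPrompt:
    subject: str
    action: str
    camera: str
    lighting: str
    background: str
    mood: str

    def render(self) -> str:
        # join the fields in a fixed order so every prompt reads the same way
        return (
            f"{self.subject}, {self.action}. "
            f"Camera: {self.camera}. Lighting: {self.lighting}. "
            f"Background: {self.background}. Mood: {self.mood}."
        )

print(VideoPrompt(
    subject="Young woman holding a skincare bottle",
    action="slowly turns the bottle toward the lens",
    camera="slow push-in, eye level, 35mm look",
    lighting="soft window light from the left",
    background="clean beige studio backdrop",
    mood="calm, premium, minimal",
).render())
```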

I see some people post really clean outputs & I always wonder: do they have some system for building prompts, or are they just naturally good at this?

Any tips on how you approach prompt writing for video generation? Especially for product & ecom-type content.

u/Glittering-Flow-3203 — 15 days ago