If you've been trying to figure out which hosted AI video model actually wins for your use case, you know the problem: subscribing to all four to compare them costs $80-150 a month minimum, and even then you're running them in four separate tabs with slightly different prompts and slightly different seeds, so the comparison is never really fair.
I spent last weekend trying to do exactly this - compare Kling 3.0, Seedance 2.0, Veo 3.1, and Wan 2.5 on the same input. By the time you set up the third platform, you've forgotten what you typed into the first one. So I tried doing it in a node-based tool instead, and it actually worked. Here's my simplest example.
The setup looks like this:
Source image → 4 parallel branches → Kling 3.0, Seedance 2.0, Veo 3.1, Wan 2.5 → 2x2 grid output
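If you want the same fan-out pattern outside a node UI, here's a minimal Python sketch of the structure: one source, four parallel branches, one 2x2 grid. The model calls are hypothetical stand-ins (each hosted provider has its own real API); the point is that every branch gets the identical source and seed, which is exactly the apples-to-apples setup the node graph gives you.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a hosted model call -- real provider APIs differ.
# It just returns a labeled "frame" so the grid structure is visible.
def fake_generate(model: str, source: str, seed: int) -> str:
    return f"{model}(src={source}, seed={seed})"

MODELS = ["Kling 3.0", "Seedance 2.0", "Veo 3.1", "Wan 2.5"]

def compare(source: str, seed: int = 42) -> list[list[str]]:
    """Fan one source out to all four models in parallel,
    then arrange the results in a 2x2 grid (row-major)."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        # Same source and same seed on every branch = fair comparison
        results = list(pool.map(lambda m: fake_generate(m, source, seed), MODELS))
    return [results[0:2], results[2:4]]

grid = compare("portrait.png")
for row in grid:
    print(row)
```

The parallel map is why the node version feels staggered: the branches finish whenever each model does, and the grid only assembles once the slowest one returns.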
What I noticed in the comparison: Kling kept the face most natural (thanks for that), Seedance had the smoothest hair physics and looked the best overall, Veo handled rain reflections best (the rest, RIP), and Wan came out the most stylized. None of them was a clear winner - it depended on what you were optimizing for.
What this isn't: sampler control, latent manipulation, ControlNet at the layer Comfy gives you. If you need that, this isn't the tool. This is closer to Krea Nodes or Flora - pipeline assembly, not parameter-level control.
What this is: the only tool I've found where you can route four hosted video models from one source node and get a real apples-to-apples comparison. I was also able to generate images and build more complex workflows.
Still rough in places - the UI irritated me at times. Render times vary across models, which makes the parallel run feel staggered. There's no way to lock seeds across different model nodes yet (that would be huge for fair comparison).
Would love feedback if anyone here has tried it, and any tips on how to use it better.
edit: forgot to mention I built this using Canvas in Higgsfield AI