r/seedance

Companies are collecting our intelligence to sell back to us in the future.

Companies are collecting our intelligence to sell back to us in the future. So many companies have figured out that humans are the most lucrative product, and they've been doing everything in their power to collect our thoughts, ideas, and behaviors. The data collection is disguised as fun dopamine hits, but the end goal is to make us reliant on social media, search engines, and AI tools. So dependent that we will eventually be buying back our own human intelligence. A buddy and I explored this theory in this video and would love to hear your thoughts on it.

u/Responsible-Jury2579 — 1 hour ago

Seedance 2.0 / skincare routine by Lucia / IGC routine realism

IGC skincare routine by AI character Lucia

Just finished this skincare concept. Focused on getting the overall realism and motion as natural as possible for a brand-ready look.

u/abstrctmulla — 15 hours ago

I’m creating an anime series featuring an Angel and her piglet companion

u/SupperTime — 1 day ago

Generated using Seedance 2.0 in Akool AI 🦍

Prompt used:
"Ultra realistic underground fight club fight scene No music only raw environment sounds No stabilization fully raw documentary feeling Lighting cinematic dim industrial overhead lamps male looks exactly same as the reference image 0-2 seconds Protagonist grabs opponent A from behind Single hand slam face into rusted metal barricade Betting chips coins scatter outward on impact Barricade scrapes forward heavy metal friction sound A collapses body limp no recovery Ultra realistic underground fight club fight scene continuation same male character as reference image 2-5 seconds Protagonist grabs opponent B quickly Muay Thai teep front kick drives B backward B doubles over exposing neck Clinch grab pulls head down into rising knee strike B face pressed into concrete floor Gradual stabilization half second recovery B lies face down breathing heavy exhaustion Sweat forming flowing clearly down jaw Underground fight club continuation same character reference image 5-6 seconds Vest opens revealing wrapped Muay Thai hand wraps underneath Rope belt loosened slightly displaced C and D enter from rusted side gate They split flanking from both sides Only footsteps crowd murmur heavy breathing sounds Underground fight club Muay Thai brawl continuation same character reference image 6-8 seconds Dual attack chaotic close combat D attacks first rapid aggressive crosses and hooks Balance shifts slight stumble single step C grabs vest pulls backward forcefully Protagonist absorbs hit rolls with impact Crowd noise swells Industrial hanging lamps swaying shadows crossing the floor Underground fight club continuation same character reference image 8-10 seconds Protagonist delivers sharp Muay Thai elbow strike to D temple D staggers holding face retreating away Protagonist catches C collar firmly Pulls into powerful knee strike to ribs Efficient Ong-Bak style minimal wind-up maximum devastation Underground fight club continuation same character reference image 10-12 seconds C slams back into rusted corrugated iron wall Single panel dents inward realistic detail Rust dust explodes golden brown particle cloud Particles floating rotating individually visible Snap back real time instantly C slides down collapsing motionless on concrete Clothing torn clear impact marks Dust continues floating warm industrial light Vest fabric moving natural inertia Underground fight club Muay Thai finale same character reference image 12-14 seconds D rushes forward full desperate speed Protagonist sidesteps pivots delivering Muay Thai spinning elbow Right elbow connects cheekbone full rotation force Sweat arcs visible mid air trajectory D crashes into crowd barrier chairs scatter violently Books objects replaced with crowd stumbling D lies within wreckage motionless defeated Hand reaches forward drops slowly Protagonist heavily breathing barely standing Underground fight club aftermath same character reference image 14-15 seconds Ground level vision blurred edges focus Warm amber industrial tone semi conscious haze Golden rust dust particles floating bokeh effect Protagonist silhouette standing center frame He turns walks toward the dark exit tunnel Vest sways hand wraps hanging loose moving Figure fades into dark cage background distance Fight pit empty silent aftermath remains Golden particles still floating slowly Cut end scene total stillness"

u/Artistic_Culture_873 — 24 hours ago

Okay, Seedance has gone berserk with realism

Hey peeps! I took the advice from my last post.

The main issues in my last post were that the background elements were sparse and didn't flow realistically, and the voice was robotic.

I have fixed that, and I would appreciate feedback on these specific elements of the video.

Thank you very much for the feedback on my last post 📫 :)

u/abstrctmulla — 1 day ago

Seedance 2.0 vs HappyHorse 1.0: Which AI video model actually wins for solo creators?

Alibaba literally just shadow-dropped a video model with a joke name, and it is already causing a massive headache for ByteDance.

If you have been looking at the Artificial Analysis leaderboard this week, you probably noticed something weird. Sitting right at the #1 spot for both text-to-video and image-to-video isn't Sora. It isn't Kling 3.0. And it definitely isn't Seedance 2.0.

It is "HappyHorse-1.0."

No branding, no massive PR campaign, no bloated keynote presentation. Just a blind submission that absolutely steamrolled the competition in Elo ratings. Then Alibaba quietly claimed it. Specifically, the Taotian team led by Zhang Di, who used to be heavily involved with Kling over at Kuaishou. It was a brilliant flex. Pure quality judgment, zero brand bias.

But here is the actual question we need to answer right now: if you are a solo creator, an indie filmmaker, or just someone trying to build a viable AI video workflow on your local machine, which of these two models actually matters to you? Because the raw benchmark numbers are hiding a much messier reality.

Let’s talk about Seedance 2.0 first. Or, as ByteDance just officially rebranded it to capture the hype, "Dreamina Seedance 2.0."

Seedance was the undisputed reference point for AI video generation up until about five minutes ago. And honestly, if you are looking for pure, unadulterated physical accuracy right this second, it still is. I was looking at some side-by-side comparisons yesterday, and the difference in how these models handle complex physics is glaring.

When you ask Seedance 2.0 to generate a dog eating, the mouth movements actually map to the mechanics of a real jaw. If you generate a toaster popping, the physical movement makes sense. It understands object permanence and spatial relationships in a way that feels grounded. Plus, the moment you add audio synchronization into the mix, Seedance immediately reclaims its crown. The lip-sync capabilities are just tighter.

ByteDance is clearly feeling the heat, too. Suddenly, after months of gating access, the Seedance 2.0 API is officially out for global use through providers like AtlasCloud. Funny how a random horse model hitting #1 accelerates a massive corporate product roadmap, right?

Now, let’s look at HappyHorse 1.0.

Why did a model with noticeable visual breakdowns manage to beat Seedance in a blind human evaluation? Two words: Prompt adherence.

HappyHorse is doing things with multi-shot generation and complex prompt following that we haven't really seen outside of highly controlled, cherry-picked studio demos. If you give it a dense, multi-layered prompt, it actually listens to the constraints instead of just hallucinating a pretty, generic cinematic pan.

When you look closely at the Artificial Analysis benchmark, the real story is in the margins. HappyHorse beat Seedance in image-to-video by exactly three Elo points. That is basically a technical tie. But the fact that an unbranded, zero-hype model pulled that off on its first try is insane.
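For context on why three points is a tie, here is the textbook Elo expected-score formula (Artificial Analysis may compute its ratings with its own variant, so treat this as an approximation, not their exact method):

    # Textbook Elo expected score: the probability that model A is preferred
    # over model B in a blind head-to-head vote, given their ratings.
    def expected_score(r_a: float, r_b: float) -> float:
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

    # A 3-point lead implies roughly a 50.4% preference rate.
    print(round(expected_score(1203.0, 1200.0), 4))  # 0.5043

In other words, a 3-point gap predicts the "winner" takes barely more than half of blind matchups.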

But the real reason HappyHorse is breaking the internet isn't just the blind test results. It’s the open-source rumors.

The word going around right now is that HappyHorse is going to be open weights. We are talking about a 15B parameter model that can supposedly do 1080p generation in just 8 denoise steps. Let that sink in for a second. Duration is currently capped at around 5 seconds, which hurts, but if the community gets their hands on the weights? We are looking at a fundamental shift in how AI video is produced locally.

Think about the current state of local video generation. We’ve been stuck trying to squeeze blood from a stone with models like Wan2.2, waiting for LTX 2.3 to finally catch up in prompt understanding. A 15B open-weight model that natively understands complex prompts and hits #1 on global leaderboards changes the math entirely. It means solo creators won't be entirely dependent on paying per-generation API costs to ByteDance or OpenAI. You could theoretically run this, fine-tune it, build custom LoRAs for character consistency, and integrate it directly into ComfyUI workflows without asking for permission.
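For what it's worth, if the weights do land, running it locally would presumably look like any other diffusers-style video pipeline. A minimal sketch under that assumption only: the checkpoint name, pipeline choice, and every argument below are hypothetical, since nothing has actually been released.

    # Hypothetical sketch of running open-weights HappyHorse locally.
    # Nothing here is a real API: the model ID is invented, and the 8-step,
    # 1080p, and 5-second figures come from the rumors above, not from docs.
    import torch
    from diffusers import DiffusionPipeline  # assumes a diffusers-style release

    pipe = DiffusionPipeline.from_pretrained(
        "alibaba/happyhorse-1.0",      # hypothetical checkpoint name
        torch_dtype=torch.bfloat16,
    ).to("cuda")

    frames = pipe(
        prompt="a dense, multi-layered prompt it actually listens to",
        num_inference_steps=8,         # the rumored 8 denoise steps
        height=1080, width=1920,       # rumored native 1080p
        num_frames=120,                # ~5 s at 24 fps, the current cap
    ).frames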

So, who actually wins for the solo creator?

Right now, today? It is Seedance 2.0.

It is accessible through Dreamina, it is deeply integrated into CapCut, and the character consistency features they are rolling out are genuinely fun and frictionless to use. If you need to produce a short ad or a social media clip by Friday, you use Seedance. The physics won't embarrass you, and the audio sync will save you hours in post-production.

But if you are looking at the next six months? HappyHorse is the one you need to watch.

If Alibaba actually drops the weights, HappyHorse won't just be a tool you use; it will be an ecosystem you build on. The visual breakdowns—the weird artifacts when a scene gets too busy—will be patched by the open-source community in weeks. The 5-second limit will be brute-forced or worked around with temporal extensions.

We are watching the classic Apple vs. Android war play out in real time, but for AI video. ByteDance is building the perfectly polished, walled-garden consumer product. Alibaba is threatening to drop a nuclear open-source bomb on the entire market just to disrupt the space.

I’m curious how everyone else is reading this shift. Are we calling the death of closed-source video models too early, or is a 15B open-weight model actually enough to permanently kill the API subscription business model for video?

u/TroyHay6677 — 1 day ago

A sweet dinner in Italy

The Seedance 2 prompt was simply "A couple have a romantic dinner together with some flirting, ending in a kiss." I provided two images: one for the beginning and one for the ending kiss.

u/stencyl_moderator — 1 day ago

Focused on imperfections in this UGC AI content piece, any feedback appreciated

Any feedback is welcome 🙏🏻

I'm looking for other viewers' opinions on the video's realism and whether there are any AI tells 🤔

u/abstrctmulla — 3 days ago

Dreamina - Am I the only one having this issue ?

Hey,

I’m running into a really frustrating issue with Dreamina (Seedance 2.0).

I bought credits to generate videos, but for the past few days my account has basically been unusable. Every generation fails instantly with a generic "something went wrong, please refresh" error.
My credits are still there, and I know that some friends (in the same city) have access, but not me.

So it really looks like my account specifically got blocked for no reason.

There’s no real support, no explanation, nothing. Meanwhile my credits (worth a few hundred euros) are just sitting there and could expire.

Is anyone else experiencing this?
Is this some kind of hidden rate limit or account flag?

Honestly feels like random account blocking on a paid service, which is pretty messed up.

u/streapland — 3 days ago

ModelArk - Not what u think :(

Hi there, so I've been looking for cheaper alternatives for using Seedance 2.0.

And I soon realized that there is nothing cheaper than the official API.

Luckily I'm in one of the whitelisted countries.

So I bought the Seedance 2.0 "Resource Pack" for $30 (to enable the pay-as-you-go feature).
Little did I know that when using ModelArk you can't upload photos of real humans (even if they are AI generated) - AT ALL!

And with that, the $30 just got thrown out of the window.

Writing this post so no one else makes the same mistake I did.

Just to make sure I'm not missing anything: if anybody knows a way to do it, please comment. I know there is the Real-Human Assets section that requires KYC, but I think it only allows you to upload real-human assets that look like you.

thanks for reading this far <3

u/Quick_Ad3358 — 3 days ago

My dream project (which would have been impossible without the AI revolution) is now possible. It's a teaser made with Kling 3.0 more than a month ago because, sadly, Seedance wasn't publicly available at that point. It's a teaser, not a TRAILER, so judge accordingly.

u/Practical-Worker-430 — 3 days ago

Rate IGC - Confident Man In Prague

Meet Victor - he would like you to rate his realism while he's everywhere anytime, apparently

Thanks to anyone who does give feedback 💯

u/abstrctmulla — 3 days ago

Seedance bites back ;p

All jokes aside, I did try to improve and take any criticism/feedback I got in my last post. Please be honest and kind 😇 .

u/abstrctmulla — 4 days ago

90% bypass rate on Seedance 2.0 face detection - the sketch method completely replaced the grid overlay for me

tl;dr: Instead of grid overlays, convert your photo to a fashion sketch first. No artifacts or workarounds needed, 90% success rate. Sharing because I wasted too many credits before figuring this out and want to help others.

Shoutout to the post about the grid overlay method, that actually worked and helped me a lot when I first started. But the grid artifacts in the output kept bugging me so I kept experimenting and found something cleaner and less buggy.

The sketch method

Instead of overlaying grids on your photo, convert it to a fashion sketch first using any image gen tool. The detector doesn't flag sketches as real faces, but Seedance still reads the character features well enough to generate realistic video from it.

Prompt I use:

"Create a fashion sketch illustration of the person from [your photo]. Three views side by side: full body front, medium portrait, full body three-quarter. Style: loose expressive pencil lines with selective watercolor fill. Cool grey-blue palette. White paper space as negative. Skin, light pencil with minimal hatching. Hair, sharp dark ink lines. Keep facial features, hairstyle and all distinguishing features exactly as in the original."

Upload the sketch as your Seedance reference. No grid artifacts, character stays recognizable.
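If you want to script this at scale, it's just two calls: one image-gen request to make the sketch, then the sketch as your video reference. A minimal sketch of the idea, assuming generic HTTP APIs; every endpoint and field name below is a placeholder, not a documented interface:

    # Hypothetical automation of the sketch method. The endpoints and field
    # names are placeholders; both tools are web UIs as far as I know.
    import requests

    SKETCH_PROMPT = "Create a fashion sketch illustration of the person ..."  # full prompt above

    def photo_to_sketch(photo_path: str) -> bytes:
        """Step 1: convert the real photo into a fashion sketch."""
        with open(photo_path, "rb") as f:
            resp = requests.post(
                "https://imagegen.example/v1/edit",  # placeholder endpoint
                files={"image": f},
                data={"prompt": SKETCH_PROMPT},
            )
        resp.raise_for_status()
        return resp.content  # the sketch image bytes

    def sketch_to_video(sketch: bytes, scene_prompt: str) -> str:
        """Step 2: use the sketch, not the photo, as the Seedance reference."""
        resp = requests.post(
            "https://seedance.example/v1/generate",  # placeholder endpoint
            files={"reference": ("sketch.png", sketch)},
            data={"prompt": scene_prompt},  # describe the character in text too
        )
        resp.raise_for_status()
        return resp.json()["video_url"]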

Soul Cast as second option

If you're on Higgsfield you can also generate a character in Soul, use it in Soul Cast, then bring it into Cinema Studio with Seedance 2.0. The success rate is almost 100%, since it's an internal feature of the platform.

When I still use the grid

Quick drafts where I don't care about artifacts. The grid overlay from that other post is still the fastest method; it takes 10 seconds. But for anything client-facing or polished, the sketch method wins.

What didn't work for me, beyond what was already mentioned

  • Style transfer filters: Seedance interprets the filter as your intended style direction
  • Adding accessories only (sunglasses, hats): sometimes passes, but unreliable

Tips

  • Always draft in Fast mode first: it costs half the credits and seems to pass more often
  • Describe the character in the text prompt too: it gives the model two anchors
  • Front-facing reference images give the best consistency across scenes

Went from ~30% success rate fighting the filter to ~90% with the sketch method. Hope this helps someone.

u/R3tR0_- — 3 days ago

Fox McCloud and Krystal in a DeLorean outrunning a massive Earthquake.

Fox McCloud is driving a DeLorean with Krystal on the passenger side, and a lion on the backseat. They are outrunning a massive earthquake in the city.

Note: I didn't prompt the DeLorean. Seedance just generated it for some reason.

u/TigerClaw305 — 2 days ago

Seedance 2.0 generated 200+ videos: This AI UGC workflow left me completely speechless

UGC agencies are about to have a very bad year. I’ve been tracking the Seedance 2.0 rollout—now officially rebranded as Dreamina Seedance 2.0—and the leap from 'cool demo' to 'industrial-scale production' is jarring. We aren't just talking about generating a pretty 5-second clip of a cat in space anymore. We’re talking about a workflow that shits out 200+ ad variations in a single run.

I spent the morning digging through the latest tests coming out of Higgsfield and Dreamina, and the technical shift here is subtle but massive. Most people think AI video is just 'prompt and pray.' Seedance 2.0 changes the logic. It’s moving toward a 'Reference-Based' architecture. You aren't just typing words; you’re feeding it three distinct anchors: a specific person (consistency), a specific location, and a specific product. It tags them and composites them into a scene with a level of control that makes Sora look like a toy for filmmakers who don't have deadlines.
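To make the reference-based idea concrete, a generation request conceptually carries three labeled anchors plus the prompt. The field names below are invented for illustration; this is not Dreamina's actual schema:

    # Illustrative only: a conceptual reference-based generation request.
    # The keys are made up to show the structure, not a real Seedance API.
    job = {
        "prompt": "creator holds up the serum, morning kitchen light, casual UGC tone",
        "references": {
            "person":   "refs/creator_face.png",   # identity anchor (consistency)
            "location": "refs/kitchen_wide.png",   # environment anchor
            "product":  "refs/serum_bottle.png",   # product anchor
        },
        "variations": 200,  # the batch knob behind the "200+ videos" claim
    }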

One specific feature caught me off guard: the VFX overlay on raw footage. You can take a shaky video shot on an iPhone, upload it, and tell Seedance to add specific visual effects. It keeps your original performance intact—no green screen, no manual rotoscoping, no complex compositing in After Effects. It just wraps the AI layer over the human movement. For anyone doing TikTok ads or '邪修' (the 'dark arts' of at-scale e-commerce), this is the Holy Grail.

The '200 videos' claim isn't hyperbole. When you combine this with tools like TopView or Claude-based ad producers, you’re looking at a pipeline where a single human can generate a month’s worth of high-converting content before lunch. The lip-sync is finally crossing the uncanny valley, and the consistency—especially in fashion and cinematic scenes—is actually usable for brands that give a damn about their image.

This model came out of China and just hit the US market, and frankly, the Western alternatives feel behind on the 'utility' side of things. While others focus on making 'art,' this is focused on making money. It’s designed for the person who needs to sell a product on TikTok Shop or Amazon and needs 50 different hooks to test against an algorithm.

If you’re still paying $500 to a 'creator' for a single UGC video that might flop, you’re playing a losing game. The barrier to entry for high-end video production just hit zero.

What happens to the creator economy when the 'creator' is just a reference photo and a Seedance prompt? Are we actually ready for the sheer volume of high-quality garbage that's about to hit our feeds?

u/TroyNoah6677 — 3 days ago

I tested 3 AI video models back-to-back: Why Seedance 2.0 feels like someone finally built Photoshop for video

We’ve been looking at AI video entirely wrong. For the past year, everyone from Hollywood directors to tech bros has been treating text-to-video like a slot machine. You type "cyberpunk city, neon, rain, 4k" into Kling or Sora, pull the lever, and pray the physics engine doesn't decide to turn the protagonist's legs into spaghetti halfway through the camera pan.

I just spent the last week running three of the top-tier models back-to-back. Kling AI, Google's Veo, and the newly rebranded Dreamina Seedance 2.0. And the gap between them isn't about resolution or how pretty the pixels look. It's about control. Seedance 2.0 fundamentally shifts the paradigm from "generating" a video to "building" a scene. It honestly feels like someone finally figured out how to map the Photoshop UX to a temporal latent space.

Let me break down exactly why this Chinese AI model is quietly eating the lunch of every major Western lab right now.

First, let's talk about the multi-shot consistency problem. Most people focus on 5-second short clips on Twitter. Wow, a realistic dog. Cool. But the real test of filmmaking is putting two shots together. You shoot a wide establishing shot, then you cut to a medium close-up. In Sora or Kling, doing this is an absolute nightmare. The system treats every prompt as a blank slate. Your character wearing a blue jacket in shot A suddenly has a blue vest with an extra zipper in shot B.

Seedance completely bypasses this by treating multi-shot sequences as a unified system. When you use it inside environments like Higgsfield or Pollo AI, you aren't just typing a prompt. You are uploading up to 12 reference images at once. Think about what that actually means for a workflow. You aren't just giving it a starting frame. You are feeding it character sheets, lighting references, mood boards, and environment layouts simultaneously.
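Twelve simultaneous references is enough that you would want to script the packing. A toy sketch of what that input side could look like; the payload shape is invented, and only the 12-image cap comes from the platforms above:

    # Toy sketch: bundling up to 12 reference images for one generation.
    # The payload shape is hypothetical; only the 12-image limit is from the post.
    import base64
    from pathlib import Path

    MAX_REFS = 12

    def pack_references(paths: list[str]) -> list[dict]:
        if len(paths) > MAX_REFS:
            raise ValueError(f"inputs cap at {MAX_REFS} reference images")
        return [
            {"name": Path(p).stem,
             "image_b64": base64.b64encode(Path(p).read_bytes()).decode("ascii")}
            for p in paths
        ]

    # character sheets, lighting references, mood boards, environment layouts
    refs = pack_references([
        "char_front.png", "char_side.png", "lighting_key.png",
        "moodboard_01.png", "env_diner_wide.png",
    ])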

This is where the Photoshop comparison solidifies. When I open Photoshop, I don't just mash a button and get a finished poster. I bring in a background layer. I mask out a subject. I apply an adjustment layer to match the color grade. Seedance 2.0 is starting to offer these exact types of levers for video. Inside Pollo AI, I was taking a single static image and building a full cinematic moment out of it, dictating the exact camera motion and the weight of the objects in the scene. The detail in the motion actually respects gravity.

Let’s look at Kling for a second. Kling is great at dynamic motion. If you want a car crashing through a wall, Kling will give you a spectacular explosion of bricks. But try telling Kling to make the driver step out of that specific car, wearing the exact same clothes they were wearing in the interior shot. The model falls apart. It doesn't understand the semantic link between the inside of the car and the outside.

Veo has a different problem. From what I’ve seen, Google's model produces incredible, highly detailed textures. The skin pores, the fabric weaves—it’s stunning. But it feels heavily constrained, almost like it’s too afraid to make bold camera moves because it knows the illusion will break. It’s a beautifully rendered straitjacket.

Seedance 2.0 hits the sweet spot. It allows for the dynamic, sweeping camera moves of Kling, but retains the texture permanence that Veo strives for. And it does this by fundamentally changing the input mechanism. The fact that Higgsfield allows you to pump 12 reference images into the Seedance model changes the entire math of the generation. You are heavily constraining the latent space before the first frame is even hallucinated.

Think about traditional 3D rendering. You have an environment map, an albedo map, a roughness map. We aren't quite there yet with AI video, but this multi-image input method is the closest thing to it. You are basically giving the AI a texture atlas to pull from. That’s why it doesn’t hallucinate a new jacket zipper halfway through the shot—it’s constantly referencing the strict visual boundaries you set in the multi-image prompt.

The official name is now Dreamina Seedance 2.0—a bit of a mouthful—but the integrations are what matter. It's live on SYNTX, Higgsfield, and a bunch of other platforms, and the results are terrifyingly good. I watched a breakdown of a Ben 10 Ghostfreak transformation sequence someone built on TikTok, and the kinetic weight of the animation didn't have that typical AI "floaty" look. It snapped. It had actual post-production value sitting behind a single cinematic moment that didn't require a crew of hundreds.

And here is the reality check for the OpenAI Sora team: while they are busy selectively dropping curated, pre-rendered short films to hype up their eventual release, Seedance is out here natively integrating into actual production workflows. It’s available. VFX artists are using it right now.

The ability to maintain character consistency across a sequence changes everything. If I have a scene with two people talking at a diner, I can lock their visual identities into the model's context window. Seedance seems to hold onto these temporal features much better than its competitors. It’s not flawless—you still get the occasional weird artifact if the camera moves too fast across a complex background—but it feels like an actual tool rather than a novelty toy.

I didn’t just generate a scene this week. I built it. Layer by layer, reference by reference. And that is a terrifying leap forward for the industry.

If you’ve been messing around with the Dreamina access or the Pollo AI integration, I’m curious—how are you handling the transition between extreme wide shots and macro close-ups? Does the character embedding hold up for you on sequences longer than 15 seconds, or are you still having to heavily mask and comp in After Effects to hide the seams? Let's discuss.

u/TroyHay6677 — 4 days ago