u/flatrive

Style consistency breaks at clip 8+ in AI video batches

Running 12–15 clip batches for ad creative using Kling and Veo. Around clip 7–8, the aesthetic drifts — color grading shifts, lighting temperature changes, motion feel stops matching earlier clips despite identical prompts.

What I've tried:

Locking seeds (unreliable — depends on whether the model actually supports seed spec)

Splitting into smaller batches of 6 — drift still happens, just later

Phygital+ (AI video tool built for ads/marketing) — first 6 clips held tighter visually, but tail end still wandered past clip 10

My best guess is the style reference gets reinterpreted when model context resets mid-batch. But I'm also not sure Kling or Veo actually support true style chaining in no-code pipelines, so this might partly be a setup issue on my end.
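The crude workaround I'm testing is forcing the style anchors back in on every single request instead of trusting the batch to carry them. Rough shape below; the function and parameter names are made up since neither API exposes this the same way, it's just the loop structure:

```python
# Sketch: re-anchor the style reference and seed on every clip instead of
# relying on batch-level context. generate_clip() is a stand-in for whatever
# Kling/Veo call you're actually making; nothing below is a real API name.

STYLE_REF = "refs/campaign_keyframe.png"   # same reference image every time
BASE_SEED = 421337                         # fixed base so reruns are comparable

def generate_clip(prompt: str, style_ref: str, seed: int) -> str:
    """Placeholder for the real text-to-video call."""
    return f"clip(seed={seed}, ref={style_ref}, prompt={prompt[:30]}...)"

prompts = [f"scene {i}: product hero shot, warm tungsten key light" for i in range(1, 13)]

clips = []
for i, prompt in enumerate(prompts):
    clips.append(
        generate_clip(
            prompt=prompt,
            style_ref=STYLE_REF,   # resend the anchor, never inherit it
            seed=BASE_SEED + i,    # deterministic per-clip offset
        )
    )

for c in clips:
    print(c)
```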

What I want to know:

Is this a known ceiling with chained video pipelines?

Is there a way to aggressively anchor style reference past the halfway point?

Any fixes or workarounds appreciated.

reddit.com
u/flatrive — 2 days ago

general-purpose bot vs specialized tool - what actually changes when you build each

been thinking about this a lot lately because I keep bouncing between the two depending on the project. built a general-purpose assistant a while back and it was honestly pretty fast to get running, but the moment someone wanted it to do something specific to their industry it started falling apart. hallucinations on domain-specific stuff, wrong terminology, weird edge cases it just couldn't handle. switched to building something more focused for a doc processing use case and the accuracy difference was pretty noticeable once I got the right data and the right rules in there.

the tradeoff I keep running into is setup time vs long-term reliability. general bots you can prototype in a day but they feel a bit fragile in production. specialized ones take longer to scope and build but clients actually trust the output more, and that trust gap is apparently pretty significant, some benchmarks are putting specialized tools 20-40% ahead of generalists on accuracy for enterprise tasks, which honestly tracks with what I've been seeing firsthand.

also noticing this is getting more loaded in regulated spaces. healthcare and finance clients especially are asking harder questions about auditability and compliance now that EU AI Act enforcement is tightening around specialized models. so the build decision isn't just a performance call anymore, it's a paper trail call too. and the integration piece matters heaps more with specialized tools, like you can't just dump outputs into a spreadsheet, it has to connect to whatever system the industry already runs on. that scoping conversation alone adds time before you write a single line.

curious if others have found a point where fine-tuning a general model actually gets you close enough to purpose-built, or if that's mostly a shortcut that catches up with you later. feels like the answer changes depending on how niche the domain is and how much the client cares about explainability.
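for what it's worth, the "right rules" part of that doc processing build wasn't anything exotic, most of the lift was forcing domain terminology and hard constraints in front of the model before anything else. rough sketch of the shape of it, the terms and rules below are just examples:

```python
# Sketch of the "rules + terminology" layer bolted onto a general model for
# doc processing. Nothing here is a real product API; it's just how the
# specialization gets assembled before the actual model call.

DOMAIN_GLOSSARY = {
    "EOB": "explanation of benefits, the payer's summary of a processed claim",
    "NPI": "national provider identifier, a 10-digit provider ID",
}

HARD_RULES = [
    "Never infer a dollar amount that is not present in the source document.",
    "If a field is missing, return null instead of guessing.",
]

def build_system_prompt(task: str) -> str:
    """Assemble a domain-constrained system prompt for a general model."""
    glossary = "\n".join(f"- {term}: {meaning}" for term, meaning in DOMAIN_GLOSSARY.items())
    rules = "\n".join(f"{i}. {rule}" for i, rule in enumerate(HARD_RULES, 1))
    return (
        f"You are a claims document extraction assistant.\n"
        f"Task: {task}\n\nTerminology:\n{glossary}\n\nHard rules:\n{rules}"
    )

print(build_system_prompt("Extract claim totals from the attached EOB."))
```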

reddit.com
u/flatrive — 3 days ago

what does AI video actually cost when you factor in everything

been going down a rabbit hole on this lately because a client asked me to put together a rough budget comparison for their product videos. the per-minute pricing looks amazing on paper, but once you factor in revision rounds, prompt iteration time, and the fact that your usable output rate can be closer to 1-in-6 or 1-in-10 on a good day, the real cost per finished asset creeps up fast.

like, Kling is sitting around $0.07/sec for 1080p clips right now, which sounds cheap until you're burning through credits on failed generations. Veo 3 is the one people keep citing for cinematic quality, but if you're accounting for the outputs that don't make it, some folks are reporting $600+ to get five minutes of actually usable footage. that's not a bargain, that's just a different kind of production budget.

the subscription tools like HeyGen or Synthesia still make the most sense if you're doing real volume, like ongoing ad creative where you need 20+ variations a month. the math works there. for a one-off hero video it usually doesn't, especially once you hit credit limits mid-revision cycle.

what I'm trying to figure out is how people are actually accounting for this in a marketing budget. do you treat it like a software cost or a production cost? because those sit in completely different line items and get scrutinized differently by finance. and has anyone had to justify the quality tradeoff to a client or stakeholder who either thinks AI video is basically free or thinks it's a brand liability? curious how those conversations are going for people actually doing this at scale.
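the back-of-napkin version of that math, in case anyone wants to sanity-check their own numbers (the keep-rate figure is a guess, plug in your own):

```python
# Back-of-napkin cost model: list price per second means very little until you
# divide by the fraction of generations you actually keep.

list_price_per_sec = 0.07   # e.g. Kling 1080p, per generated second
usable_rate = 1 / 6         # clips you keep vs clips you generate (guess)
finished_minutes = 5        # how much usable footage you need

effective_price_per_sec = list_price_per_sec / usable_rate
total = effective_price_per_sec * finished_minutes * 60

print(f"effective cost per usable second: ${effective_price_per_sec:.2f}")
print(f"cost for {finished_minutes} usable minutes: ${total:.0f}")
# ~$0.42 per usable second and ~$126 for five minutes at a 1-in-6 keep rate,
# before you count prompt iteration time or revision rounds.
```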

reddit.com
u/flatrive — 4 days ago

been putting more time into nTop lately for some denser geometry work and the implicit modeling stuff genuinely holds up well for lattices and TPMS structures. the boolean ops don't fail on complex implicits the way traditional CAD does, which is a real thing once you've lost an hour to a mesh error mid-session. no argument there.

but the part that keeps catching me is what happens when you try to get those complex geometries out the other end. high surface area parts can still generate pretty hefty mesh files depending on how you're exporting. Simplify Mesh by Threshold is still in the toolkit but it's trial-and-error in a way that gets old fast. that said, the Implicit Body by Voxel Grid block has been genuinely useful for this - you get tighter file sizes with deviation tolerances you can actually control, which is a step up from just hoping the simplify pass doesn't wreck your surface quality downstream. if you're not using that yet it's worth a look before you go deep on the threshold approach.

also been looking at Field Optimization and while the point-wise lattice optimization is interesting, the lack of overhang or extrusion constraints feels like a gap if you're trying to close the loop on actual printability rather than just shape. that part still feels workflow-specific in a way that requires a lot of manual patching. curious whether others are hitting the same walls or found cleaner ways through it. specifically around managing file size and surface fidelity on anything with real geometric complexity before it gets to simulation or manufacturing.
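side note, the voxel-grid idea is easy to sanity-check outside nTop too: sample an implicit on a grid, mesh it, and watch how the resolution knob drives triangle count (and therefore file size). quick gyroid sketch below, plain numpy + scikit-image, not the nTop block, just the concept:

```python
import numpy as np
from skimage.measure import marching_cubes   # pip install scikit-image

def gyroid(x, y, z):
    # Classic TPMS implicit; the zero level set is the surface.
    return np.sin(x) * np.cos(y) + np.sin(y) * np.cos(z) + np.sin(z) * np.cos(x)

for n in (40, 80, 160):                      # voxels per axis = the resolution knob
    axis = np.linspace(0.0, 4 * np.pi, n)    # two periods of the lattice
    x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
    verts, faces, _, _ = marching_cubes(gyroid(x, y, z), level=0.0)
    print(f"{n}^3 grid -> {len(faces):,} triangles")
```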

reddit.com
u/flatrive — 6 days ago

curious what people here would consider genuinely workflow-changing vs just convenience stuff. asking because I've been going deep on automating the creative side of things lately, specifically around asset production, content repurposing, and cross-platform delivery, and the gap between "nice to have" and "can't imagine working without this" is way bigger than I expected.

for me the one that actually stuck was automating the resize and reformat pipeline for assets across different placements. sounds boring but it was eating maybe 2-3 hours a week of just dumb, repetitive work, and once that was gone I stopped dreading the production side of projects entirely. the other one was setting up a trigger-based approval flow instead of chasing people through slack and email. that one probably saves more mental energy than actual time, which honestly matters more at this point.

what's interesting to me is how both of those changed the shape of the work, not just the speed. the resize pipeline meant I stopped context-switching into production mode mid-creative-flow. the approval flow meant decisions actually moved instead of dying in someone's inbox. neither is flashy but both are load-bearing.

what's the automation you'd actually miss if it disappeared tomorrow? not the stuff you demo to people, just the one that quietly changed how your day runs.
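for anyone curious, the resize pipeline itself is nothing clever, it's basically a placement-spec dict and a loop. sizes below are just examples, swap in whatever your placements actually need:

```python
# The boring-but-load-bearing part: one source asset, a dict of placement
# specs, a center-cropped export per placement. Sizes are examples only.
from pathlib import Path
from PIL import Image, ImageOps   # pip install Pillow

PLACEMENTS = {
    "story": (1080, 1920),
    "feed_square": (1080, 1080),
    "landscape": (1200, 628),
}

def export_placements(src: Path, out_dir: Path) -> None:
    out_dir.mkdir(parents=True, exist_ok=True)
    with Image.open(src) as img:
        for name, size in PLACEMENTS.items():
            # fit() scales and center-crops to the exact target aspect ratio
            resized = ImageOps.fit(img, size, method=Image.Resampling.LANCZOS)
            resized.save(out_dir / f"{src.stem}_{name}.png")

# assumes a folder of approved masters; swap in your actual paths
for asset in Path("approved_assets").glob("*.png"):
    export_placements(asset, Path("exports"))
```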

reddit.com
u/flatrive — 9 days ago

been going deeper into generative design for AM lately and the same issues keep coming up. stair-stepping artifacts, excessive support material, surface quality on self-supporting structures that just looks rough. Fusion 360's generative design explorer does topology optimization and has overhang minimization baked in, which helps somewhat, but I'd be cautious about claiming any specific "newer solver" is meaningfully ahead on support reduction. the geometry-first output problem still feels pretty real from what I've seen and what people are reporting. MSC ApexGD is another one that comes up for stress-based strut optimization, though I haven't put serious time into it either.

the bigger frustration is that most of these tools still treat geometry as the primary deliverable and don't account for anisotropy or thermal distortion until way later in the process. there's been some interesting movement around ML-aided topology prediction that apparently helps with stair-stepping specifically, and parametric/geometry-aware approaches are getting more attention for cutting down support bloat, but it still feels like process-level stuff rather than something baked into the solver from the start.

curious whether people here are actually constraining for anisotropy and thermal behavior upfront, or just running verification after the fact and iterating until it's acceptable. and whether anyone's found a workflow that genuinely reduces back-and-forth with the printer rather than just adding more steps to the pipeline.
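the kind of check I wish lived inside the solver loop, flagging downward faces past the self-supporting angle, is simple enough to run standalone on an exported mesh in the meantime. rough sketch with numpy-stl; the 45 degree threshold and the filename are placeholders:

```python
# Standalone overhang check on an exported mesh: flag faces whose normal points
# within THRESHOLD_DEG of straight down, i.e. downward surfaces shallower than
# the self-supporting angle. Filename is a placeholder; needs numpy-stl.
import numpy as np
from stl import mesh   # pip install numpy-stl

THRESHOLD_DEG = 45.0                        # common rule of thumb, process-dependent
BUILD_DIR = np.array([0.0, 0.0, 1.0])       # +Z build direction

part = mesh.Mesh.from_file("bracket.stl")   # placeholder filename
lengths = np.linalg.norm(part.normals, axis=1, keepdims=True)
unit_normals = part.normals / np.maximum(lengths, 1e-12)

# Angle between each face normal and straight down: 0 deg is a horizontal
# downward face (worst case), 90 deg is a vertical wall (self-supporting).
cos_down = unit_normals @ (-BUILD_DIR)
angle_from_down = np.degrees(np.arccos(np.clip(cos_down, -1.0, 1.0)))
needs_support = angle_from_down < THRESHOLD_DEG

print(f"{needs_support.sum()} of {len(needs_support)} faces likely need support")
```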

reddit.com
u/flatrive — 10 days ago
r/Design

something I keep running into: you bring up a UX concern in a backend meeting and suddenly the room gets a bit tense. not because anyone's being difficult, just that the framing lands wrong. 'designer is telling us how to build it' energy when that's not the intent at all. what's actually helped me is sharing stuff early and rough, not polished. when you bring a finished Figma file to a dev it reads like a spec to hit, but when you bring a half-baked wireframe and say 'here's the user problem I'm trying to solve, what breaks on your end?' it becomes a different conversation entirely.

this matters even more now that a lot of what we're building has AI-driven or context-aware behavior baked in. those flows are genuinely hard to prototype cleanly, so getting backend in the room early with something scrappy is basically required. you can't hand off a polished spec for a system that adapts at runtime and expect everyone to just figure it out.

a few other things that have helped: being upfront about what's a must-have vs what I'm still figuring out. devs are pretty good at working with uncertainty when you're honest about it, it's the false confidence that causes friction. design systems help too because then you're both looking at the same source of truth instead of interpreting a screenshot differently. and when there are user-controlled behaviors involved, like motion preferences or personalization settings, flagging those early means the backend team can actually plan for them instead of finding out at handoff.

the 'overstepping' thing mostly goes away when the framing shifts from 'here's what it should look like' to 'here's what the user is trying to do, how do we get there together.' curious if others have found a specific ritual or meeting format that actually made this click with their team.

reddit.com
u/flatrive — 10 days ago

been thinking about this a lot lately because most of the conversation around generative design tools still seems aimed at solo users or small studios. once you scale up to a proper team the problems shift completely. it's less about which tool you pick and more about who owns the approved models, how you stop people going rogue with random vendor solutions, and whether there's any actual feedback loop between the people doing the work and whoever set up the systems.

what's interesting is that the teams handling this well right now aren't just adding AI on top of existing workflows, they're redesigning the workflows around it. the ones that seem to get the most out of it have some kind of centralized group managing AI tooling rather than every sub-team doing their own thing. keeps outputs consistent and avoids the chaos of five different pipelines producing five different quality levels. there's also a real push now toward building in evaluation layers, basically structured ways to check quality across rapid iterations rather than just eyeballing it, which matters a lot when you're moving fast and the prompts are doing heavy lifting.

the other thing that seems to matter heaps is requiring everything to expose standard APIs from day one. otherwise you're locked in and any model swap becomes a nightmare. with AI-embedded tooling becoming more common across the board, that interoperability question is only getting more urgent. curious if anyone here has actually worked on a team at that scale. what broke first?
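on the standard-APIs point, the version of this I keep sketching is every generation backend sitting behind the same thin interface, so a model swap is one adapter rather than a pipeline rewrite. rough shape below, not any particular vendor's SDK:

```python
# "Standard API from day one" in miniature: every backend sits behind one
# Protocol, so a vendor/model swap means one new adapter, not a rewrite.
# The adapters here are stand-ins, not any real SDK.
from typing import Protocol

class ImageBackend(Protocol):
    def generate(self, prompt: str, seed: int) -> bytes: ...

class VendorAAdapter:
    def generate(self, prompt: str, seed: int) -> bytes:
        # a real adapter would call that vendor's SDK here
        return f"vendor-a/{seed}/{prompt}".encode()

class VendorBAdapter:
    def generate(self, prompt: str, seed: int) -> bytes:
        return f"vendor-b/{seed}/{prompt}".encode()

def run_batch(backend: ImageBackend, prompts: list[str]) -> list[bytes]:
    # pipeline code only ever sees the Protocol, never a vendor
    return [backend.generate(p, seed=7) for p in prompts]

print(run_batch(VendorAAdapter(), ["hero shot", "lifestyle shot"]))
```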

reddit.com
u/flatrive — 10 days ago

Freelance AI designer here, mostly doing brand kits and social content for small DTC clients. Two to four deliverables per client per week, nothing huge.

Constraints: solo operation, no dev support, clients expect turnaround in 24 hours, and I can't spend $80/month stacking five separate tool subscriptions.

I've tried Midjourney pipelines with saved prompt templates and Flux through ComfyUI, but both fall apart the second a client wants a variation on last month's campaign. Rebuilding node graphs or re-prompting from memory is eating way more time than it should. I tested Phygital+ briefly for one client's banner series and the brandbook upload actually helped keep colors consistent, but I haven't gone deep enough to know if it scales.

What I care most about: reusability across projects, not having to re-explain the brand to the model every single time, reasonable pricing, and something I can hand off notes on if I ever bring in a contractor.
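For context, the reusability piece I'm converging on so far is just a brand profile that gets merged into every prompt, so the brand lives in one file instead of in my memory. Values below are invented, it's the structure I mean:

```python
# One brand profile per client, merged into every prompt, so the brand gets
# re-stated to the model automatically instead of from memory. Values are
# invented placeholders; in practice this dict lives in a per-client JSON file.
BRAND = {
    "palette": "warm terracotta and cream",
    "tone": "playful, never corporate",
    "typography": "chunky rounded sans",
    "avoid": "stock-photo gloss, neon gradients",
}

def brand_prompt(brief: str, brand: dict = BRAND) -> str:
    return (
        f"{brief}. Palette: {brand['palette']}. Tone: {brand['tone']}. "
        f"Typography cues: {brand['typography']}. Avoid: {brand['avoid']}."
    )

print(brand_prompt("Instagram story teaser for the spring drop"))
```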

For designers doing repeat brand work solo, what setup actually holds up after month three?

reddit.com
u/flatrive — 10 days ago

been spending a fair bit of time lately going deeper into generative design for actual functional components, not just aesthetic stuff. the NASA/Autodesk A320 bracket case is the one that keeps coming up in my reading - closer to 45% weight reduction with equivalent strength, printed in AlSi10Mg aluminium. that's a real outcome, not a concept render. and the Relativity Space fuel pump consolidation is kind of absurd when you actually sit with it - they collapsed a massive component count down to something printable as essentially a single part, which is a different category of result than just shaving weight.

but honestly my experience trying to replicate even a fraction of that on smaller projects has been messier. the geometry outputs are often genuinely impressive, then you get to the print stage and the support structures become this whole separate problem. post-processing on really intricate lattice stuff is tedious in a way that doesn't always justify the weight savings at smaller scale. feels like the tooling is built around aerospace budgets and tolerances, and when you're working on something more modest the gap between the render and the physical part can be pretty humbling.

curious if anyone here has actually gotten a generative component through to something production-adjacent, or if most of the real wins are still sitting in aerospace and automotive where the AM infrastructure is already mature. also keen to hear if anyone's tried nTopology vs the Fusion 360 workflow for manufacturability - I keep seeing it recommended but haven't committed to learning another tool yet. and with more agentic simulation tooling starting to show up in some of these pipelines, wondering if that's actually changing the iteration speed for anyone or if it's still mostly hype at the practical level.

reddit.com
u/flatrive — 10 days ago

been running into the same wall lately where the generative solver spits out something geometrically interesting but then it's basically unprintable without heaps of support material, or the surface finish is rough enough that post-processing kills any time you saved upstream. feels like most tools are still optimising for shape and kind of ignoring the actual manufacturing constraints like thermal distortion and material anisotropy until you're already committed to a direction.

Fusion 360's additive solver has gotten noticeably better at reducing support requirements over the past couple of years, and tools like nTopology and MSC Apex Generative Design have been pushing harder on AM-specific constraints, but I still find myself doing a lot of manual cleanup on lattice structures before anything goes to the printer. hybrid approach with some CNC finishing has helped on a few projects but it adds steps I'd rather not have.

what I keep wishing existed is tighter feedback earlier in the loop, like printability and distortion simulation baked into the generative pass itself rather than bolted on after. some of the newer AI orchestration stuff feels promising for chaining those validation steps together automatically, but I haven't seen a clean out-of-the-box workflow that actually does it yet. curious whether anyone's found an approach that closes the loop between simulation and printability feedback early enough to actually change design decisions, rather than just flagging problems when you're already committed.
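to show what I mean by closing the loop, even a toy version changes the workflow: candidates get scored on a printability proxy inside the iteration, not after export. everything below is made up, the "solver" is a parameter sweep and the support metric is a stand-in, the loop shape is the point:

```python
# Toy version of "validation inside the loop": candidate designs are filtered
# on a printability proxy during iteration, not after export. The candidate
# sweep and the support metric are both stand-ins.
import math

MAX_SUPPORT_FRACTION = 0.10                     # acceptance threshold (made up)

def support_fraction(strut_angle_deg: float) -> float:
    """Stand-in proxy: shallower struts need more support, >=45 deg is self-supporting."""
    return max(0.0, (45.0 - strut_angle_deg) / 45.0)

best = None
for angle in range(20, 70, 5):                  # pretend "solver" candidates
    objective = math.sin(math.radians(angle))   # pretend stiffness-per-mass score
    if support_fraction(angle) > MAX_SUPPORT_FRACTION:
        continue                                # rejected before it ever hits a slicer
    if best is None or objective > best[1]:
        best = (angle, objective)

print(f"accepted: strut angle {best[0]} deg, objective {best[1]:.2f}")
```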

reddit.com
u/flatrive — 11 days ago

been curious about this for a while. I come at nTop more from the creative/generative side than pure engineering, but I keep running into it when projects push toward actual fabrication. the implicit modeling approach makes a lot of sense on paper - especially for sidestepping the mesh reconstruction mess you usually hit after topology optimization spits something out. that part at least seems well-documented and genuinely solved.

for anyone using it on real AM work though, where does it actually pull its weight in practice? the DfAM side is what I'm less sure about - nTop clearly factors in AM constraints early in the process, but I'm curious how far that actually goes. like are overhang controls and support reduction something you're actively leaning on, or is it more of a checkbox that still needs a lot of manual cleanup before anything goes to a slicer?

also curious about iteration speed at scale. the pitch is fast variant exploration and parametric flexibility, and I've seen claims of serious performance gains over traditional CAD for complex geometry - but does that hold up when you're actually pushing weird organic forms or heavily nested lattice structures, or does it start to bog down once things get genuinely complex? basically trying to figure out where the workflow earns its keep vs. where you're still fighting it. would love to hear from people using it on actual fabrication projects right now.

reddit.com
u/flatrive — 11 days ago
r/Design

been thinking about this a lot lately because I've had a few conversations with other designers where it came up. the consensus is still murky, honestly. some people are completely transparent about it, some bury it, and some just don't mention it at all and hope nobody asks.

my current approach is to treat the AI stuff like any other tool and just show the process honestly. so if I used generative tools for early ideation or mood boarding, I'll include those rough outputs alongside my refinements and the final direction. the point being that the thinking and decisions are clearly mine, even if some of the raw material came from a prompt. anecdotally, showing the iteration trail tends to land better than dropping a polished final output with no context, regardless of how it was made. it also sidesteps that awkward moment where someone asks "how did you make this" and you have to do some mental gymnastics.

worth noting that the transparency bar has genuinely moved. design schools like Parsons and MassArt now require explicit disclosure of AI involvement in submitted work, not just what tools you used but how and why. omitting it is being treated as an ethical issue, not just a stylistic choice. that framing is starting to bleed into professional contexts too, so getting ahead of it in your portfolio seems like the lower-risk move.

the harder question I keep sitting with is about how much of the work is actually yours when AI is doing a significant chunk of the visual heavy lifting. like if you're generating 80% of the output through prompts and doing light editing, is the framing really the same as if you sketched it out and built it yourself? I don't think there's a clean answer, and I'm not trying to be prescriptive. but I do think the honest version of that disclosure looks different depending on where AI sat in your process, and audiences are getting better at noticing when that's being glossed over. curious how others are handling the disclosure side of it, especially in more traditional design contexts where the reaction can still be pretty mixed.

reddit.com
u/flatrive — 13 days ago
r/AI_ART

been experimenting with this for a few client projects lately and the short answer is yeah it works, but only if you put in the training work upfront. base SD out of the box gives you inconsistent results, but once you've got a solid LoRA trained on the brand's visual assets and colour palette, the outputs get pretty reliable. running everything through ComfyUI so I can batch variations without losing consistency across a campaign, which has been a lifesaver for A/B testing different creative directions quickly.

the tricky part is still the authenticity problem, and honestly it feels like it's gotten harder not easier. UGC works because it feels unscripted, and AI-generated stuff can feel sterile if you're not careful with how you structure prompts and inject some contextual noise. consumers are increasingly good at sniffing out fake-feeling content too, so the stakes are higher than they used to be. what's been working better for me lately is treating the AI output as a draft layer and blending it with real captured elements rather than going fully synthetic end to end.

curious if anyone's found a good balance between keeping brand guidelines tight while still making the content feel organic rather than like a polished ad. also wondering if anyone's started folding video UGC into this kind of workflow yet or if you're still keeping it image-only for now.
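for reference, the batching side is pretty compact if you ever step outside ComfyUI and drive it from diffusers directly: load the brand LoRA once, fix a seed per variation slot so reruns stay comparable. model ID and LoRA path below are placeholders:

```python
# Batch brand variations with a trained LoRA via diffusers: load once, fix a
# seed per variation slot so reruns stay comparable. Model ID and LoRA path
# are placeholders for whatever you actually trained against.
from pathlib import Path
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("loras/client_brand_v3")      # placeholder LoRA path

variations = [
    "product flat lay, brand palette, soft daylight",
    "lifestyle shot, brand palette, cafe setting",
    "close-up texture detail, brand palette",
]

Path("out").mkdir(exist_ok=True)
for i, prompt in enumerate(variations):
    gen = torch.Generator("cuda").manual_seed(1000 + i)   # reproducible per slot
    image = pipe(prompt, num_inference_steps=30, generator=gen).images[0]
    image.save(f"out/variation_{i:02d}.png")
```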

reddit.com
u/flatrive — 14 days ago

been thinking about this a lot lately. most of my work sits in AI art and visual content, but I keep getting pulled toward generative design tools because the underlying geometry is so much more principled than just prompting. like there's something genuinely different about using parameter-driven optimization to produce forms that aren't just aesthetically pleasing by accident but are structurally or mathematically "correct" in some way. equations like Julia Sets or Barnsley's Fern, fractals, field-based modeling, the outputs have a kind of internal logic that pure diffusion stuff just doesn't.

the Quayola work keeps coming up as a reference point for me. reinterpreting classical forms through algorithmic processes, you end up with outputs that feel like they have genuine depth rather than just surface texture. and the idea of taking that further with topology optimization or field-based modeling, not for engineering constraints but for purely aesthetic ones, still seems like it has a lot of legs, especially now that the tools for defining and encoding those constraints are getting more expressive.

what's interesting to me right now is that AI models are getting a lot better at understanding mathematically grounded intent, so the gap between "describe a form" and "derive a form" is starting to close in interesting ways. evolutionary algorithms encoding natural processes, context-aware generation, it feels like the moment to actually try bridging these workflows seriously. I'm coming at this from the AI art side, so my instinct is to plug ComfyUI into whatever the output geometry is and treat the math as a creative input rather than an end product. curious if anyone here has actually tried defining aesthetic goals as constraints in a generative design workflow and what that even looks like in practice. like what tool are you even using to set that up, Blender geometry nodes, Samila, something else entirely?
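to make the derive-a-form side concrete, even the most basic version of it, iterating z^2 + c over a grid, already gives you that internal logic for free, and the escape-time field drops straight into an image pipeline as a creative input. tiny sketch:

```python
# Tiny "derive a form" example: a Julia set, z -> z^2 + c iterated over a grid.
# The escape-time field can feed straight into an image/compositing pipeline.
import numpy as np

c = complex(-0.8, 0.156)                 # one classic-looking parameter choice
n, max_iter = 800, 200

re = np.linspace(-1.6, 1.6, n)
im = np.linspace(-1.6, 1.6, n)
z = re[None, :] + 1j * im[:, None]
escape = np.zeros(z.shape, dtype=np.int32)

for i in range(max_iter):
    mask = np.abs(z) <= 2.0              # points that have not escaped yet
    z[mask] = z[mask] ** 2 + c
    escape[mask] = i

# Normalize to 0..255 and you have a grayscale field ready for compositing.
field = (escape / escape.max() * 255).astype(np.uint8)
print(field.shape, field.min(), field.max())
```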

reddit.com
u/flatrive — 15 days ago

Our in-house team handles campaign assets for about four mid-size retail clients. We need a repeatable process for marketing visuals across social, presentations, and product pages without rebuilding from scratch every sprint.

Constraints: two designers, no dedicated dev support, existing Adobe CC licenses we can't drop, and turnaround times that don't allow much manual cleanup.

We tried Firefly for on-brand asset generation and it's fine for one-offs, but the output gets generic fast. Also evaluated Phygital+ briefly since it handles multi-model pipelines in one place, which helped with batch consistency. Claude Design is newer and the Anthropic backing makes it hard to ignore for presentation decks specifically.

What we care most about: brand guardrails that actually hold across a batch, minimal prompt-rewriting between formats, and not needing a separate tool for every output type.

For teams already running a tight two-person operation, did adding a newer AI design tool actually reduce handoff friction or just add another thing to maintain?

reddit.com
u/flatrive — 16 days ago