u/theiriali

what skills actually matter for UGC in tech and AI niches

been doing a bit of UGC work on the side alongside my usual design stuff, and tech/AI brands feel pretty different from the usual beauty or lifestyle gigs. the thing I keep noticing is that audiences in these niches tend to be pretty sharp, at least for more complex or technical products, and they can usually tell when someone doesn't actually understand what they're talking about. so product literacy feels genuinely important here in a way it maybe isn't for generic lifestyle content. good lighting still matters, don't get me wrong, but clarity and credibility tend to be the bigger differentiators. from what I've seen, problem-solution storytelling lands better than hype-style delivery for this kind of content more often than not. hook fast, show a real pain point, then let the product do the work. that said I do think it's worth testing, it's not a guaranteed formula, just a format that maps well to how people actually evaluate tools and software. authenticity matters heaps here too, but I think people underestimate how much clear explanation matters on top of that. you can be super relatable and still lose people if you can't communicate what the thing actually does in plain language. production doesn't need to be cinematic but clean audio and stable framing are basically non-negotiable if a brand is going to actually use your content. the other thing I've been noticing more going into this year is that AI-assisted workflow skills are starting to matter alongside the on-camera stuff. prompt writing, scripting variations, basic editing, knowing how to iterate fast. brands are increasingly testing more creative angles and the people who can help with that whole loop, not just show up as a face, seem to have an edge. I've been leaning into that myself for ideation and script drafts, not to replace the human side but to move faster and test more. one thing worth flagging: disclosure around AI-assisted or AI-generated content is becoming more of a real conversation, and the rules vary by platform and region, so worth staying across that if you're working it into your process. curious whether others in tech UGC are finding brands actually care about workflow skills, or if they're mainly just evaluating output quality regardless of how you got there?

reddit.com
u/theiriali — 2 hours ago

What broke first when you scaled AI brand pipelines past 20 variants

Our in-house team hit a wall around variant 20-25 in a batch: character lighting started drifting across outputs even though the source asset and prompt were identical each run.

Stack is a mix of Flux v2 for stills and Kling for short motion clips, fed through a shared brandbook. We've tried locking CFG values tighter and pinning seed ranges; neither stopped the drift. My guess is the issue is upstream in how style references get re-interpreted per generation rather than cached, but I'm not certain.
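
For what it's worth, this is roughly what I mean by "pinning seed ranges", sketched against the open FLUX.1-dev weights in diffusers rather than our actual Flux v2 + Kling stack, with the prompt and guidance value as placeholders. It at least makes every variant individually reproducible, which helps separate sampling noise from whatever is happening upstream with the style reference:

```python
# Minimal sketch: deterministic per-variant seeds with the open FLUX.1-dev
# checkpoint via diffusers. Our real pipeline uses Flux v2 + Kling behind a
# shared brandbook, so treat this as an illustration, not our actual setup.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

PROMPT = "brand hero shot, <style tokens from brandbook>"  # identical every run
BASE_SEED = 1234                                           # pinned, not random

for variant in range(40):
    # seed = base + index, so variant 23 regenerates identically next week
    gen = torch.Generator("cpu").manual_seed(BASE_SEED + variant)
    image = pipe(
        PROMPT,
        guidance_scale=3.5,        # locked CFG, never adjusted per variant
        num_inference_steps=50,
        generator=gen,
    ).images[0]
    image.save(f"variant_{variant:03d}.png")
```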

I've also been evaluating platforms like Phygital+ that claim to handle brand consistency at scale through linked assets and pipelines, though I haven't stress-tested it past 15 variants yet.

What actually broke first for teams that pushed AI brand workflows into real production volume, and how did you stabilize it?

reddit.com
u/theiriali — 3 hours ago

how do you actually manage SD workflows when your dataset gets huge

been going down a rabbit hole lately trying to get my generative design pipeline to not fall apart when the dataset scales up. been using LoRA fine-tuning on SDXL for some architectural stuff and it's honestly night and day compared to just prompting a base model, but the moment I start pushing batch sizes or adding more training data the whole thing gets messy fast. a few things have helped stabilize it: dynamic CFG has been solid for keeping speed up without tanking output quality in large batches, and breaking things into separate passes for upscaling and inpainting is way more manageable than trying to do everything in one shot. also started using anchor samples at regular checkpoints during training runs which has helped me catch drift earlier instead of burning GPU time on a bad direction. ComfyUI node setups are holding up noticeably better than A1111 at this scale, the modularity just makes it easier to isolate what's actually breaking when something goes wrong. on the data side I've been looking more at streaming ingestion approaches instead of loading everything at once, especially when the dataset keeps growing incrementally. GPU-accelerated data prep tools have also cut down a lot of the preprocessing time that was quietly eating into my pipeline. curious what other people are actually doing when complexity starts stacking up. are you doing anything specific with hyperparameter tuning or just iterating until something sticks? and does anyone have a clean way to keep workflows organised across a bunch of different projects without it turning into chaos? feels like the tooling is there but the organisational layer is still kind of a DIY problem.
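
for anyone wondering what I mean by anchor samples, this is the rough shape of it, a minimal sketch with placeholder model IDs, prompts and paths rather than my actual training setup:

```python
# Sketch of the "anchor samples at checkpoints" idea: a handful of fixed
# prompts + fixed seeds rendered after every LoRA checkpoint save, so drift
# shows up as a visual diff instead of a wasted training run.
import torch
from diffusers import StableDiffusionXLPipeline

ANCHOR_PROMPTS = [
    "street-level facade study, brick and glass, overcast light",
    "interior atrium, exposed timber structure, midday sun",
]
ANCHOR_SEEDS = [7, 21]  # never change these between checkpoints

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

def render_anchors(lora_checkpoint_dir: str, out_prefix: str) -> None:
    # load the checkpoint's LoRA, render every anchor, then unload again
    pipe.load_lora_weights(lora_checkpoint_dir)
    for i, (prompt, seed) in enumerate(zip(ANCHOR_PROMPTS, ANCHOR_SEEDS)):
        gen = torch.Generator("cpu").manual_seed(seed)
        image = pipe(prompt, num_inference_steps=30, generator=gen).images[0]
        image.save(f"{out_prefix}_anchor{i}.png")
    pipe.unload_lora_weights()  # keep the base model clean for the next checkpoint

# e.g. render_anchors("checkpoints/step_2000", "runs/archviz/step_2000")
```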

reddit.com
u/theiriali — 1 day ago

unpaid design exercises - where's the actual line between screening and free labour

been thinking about this a lot lately, especially with AI tools now able to knock out a wireframe in minutes. if a candidate can use something like Uizard to generate a solid-looking prototype in an afternoon, what's the exercise even testing anymore? process? sure, but you can get that from a portfolio walkthrough and a good conversation. the 4-8 hour take-home thing feels increasingly hard to defend. that's real time, often from people who are already employed and doing this across multiple applications. and there's always that nagging feeling that someone's brief is a little too specific to be hypothetical. short critique sessions or paid tasks seem like the obvious fix, and some companies are moving that way. but plenty aren't. so genuinely curious - has anyone here refused one and still got the job, or walked away and felt fine about it?

reddit.com
u/theiriali — 2 days ago

consistency vs quality in niche content - how are you actually managing this

been running a pretty high output schedule for a while now and the thing nobody really talks about is how fast the quality ceiling drops when you're pushing volume. for a while I was batching everything, filming a couple days a week, editing the rest, and it felt sustainable until I looked back at the stuff I was putting out and it was just. fine. technically okay, nothing embarrassing, but nothing I was actually proud of either. the algorithm was happy, engagement was steady, but it felt hollow. what's shifted my thinking lately is that the platforms themselves seem to be rewarding this less now anyway. YouTube especially is visibly favoring fewer, higher-production pieces over frequent uploads - watch time and recommendation signals are doing a lot more work than raw consistency used to. serialized stuff in particular, like multi-episode formats with an actual narrative arc, is punching way above its weight in smaller niches right now. early engagement velocity on a well-crafted series seems to compound harder than a steady drip of decent standalone videos. so what's actually helped me is getting way more ruthless about which formats get the full effort treatment and which ones are basically templated. one or two pieces a week where I'm genuinely trying, rest of it is repurposed or lower stakes. the series format has been interesting to experiment with because it front-loads the creative work but the individual episodes can be leaner. curious if other niche creators have landed on a similar split or if you're still going all-in on volume and accepting the trade-off. also wondering if anyone's found a batching system that doesn't eventually make everything feel identical.

reddit.com
u/theiriali — 4 days ago
▲ 1 r/aiArt

honest breakdown of my current tool rotation for AI art (MJ, SD, GPT Image)

so I've been running a pretty consistent hybrid setup for the last few months and figured it's worth sharing how I actually split the work rather than just saying "use all of them." Midjourney is still my go-to for anything client-facing that needs that polished, cinematic look. whatever version we're on now, the aesthetic coherence is hard to beat when you need a hero image fast and the high-res output is genuinely useful for production work. but the moment a client wants something heavily customised or I need to iterate on a specific style, I'm off to SD pretty quickly. the ChatGPT image gen has become way more useful than I expected for mockup work. anything with text in the image, infographic layouts, stuff where positional accuracy matters, it's just less frustrating than the alternatives. I used to avoid it for most things but the reasoning-aware prompting has genuinely changed how I use it for early-stage concepting. still not where I'd go for pure aesthetic work though. Flux is the one I'm watching most closely right now. the photorealism is kind of ridiculous and running it locally means I can batch things without worrying about per-image costs stacking up. SD with a tuned model still holds up well for workflow flexibility, LoRA training especially, but the learning curve is real if you're not already comfortable with ComfyUI or similar. for anyone just starting out the setup overhead is probably not worth it unless you have a specific reason to go local. worth noting that API pricing across most of these tools has been shifting around lately so the cost math on hybrid setups keeps changing. something to factor in if you're billing clients or running any kind of volume. curious whether others are running similar hybrid setups or if you've settled on one tool for most of your work.

reddit.com
u/theiriali — 4 days ago

AI image generators

Been lurking the AI creative space for a while. Everyone and their mom is building an AI image generator — stills, video, avatars, product shots. Some of them are genuinely impressive, but the bottleneck isn't output quality anymore. It's discovery + workflow.

We have 30+ platforms all claiming to be the "best AI image generator" or "best AI art generator."

Each one has its own prompt syntax, credit system, and UX nightmare.

Users are copy-pasting the same prompt across 5 different tools just to get ONE consistent character or style output.

The gap I see:

A unified prompt manager + style transfer hub. Think "Civitai meets Notion" for AI creators.

Closest thing I've found so far is Phygital+ with 30+ models in one workspace, which genuinely helps on the generation side. But even there, the cross-platform prompt portability problem is still mostly unsolved. You're still manually re-engineering prompts every time you switch models.

What's still missing:

Prompt library that auto-translates between platforms (Midjourney → Leonardo → Firefly syntax).

Character/style consistency vault. Upload one reference, apply it across tools — not just within one.

Credit dashboard: track usage across ALL your subscriptions in one place.

Batch generation: trigger the same prompt on 3 platforms simultaneously, compare outputs side-by-side.
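
To make the batch generation piece concrete, here's a rough sketch of what the dispatch layer could look like, with stub adapter functions standing in for the actual platform integrations (which would each need to respect rate limits and ToS):

```python
# Rough sketch of "same prompt, fanned out to several backends" through
# per-platform adapters. The adapter functions are stubs -- real API calls
# would sit behind each one; none of these integrations exist here.
from concurrent.futures import ThreadPoolExecutor

def run_leonardo(prompt: str) -> str:
    return f"leonardo_result_for({prompt})"   # stub: Leonardo API call goes here

def run_firefly(prompt: str) -> str:
    return f"firefly_result_for({prompt})"    # stub: Firefly API call goes here

def run_local_sdxl(prompt: str) -> str:
    return f"sdxl_result_for({prompt})"       # stub: local diffusers pipeline

ADAPTERS = {"leonardo": run_leonardo, "firefly": run_firefly, "sdxl": run_local_sdxl}

def fan_out(prompt: str) -> dict[str, str]:
    """Send the same prompt to every registered backend and collect results."""
    with ThreadPoolExecutor(max_workers=len(ADAPTERS)) as pool:
        futures = {name: pool.submit(fn, prompt) for name, fn in ADAPTERS.items()}
        return {name: fut.result() for name, fut in futures.items()}

# results = fan_out("cyberpunk street market, rainy night, 35mm")
```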

Why it could work:

Tool-agnostic prompt portability = sits on top of everything, not competing with any single platform.

The catch:

API rate limits + platform ToS. But as a pure creative workflow layer, it probably survives.

Questions:

Real need or am I overthinking it?

Anyone found something that solves this problem?

Biggest pain point when jumping between AI image tools?

reddit.com
u/theiriali — 4 days ago

how do you actually stop AI from flattening your design vision

been thinking about this a lot lately working across generative tools and Figma Make, Adobe Firefly, the whole current ecosystem. the outputs are fast, sometimes genuinely useful for kicking off ideation, but there's this pull toward sameness that's hard to ignore. like the AI has clearly seen a lot of Material Design and Dribbble and it absolutely shows. the bias problem feels underrated in most conversations. it's not just about generic aesthetics, it's that the model's training data has preferences baked in and if you're not actively fighting that with constraints and curation, you end up with stuff that looks fine but could be from any product anywhere. competent, soulless, shippable. what's interesting now is that some people are routing around this with multi-agent setups, like using one model for raw ideation and a separate one for layout refinement, so you're not just getting one model's median taste applied to everything. more targeted control, less single-model mediocrity. still early but the logic makes sense. reckon the designers getting the most out of these tools are treating AI output as a starting sketch, not a direction. you still have to bring the weird, the specific, the brand-true stuff yourself. AI is good at volume, not at knowing why your client hates rounded corners or what their users actually find trustworthy. curious how others are handling this in practice, especially when stakeholders see the AI mockup and just want to ship that.
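
for what it's worth, the multi-agent split I keep seeing described looks roughly like this, a toy sketch where call_model is a stand-in for whichever provider and client you actually use, and the constraints string is made up for illustration:

```python
# Toy sketch of the two-model split: one call tuned for divergent ideation,
# a second for layout refinement, with brand constraints injected at both
# stages so you're not just sampling one model's median taste twice.
def call_model(model: str, system: str, user: str) -> str:
    # stand-in: wire up whichever provider's client you actually use
    raise NotImplementedError

BRAND_CONSTRAINTS = (
    "No rounded corners. Editorial type. Avoid Material Design defaults. "
    "Palette: off-white, ink, signal orange only."
)

def ideate(brief: str, n: int = 8) -> str:
    # stage 1: deliberately divergent concepts, constrained by the brand notes
    return call_model(
        model="ideation-model",
        system=f"Generate {n} deliberately divergent layout concepts. {BRAND_CONSTRAINTS}",
        user=brief,
    )

def refine(chosen_concept: str) -> str:
    # stage 2: a different model tightens one concept into a concrete spec
    return call_model(
        model="refinement-model",
        system=f"Tighten one concept into a concrete layout spec. {BRAND_CONSTRAINTS}",
        user=chosen_concept,
    )
```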

reddit.com
u/theiriali — 5 days ago

How do you keep a character consistent across 10+ AI video clips

Small content studio, two people, producing short-form product ads for a handful of e-commerce clients. We need the same character face and outfit to hold across a full campaign, not just one clip.

Constraints: no dedicated GPU setup, mid-tier budget, clients expect turnaround in 2-3 days per batch.

We tried Kling for the video side and Midjourney for reference frames, but the character drifts noticeably between shots even when we lock the seed and reuse the same image prompt. Also tried Phygital+ briefly since it has Kling and Flux in one place with some consistency tooling, but haven't gone deep enough to know if it solves the multi-clip drift problem.

What we care most about: character stability across clips, reasonable generation time, not needing a separate subscription for every model, and something a non-technical editor can actually run without me babysitting it.

For people doing recurring character-driven video ads, what's actually holding consistency for you at the clip-to-clip level, and does anything break down once you're past 10 clips in a single campaign?

reddit.com
u/theiriali — 5 days ago

been sitting with this question for a while after spending a lot of time with nTop for AM geometry work. most of the conversation in this sub (and honestly most of my own posts) tends to stay in the lattice optimization and printability zone, but the tool clearly does more than that. the medical implant angle is the thing that shifted my thinking a bit - patient-specific implant geometries where the design logic is fully traceable and repeatable is a genuinely different category of problem than just making a bracket lighter. worth noting i haven't seen this validated in depth for regulated medical workflows specifically, so if anyone here has actual experience with that use case i'd be curious what the process actually looks like end to end. the multiphysics stuff though is pretty well documented at this point - being able to drive geometry directly from simulation field data and run multi-objective optimization across manufacturing constraints at the same time, that's not really an AM story, that's just engineering. the field-driven design approach in nTop has been a real thing since at least nTop 4, and the 2025 webinar content makes clear it's still the core of where they're pushing the platform. the F-16 hydraulic clamp case is the one i keep coming back to when people ask if this stuff is real. 2x stiffer, manufacturable on demand, iterated properly through design-optimize-build-test. that's an aerospace structural application, not a 3D printing showcase. i think the tool's reputation as an AM thing is partly just because that's where early adoption concentrated, and the Materialise and Hexagon integrations have kept that story loud. but the field-driven approach seems like it applies anywhere you're dealing with complex geometry and real physics. curious whether anyone here is actually using it for non-AM applications - thermal management, CFD-driven design, tooling, anything like that - and whether the workflow holds up the same way or gets messier once you're not outputting to a printer.

reddit.com
u/theiriali — 6 days ago

been using nTop on and off for a while now, coming at it more from the creative/generative side than hardcore engineering. the implicit modeling stuff holds up really well for complex geometry, lattices, all that, no argument there, it's genuinely one of the stronger tools for that kind of work. but the parts that keep slowing me down are less about the core modeling and more about what happens after. powder removal from internal channels is a constant headache, nTop helps you design for AM but it doesn't magically solve the post-processing reality. and the metrology situation for anything with real geometric complexity still feels like you're guessing until you pair it with dedicated inspection software on the side. that gap hasn't fully closed. the slicer integration has also been hit or miss. the Magics handoff in particular has introduced issues I didn't catch until way later in the process, which is a frustrating place to find problems. curious if that's improved for anyone recently or if it's still workflow-dependent. the learning curve is real too. it's not a set-and-forget solver, you're actively building logic, which is genuinely interesting, but it also means mistakes compound fast if you don't have a solid handle on what you're constructing. been leaning on hybrid workflows lately, roughing out complex geometry in nTop then exporting for detailing elsewhere, which feels a bit clunky but gets the job done. with AI-assisted design becoming more of a baseline skill in AM roles right now, I keep wondering how much of this friction is just workflow immaturity versus something nTop needs to address on the integration side. has anyone found cleaner ways to handle the post-processing piece specifically, or is powder trapping just something you design around from jump?

reddit.com
u/theiriali — 7 days ago

been sitting with this question for a while now. I use generative tools pretty heavily in my workflow, mostly for early ideation and visual direction, and I keep noticing this tension where AI is great at flooding you with options but Design Thinking is supposed to be about slowing down and understanding people first. like the empathy and define phases feel almost at odds with just blasting out 100 image variations and picking the one that vibes. had a project recently where I brought in Midjourney way too early and it basically anchored the whole team to certain visual directions before we'd properly understood the problem. nobody flagged it in the moment but looking back we'd quietly skipped a bunch of steps. the images just made everything feel more resolved than it actually was, which is a weird kind of trap. and honestly this feels more relevant now, not less. there's a real anti-sameness backlash happening in design right now where polished AI output is starting to read as generic, and the differentiator is increasingly the human judgment and strategic thinking you bring to it. so leaning on generative tools too early doesn't just risk anchoring your team, it risks producing work that looks like everyone else's work. my current thinking is that AI earns its place as a co-pilot for execution, not a starting gun for ideation. keep it out of the empathy and define phases entirely, bring it in hard once you actually know what problem you're solving. but curious if others have found a smarter integration, especially on brand or identity projects where the stakes for getting the strategy right are higher. does the speed of generative tools end up compressing your process in ways that hurt the output, or have you figured out how to use it without the shortcuts becoming a crutch?

reddit.com
u/theiriali — 8 days ago

non-devs vibe coding their way into your design system - good or bad

been noticing more PMs and marketers on my team spinning up small tools and internal dashboards using AI coding assistants, Cursor, Lovable, that whole wave, with zero real dev involvement. which is cool in theory, but the output is. a lot. inconsistent spacing, wrong font weights, components that look vaguely right but are clearly just vibes rather than anything pulled from the actual system. reckon the core issue is that the AI has no idea your design system exists unless you explicitly tell it. and most non-devs don't know what to feed it, so you get this Frankenstein UI situation where everything is almost correct but nothing quite matches. close enough to ship, far enough to make a design systems person twitch. what's making this more urgent now is that these tools keep getting better at generating plausible-looking UI fast, which means the volume of "almost right" output is only going up. the gap isn't really the AI's fault, it's a context problem. garbage in, inconsistent UI out. has anyone actually cracked a workflow here? like are people maintaining AI-specific context docs, token references, component usage rules, spacing logic, that non-devs can drop straight into a prompt? or is it less about tooling and more of a governance conversation about who actually gets to ship what and what review looks like when a non-dev is the one holding the cursor? genuinely curious what's working. feels like this is becoming a real design systems problem, not just a one-off annoyance.
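
the closest thing I've seen to a workable "AI context doc" is keeping tokens in one source of truth and rendering a paste-ready block for prompts. rough sketch below, with made-up token names and values, not anyone's real system:

```python
# Sketch: render design tokens + usage rules as a context block that a
# non-dev can paste at the top of any AI coding prompt. Token names,
# values and rules here are invented for illustration only.
import json

DESIGN_TOKENS = {
    "spacing": {"xs": "4px", "sm": "8px", "md": "16px", "lg": "24px"},
    "font_weights": {"body": 400, "heading": 600},
    "radius": {"control": "6px", "card": "12px"},
    "color": {"primary": "#1A73E8", "surface": "#FFFFFF", "text": "#1F2937"},
}

RULES = [
    "Only use spacing values from the spacing scale.",
    "Headings are weight 600, body text is 400; no other weights.",
    "Use the existing Button and Card components; do not restyle them inline.",
]

def prompt_context() -> str:
    """Render tokens and usage rules as one paste-ready block."""
    return (
        "Design system context (follow strictly):\n"
        + json.dumps(DESIGN_TOKENS, indent=2)
        + "\nRules:\n"
        + "\n".join(f"- {r}" for r in RULES)
    )

# print(prompt_context())  # paste the output ahead of a Cursor/Lovable prompt
```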

reddit.com
u/theiriali — 8 days ago

New ones drop every week and I genuinely can't tell what's holding up in real production vs what just demos well. Tried a couple of the newer agent-style tools lately and most fall apart the second the task gets remotely specific.

I'm a small team, London-based, mostly brand and campaign work. Budget isn't the issue so much as reliability and not having to babysit every output. I've poked at Manus, though I'm honestly not sure how widely available or established it actually is at this point, plus been evaluating Phygital+, though I've had trouble finding much solid info on what it actually offers or how it's positioned, but still not sure what's actually earning its place in people's daily stacks.

What are you actually opening every day, and does it hold up when the brief gets complicated?

reddit.com
u/theiriali — 8 days ago

been going down a rabbit hole on using generative design specifically for cooling channel geometry in AM parts. the appeal is obvious - you can get these organic, conformal paths that actually follow the thermal load distribution instead of just drilling straight holes through a block. traditional machining just can't touch that kind of internal complexity. what I keep running into though is the gap between a solver output that looks thermally optimised and something you can actually print without the internal channels collapsing or trapping powder. that gap is closing though - tools like Autodesk Fusion Generative Design and ToffeeX are treating AM constraints as first-class inputs now, not something you bolt on after the fact. ToffeeX in particular is doing physics-driven generation that apparently respects things like minimum channel width and wall thickness at the solve stage, so you're not just doing topology opt and then manually figuring out printability after. Panasonic actually shipped this - used Autodesk generative design for conformal cooling channels in fan blade molds, and got something like 20% reduction in cooling time vs straight channels, manufactured on a LUMEX hybrid machine. that's not a prototype, that's production. with AI data centers now running 10-100kW racks and liquid cooling basically becoming the baseline, the pressure to iterate on conformal channel geometry faster is real and the tooling seems to be catching up. for anyone working on heat exchangers or injection mold tooling with serious thermal management requirements - are you finding the generative-to-AM pipeline actually holds up end to end, or is there still meaningful manual cleanup before you get to a printable file? curious whether the constraint-aware solvers are actually saving you post-processing time or just shifting where the pain is.

reddit.com
u/theiriali — 9 days ago
▲ 11 r/comfyui

Been messing around with Kijai's Prompt Relay setup for LTX 2.3 the past few weeks and honestly the temporal control is pretty impressive for what it is. Assigning prompts to specific beat segments keeps subject continuity way better than I expected, especially on 6GB VRAM where things usually fall apart fast. Short clips in the 5-10 second range are genuinely solid. The 30 second thing is where it gets messy though. I've managed to get there by chaining segments and using extension workflows, but around the 20 second mark you start seeing flicker and motion artifacts that are hard to fix in post. Feels less like a hard limit and more like the model just wasn't trained for that duration, so it kind of loses the plot. The GIMMVFI interpolation helps smooth things out a bit but doesn't fix the underlying weirdness. On the resolution side, native 8K seems like a stretch for LTX specifically. The DyPE node does enable higher res for Flux models without upscaling, but for LTX you're still basically relying on RTX Video Super Resolution to get anywhere near 4K. Calling it "8K" at that point feels a bit generous. Curious if anyone's found a workflow that actually holds up past 20 seconds without the artifacts getting bad, or if the current approach of chaining shorter clips and stitching is just the way to go for now.
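
For the stitching half of the chaining approach, this is roughly what I've been doing: concat the per-segment renders with ffmpeg's concat demuxer and re-encode once at the end. Paths are placeholders, and it assumes ffmpeg is on PATH and the segments share resolution and fps. It doesn't touch the drift problem, just the assembly:

```python
# Sketch: stitch per-segment renders into one clip via ffmpeg's concat demuxer.
# A single re-encode at the end keeps players happy if segments differ slightly.
import subprocess
from pathlib import Path

def stitch(segments: list[str], output: str) -> None:
    # the concat demuxer reads a text file of "file '<path>'" lines
    listfile = Path("segments.txt")
    listfile.write_text("".join(f"file '{Path(s).resolve()}'\n" for s in segments))
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", str(listfile),
         "-c:v", "libx264", "-pix_fmt", "yuv420p", output],
        check=True,
    )

# stitch(["seg_00.mp4", "seg_01.mp4", "seg_02.mp4"], "full_30s.mp4")
```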

reddit.com
u/theiriali — 9 days ago

been thinking about this a lot lately. I'm at a point where I'm doing more creative direction and stakeholder stuff than actual hands-on design work, and I'm trying to figure out if I'm already in the transition or just drifting into it accidentally. curious whether people who've made this shift did it deliberately or kind of stumbled into it. the thing I keep running into is the identity piece. like I genuinely love making things, and there's a real grief in stepping back from that to spend more time in strategy decks and feedback sessions. I've been trying to reframe it as designing at a different scale - shaping how a whole team works or how a project gets scoped - but that's easier said than done when you're used to measuring your day by what you shipped. anyone else find that part hard to sit with? one thing that's made this weirder recently is AI. like, a lot of the execution work I used to own is getting drafted faster now - concepts, mockups, copy directions - which in theory frees you up for strategy. but it also blurs the line even more between "I made this" and "I directed this." if anything it's accelerating the identity question, not resolving it. I've heard the portfolio angle a lot - shift it away from final outputs and toward process, decisions, team direction. that makes sense to me. but I'm also wondering how much of this transition actually happens through formal moves vs just taking on more strategic work and hoping it gets noticed. feels like right now there's also more pressure to demonstrate long-term brand thinking and consistency rather than just shipping cool stuff - which maybe plays in favor of the strategic framing if you can articulate it well. what actually moved the needle for you?

reddit.com
u/theiriali — 10 days ago

so I've been spending a fair bit of time with nTop lately, coming at it more from the creative/generative side than pure engineering. the implicit modeling approach is genuinely interesting - the whole pitch around handling arbitrary geometric complexity without the crashes you'd normally get in traditional CAD actually seems to hold up for the most part. the medical implant stuff is a good proof point, generating patient-specific bone plates automatically with full traceability is not a small thing. what I keep coming back to though is the learning curve. it's not a black box tool where you just set some loads and wait for a result - you're actively defining the logic and constraints yourself, which gives you more control but also means you need to know what you're doing. the embedded FEA stuff is useful for catching problems early but it doesn't really save you if your constraint setup is off from the start. curious if anyone here has pushed it on genuinely weird geometries, like not the standard bracket optimization examples. how far does the 'handles arbitrary complexity' claim actually go before it starts breaking down or requiring serious workarounds?

reddit.com
u/theiriali — 10 days ago

been running into this a lot lately where Fusion 360 spits out something genuinely beautiful and well-optimized, then the second I start thinking about slicing it the whole thing becomes a support nightmare. the T-Spline geometry it generates is great for organic form but standard 3-axis FDM just wasn't built for those flowing, non-planar surfaces. I keep having to either pile on so much support that the weight savings feel completely pointless, or go back and redesign chunks of the geometry by hand, which kind of defeats the whole purpose of running generative design in the first place. 5-axis is obviously the cleaner answer for non-planar surfaces but that's still a pretty big jump for most desktop setups. one thing I've been experimenting with is going back into the T-Spline output before export and adjusting strut thickness and surface continuity so the geometry is at least more FDM-friendly without gutting the optimization. build orientation is doing a lot of heavy lifting too, more than I expected. also seeing some interesting stuff around using data-driven approaches for topology-optimized infill patterns that try to balance strength and porosity in a smarter way than just grid or gyroid, which feels relevant here. curious what others are actually doing in practice though. are you editing the GD output directly, leaning on slicer settings, or just accepting that some of these forms need a different process entirely to get off the bed cleanly?

reddit.com
u/theiriali — 12 days ago

Anthropic just dropped Claude Opus 4.7 and it's clearly aimed at boosting production workflows, software engineering, agentic tasks, and multimodal work. Worth discussing how it actually stacks up against Runway for people doing real creative work.

I'm an art director running campaigns for 3-4 brands at a time, mid-sized budgets, no dedicated dev resource on the team.

Runway's strength is obviously video generation and consistency across clips, Gen-4 handles motion in ways Opus 4.7 almost certainly won't touch at launch. The weakness is that each output still needs manual chaining if you want a repeatable branded pipeline. Opus 4.7 does show improved capabilities around interfaces, docs, and design-to-code conversion, but it's positioned as a general-purpose model built around software engineering and agentic workflows rather than a dedicated design tool, and the production-scale workflow story for static creative work isn't fully there yet.

My decision criteria: character/style consistency across formats, how fast I can build a reusable pipeline vs. one-off outputs, and how much prompt babysitting is involved per asset.

I've been testing a few multi-model platforms alongside both; tools like Phygital+ let you chain models including Runway outputs into repeatable workflows, which helps when you're juggling formats. But the question is whether Opus 4.7's improved visual and multimodal reasoning closes that gap for static-first teams who don't need video.

reddit.com
u/theiriali — 12 days ago