r/aitubers

What AI video tool actually feels beginner-friendly but still usable long term?

I’m mainly looking for something simple: text or image in, short usable video out. What AI video tools are you genuinely using in your workflow right now?

reddit.com
u/Rough--Employment — 7 hours ago

Which TTS are Sleep Content Channels using??

Hey everyone, I’ve been researching long-form sleep/relaxation channels like Sleepy Science Channel and similar creators.

I’m really curious which TTS models/platforms these channels are actually using for their narration.

What I don’t understand is this: how can these channels create a video every day with 2 hours of AI voices without being bottlenecked by credit usage?

I’m currently using ElevenLabs, but with my plan it feels hard to scale to daily 2-hour uploads.

Are they using:

  • a different TTS provider
  • API pricing instead of normal subscriptions
  • custom voice clones
  • local/open-source models
  • some other workflow I’m missing

Would really appreciate any insights from people who’ve worked on these kinds of channels.

reddit.com
u/Due_Ring2543 — 9 hours ago

Which AI video tool is best for an artist on a budget?

I worked in my field for years until I got laid off and things went south. I ended up doing whatever I could to pay the bills like flipping burgers at McDonald’s, stocking shelves, and even washing cars. During that time I got back into drawing. It was just an old hobby and even though a former coworker thought I could go pro I knew my skills were not at that level yet. Eventually I saw that AI channels were trending online and decided to give it a shot.

I started with AI music using Suno, but that did not go well at all. My taste is a bit niche and people in the comments were really trashing my stuff, which was pretty demoralizing. I decided to change my approach and used my own sketches and scripts to make videos. I was using Sora at first, but lately it feels like they might be shutting down their servers entirely, because the videos it generates have started looking very distorted and strange. I have been researching alternative platforms on Reddit and noticed that Kling and Dreamina Seedance 2.0 are hotly discussed models lately. Considering my specific needs, which one do you think is the best choice for me in terms of both features and price? Or are there other better options that I should consider instead?

reddit.com

Disclose altered or synthetic content

Hey guys, what are your thoughts on this?

I asked it whether I could skip checking the "disclose altered content" box in the settings if everything in the video was clearly fantasy, even though there was one scene I generated that looked like realistic people.

"The policy focuses on whether the content could mislead a viewer into thinking a real-world event occurred. If a specific scene, like characters in a coffee shop, appears realistic enough that it could be mistaken for actual footage of real people, it would generally fall under the requirement to mark the altered content box...
For creators who consistently fail to disclose this information, YouTube may apply penalties. These penalties can include the removal of the content or suspension from the YouTube Partner Program. While disclosing content as altered or synthetic does not impact its eligibility to earn money or its reach, repeated failure to provide this transparency when required by policy can lead to more serious enforcement actions." - Youtube Studio

Do you guys think the mass bans have to do with this or not?

reddit.com
u/-Takezo — 2 hours ago

YT MASS DELETING CHANNELS AND I AM SCARED

So I already have a working channel on anime where I use my real voice. But now I want to expand, so I made an English channel with an AI voiceover, though I do all the editing and scripting myself and I even edited the voice a little. Seeing how many people's channels are getting demonetized, I am scared. If it goes wrong, my anime channel could get demonetized too because of the circumvention policy. If anyone has an opinion, please reply.

reddit.com
u/Love-me1320 — 1 day ago

Average Stats for New Channels?

First question: Is there any repository that tracks average stats for a new channel?

Second question: Is there somewhere I can look up what sort of metrics videos typically need to be hitting in order to get X amount of views?

I started a shorts channel three days ago. I think it's performing pretty well, but I'd love to figure out what the baseline is, as well as predict how my videos might perform in the coming days.

Thank you!

reddit.com
u/PrincessErieanna — 9 hours ago

AI MARKETING THROUGH INSTAGRAM

I am wondering what the workflow could be for free or near-free video generation.

Purpose:

I'll be posting videos on Instagram to drive engagement.

I have an edtech company, and it needs students to buy the courses.

So yeah, in short, I'll be using Instagram as a lead generation platform.

Any workflow suggestions?

reddit.com
u/AccomplishedQuit8257 — 21 hours ago

Anyone using AI tools to translate videos for new audiences?

I have been looking into faster ways to repurpose videos for different languages without manually re-editing subtitles and voiceovers every time. Some newer AI tools seem to handle subtitles, dubbing, and timing in one workflow, which sounds much easier than doing everything separately.

For creators trying to reach viewers in other regions, has anyone here actually used an AI video translator in real projects? Curious which tools gave natural results and which ones sounded robotic.

reddit.com
u/LuckyTreat8962 — 1 day ago

anyone using AI vocal synthesis for YouTube intros?

I’ve been testing a few AI audio tools recently for YouTube content production, mainly for short intro hooks and recurring audio branding elements.

I spent some time using Suno. It’s very fast and basically one-click generation. It’s a fully generative, end-to-end song creation tool, which makes it very easy to turn an idea into a complete musical piece including vocals, melody, and arrangement.

However, its main limitation isn’t whether it can generate music, but rather the lack of control. Things like vocal articulation, timing of phrases, emotional intensity, and precise alignment with video cuts are hard to fine-tune. In practice, you often have to regenerate multiple times and rely on trial and error.

It also heavily depends on prompt quality for stylistic consistency. The same prompt can produce quite different results, so it’s more suitable for ideation sketches or quick demos rather than precise audio design.

I also tried ACE Studio, which is more aligned with a vocal synthesis / virtual singer workflow rather than full song generation. It uses MIDI and lyrics to drive vocal performance, which gives you much more control over timing and expression.

The tradeoff is that the workflow is more complex, closer to a lightweight DAW-style production process.

Curious if anyone here is actually using AI vocal synthesis or AI music tools for YouTube content. Any better recommendations?

reddit.com
u/Susan_656 — 20 hours ago

I need help finding ai apps for thumbnail and video editing

I'm a new YouTuber and I can't make a thumbnail or edit a video. I've been looking for apps, but all of them make me pay to download the thumbnail, so can you guys please help me?

reddit.com
u/Public-Detail2041 — 1 day ago

Are there actually any free AI tools for making Shorts?

Hey guys, I’m at work right now and started watching a few videos about people using AI to make Shorts and supposedly making some passive income from it.

Didn’t really think much of it at first, but then one of my friends told me he actually made some money this week doing it, so now I’m kinda curious.

I looked into it a bit and it seems like most of the tools people use (like for those “fruit love island” type videos or ranking clips) all cost money.

Are there any actually free tools that work for this? Like either AI video generators or something that can auto edit clips into Shorts for a niche?

Or is it basically one of those things where you have to pay if you want it to work?

Appreciate any help.

reddit.com

How I hit 100k+ views and 65% retention with a $15/month production budget

Yo guys,

I’ve been grinding in the "cinematic noir/philosophical" niche on YouTube, and I finally found a workflow that actually gets results. My last few shorts hit 100k+ views with crazy high retention (65% average).

The best part? My production cost is basically just $15/month. I wanted to drop my exact stack for anyone trying to get into this without spending a fortune.

  1. Visuals (The "Atmosphere")

You don’t need to pay for stock footage like Artgrid.

The Hack: if I can't find a dark, urban clip on Pexels/Pixabay, I use Whisk. It's insane: you can generate unlimited cinematic images for free.

  2. Scripting & The "Mental Slap"

I use a technique I call "the mental slap". You have to start with a harsh truth within the first 3 seconds. If you don't hook them there, you're dead.

  3. The Voice (The Key to Retention)

Robot voices kill the vibe. I needed a professional voice, but my budget said no lol.

I found this Telegram bot u/EasySpeech_bot that’s been a life saver

It has a voice called "The Oracle" that is perfect for that deep, noir aesthetic.

It’s $6.99 for unlimited generations. I just dump my .txt files there.

  4. Captions (Crucial)

Don't use auto-captions. I use Subtitle Edit to make custom .ass files. High contrast, clean fonts. It makes a huge difference for people watching on mute.
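For anyone who hasn't looked inside an .ass file before, here is a minimal sketch of generating one in Python. The style values and caption text are placeholder choices of mine, and the style block is abridged (real files usually carry the full V4+ field set); Subtitle Edit handles all of this in its GUI.

```python
# Minimal .ass subtitle sketch. In the ASS format the "Format:" lines
# define the field order, so an abridged style block like this parses,
# though real files usually list every V4+ field. Values are placeholders.
ASS_HEADER = """[Script Info]
ScriptType: v4.00+
PlayResX: 1080
PlayResY: 1920

[V4+ Styles]
Format: Name, Fontname, Fontsize, PrimaryColour, OutlineColour, Bold, Outline, Alignment
Style: Default,Arial Black,72,&H00FFFFFF,&H00000000,1,4,2

[Events]
Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
"""

def dialogue(start: str, end: str, text: str) -> str:
    # Timestamps are H:MM:SS.cc (centiseconds).
    return f"Dialogue: 0,{start},{end},Default,,0,0,0,,{text}\n"

subs = ASS_HEADER + dialogue("0:00:00.00", "0:00:03.00", "Example hook line")
```

Append one `dialogue()` line per caption and save the result with an `.ass` extension; most editors and players will pick up the high-contrast style without further work.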

Anyway, that’s the gist of it. If you’re struggling with your views, stop focusing on the algorithm and start focusing on the vibe.

Happy to answer any questions about the noir style. Peace! ✌️

reddit.com
u/Elvin_kg — 1 day ago

How Seedance 2.0 restructured my AI tuber content pipeline and what I wish I knew earlier

Been creating AI tuber content for about 14 months. Started on Runway, moved through Pika and a long stretch on Kling 2.1, and recently gave Seedance 2.0 a proper deep dive after initially dismissing it. Want to share what actually changed for me and what the workflow looks like now.

The first thing that surprised me was how differently Seedance responds to prompting compared to Kling. With Kling I had a whole library of cinematic prompt language. Volumetric, shallow depth of field, film grain, golden hour. These worked. When I applied the same vocabulary to Seedance I got mediocre results. Took me a few days to figure out that Seedance responds much better to what I call behavioral prompts. You describe what the subject is doing and feeling, not what the frame looks like. "A young woman slowly turns toward the camera, expression shifting from distracted to surprised" outperforms "cinematic medium shot, natural lighting, shallow focus" in Seedance by a significant margin. Once I adjusted my prompt library to this style, quality jumped immediately.
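A tiny illustration of the shift from cinematic to behavioral prompting. The helper and its parameters are purely mine for illustration, not anything Seedance actually exposes.

```python
# "Behavioral" prompt builder: describe what the subject does and feels,
# not what the frame looks like. Illustrative only.
def behavioral_prompt(subject: str, action: str,
                      emotion_from: str, emotion_to: str) -> str:
    return (f"{subject} {action}, expression shifting "
            f"from {emotion_from} to {emotion_to}")

prompt = behavioral_prompt("A young woman", "slowly turns toward the camera",
                           "distracted", "surprised")
# → "A young woman slowly turns toward the camera, expression shifting from distracted to surprised"
```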

Second shift: shot length. For AI tuber content specifically, where you need a recognizable recurring host, anything over six seconds starts introducing visible drift. Eyes behave differently. Hair movement loses its logic. For a 60 second video I now generate roughly 12 to 15 separate clips at 4 to 5 seconds each and cut between them. It is more work in the edit but the output looks substantially more intentional and less artificial.
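The clip math above as a quick sketch (the helper is my own shorthand, nothing model-specific):

```python
import math

def plan_clips(total_seconds: float, clip_seconds: float) -> int:
    """Number of short clips needed to cover a target runtime."""
    return math.ceil(total_seconds / clip_seconds)

# A 60 second video cut from 4-5 second clips:
plan_clips(60, 5)  # → 12 clips at the long end
plan_clips(60, 4)  # → 15 clips at the short end
```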

Third: character consistency. Seedance 2.0 is genuinely better than Kling 2.1 at maintaining a character across clips when you give it a clean reference image. What works for me is a tight neutral expression headshot and a 45 degree angle shot as anchor references. I generate both at the start of any project and use them consistently. Consistency holds well for 4 to 6 second clips. Beyond that it needs more manual correction in post.

On Seedance vs Kling 3.0 specifically: Seedance handles individual human subjects better. Kling 3.0 handles complex scenes better. If your AI tuber content is one or two hosts talking or reacting, Seedance is the better tool right now. If you are doing episodic content with multiple characters and environments, Kling 3.0 still has an edge on scene coherence.

On the audio side: ElevenLabs for voice, Suno for music. Nothing exotic there.

The workflow change that saved me the most time overall was consolidating my script breakdown and generation queue into a single place instead of jumping between a doc, a prompt spreadsheet, and the model interface. I landed on using Atlabs for this part of the pipeline. It handles the script to segment breakdown and lets me queue generations without constant context switching. For solo creators doing volume AI tuber work, that kind of consolidation matters more than I expected.

If you are on the fence about Seedance 2.0 for AI tuber content specifically: yes, with caveats. Invest the first week rebuilding your prompt library around behavioral language instead of translating Kling prompts directly. That single change made the biggest difference for me.

One last thing: do not fight the short clip instinct. The community norm of longer generations to get more out of a credit is actively hurting output quality for character work. Generate shorter, cut more, and your audience will not clock the seams the way they clock drift on a 10 second clip.

Happy to go deeper on prompt structure or character consistency workflows if anyone wants specifics.

reddit.com
u/siddomaxx — 1 day ago

I have been making an episodic AI series for 4 months and here is what actually keeps viewers coming back

Four months ago I published the first episode of my AI animated series. It was rough. The character looked different in every scene, the audio timing was off, and the story felt like five unrelated scenes stitched together. I got maybe 200 views and two comments, one of which was asking if I was okay.

Now I am sitting at episode 9, roughly 2,800 subscribers on YouTube, and I get regular comments asking when the next episode drops. That feels surreal to me because I still use mostly low-cost tools and I work maybe 10 to 15 hours a week on it.

I want to share what actually moved the needle because I see a lot of posts here that focus on which model just dropped and which one is the hottest right now. Yes, model quality matters. But it is maybe 20 percent of what makes an episodic series actually work. The other 80 percent is stuff most people skip entirely.

The single biggest thing was creating a character bible before I generated a single frame. I documented my main character in obsessive detail. Color codes, clothing descriptions, facial structure, the exact prompt language that reliably produced her look. When you are generating across multiple sessions and multiple tools, your character will drift badly unless you have this locked down. I use a reference sheet with tested prompts and I always run any new model through that reference before using it for an actual episode.

The second thing that changed everything was treating the script like it actually mattered. Early on I would generate visuals first and then write narration around whatever looked interesting. The result felt chaotic and disconnected. Now I write a proper scene breakdown before touching any generation tool, including emotional beats, pacing notes, and what each shot needs to do for the story. I generate visuals to serve that script. Sounds obvious but most people I see here are doing it backwards and wondering why their episodes feel like random clips.

Third thing is audio. I cannot overstate this. A well-mixed voiceover and a score that fits will carry mediocre visuals. Bad audio will destroy beautiful visuals. I started spending more time on voice pacing, ambient sound layering, and making sure the music actually tracked the emotional arc of each scene. My retention numbers jumped more from audio work than from any visual upgrade I made in those four months.

On the model side, the landscape has shifted a lot in the past few weeks. Veo 3.1 is getting serious attention for longer cinematic shots and I think it deserves it. Seedance 2.0 is also getting a lot of love here and the motion quality on character close-ups is noticeably better than what we had six months ago. I have been running a multi-model approach lately, testing different tools on the same prompt and picking the best output per scene rather than committing to one model for a whole episode.

For that kind of cross-model comparison, I have been using Atlabs over the past few weeks. It lets me run the same prompt through Kling, Seedance, and Veo from one place and compare results without juggling multiple logins. Not the only way to do it but it has streamlined the evaluation step and saved real time during production.

The thing I most want to push back on is the idea that the best-looking series win. They do not. The channels that are growing consistently right now are the ones that figured out how to create emotional investment across episodes. Mystery, stakes, character growth, something to come back for. The AI tools are just the brush. You still have to know what you are trying to paint.

If you are starting out, episode one does not need to be great. Episode nine can be. Just commit to improving one specific thing per episode and you will get there faster than you think.

reddit.com
u/siddomaxx — 2 days ago

I cannot come up with good name for my Youtube Channel

Hi everyone, I have been thinking of a good name for my YouTube channel for about 3 weeks. I want to livestream something like AI Sponge. In the future I would like to plan more shows. I got inspiration from an old Czech animation studio called aiF studios. I want the name to end with "Studios".

PS: Sorry if I made mistakes in the text, but my English is not the best.

reddit.com
u/Macko_SK — 20 hours ago

Any Youtube Automation Services?

Does anyone know any YouTube automation services out there? These are for people getting started: they help you improve your channel, or they can build a YouTube channel for you.

I already have some services in mind. Just wondering if anyone knows of any more.

reddit.com
u/Purple_Ride5676 — 1 day ago

Is someone willing to pay for my view max subscription?

I really need help, guys. I really want to blow up, but I am not financially stable. I literally have no money, but I really want to make it up there, and I'm just asking if anybody would be willing to help me. I am so done right now. I can't pay for this, and I'm so down bad. I have the ideas, and with the right quality I can really blow up.

reddit.com
u/InterviewNew1098 — 1 day ago

My full faceless YouTube pipeline, $0 in monthly subscriptions. Including how I handle music (the part that was killing my margins).

Been running a faceless YouTube workflow for about 6 months and finally got my monthly tool spend to zero. Not because I cheaped out, but because I moved every recurring cost onto local tools that run on my Mac. Sharing the actual pipeline because the "here is my AI workflow" posts I read a year ago saved me a lot of learning time, and I want to pay that forward.

Context on the channel. Mid-five-figure subscriber faceless channel, explainer-style videos in the 8-15 minute range. AI scripting, AI voiceover, AI visuals, heavy editing. The kind of thing that eats through tool subscriptions if you are not careful.

The pipeline, step by step, with tools and costs:

  1. Script generation. Claude and ChatGPT, alternating based on what the video needs. Paying for both at Pro tier. This is the one cost I still pay because it is where the actual quality ceiling is.
  2. Script polish and fact-check. Manual, no tool. The AI first drafts are never publishable without heavy editing. This takes me 2 to 4 hours per script.
  3. Voiceover generation. I used to pay ElevenLabs at $99 a month for the Creator tier. At 8-12 videos a month with long scripts, I was blowing through character limits constantly and upgrading.

I moved this to Murmur, a local TTS app that runs on Apple Silicon. Fully on-device, no monthly cost, no per-character pricing. Voice quality is behind ElevenLabs v3 for character voice work, but for faceless channel narration with a consistent single voice it is more than good enough. Saved me about $1200 a year. Full disclosure: I also built Murmur, which is how I ended up going down this local-first path in the first place.

  4. Visual generation. Mix of stock (Pexels, free), Midjourney for specific scenes, and Runway for motion. Midjourney at $30 a month is the one visual cost I cannot replace with anything local yet.

  5. B-roll. AI-generated plus stock plus screen recordings. No subscription, all free or one-time.

  6. Background music. This was the part that was killing my margins and where the pipeline got most interesting.

The music problem specifically. A 10 minute video needs maybe 3-6 different music cues. Intro, main sections, transitions, outro, occasional dramatic moments. For a while I was on Epidemic Sound at $15 a month. It was fine but the tracks were recognizable across channels in my niche and I kept running into the same cuts other creators were using.

Tried Suno. Great quality but at 10 tracks per video times 10 videos per month, I was burning through credits in 2 weeks. Their pricing does not fit a high-volume background music workflow.

What I moved to: LoopMaker, another local app that generates music on Mac. One-time $49 purchase, unlimited generations, fully offline. Built on ACE-Step 1.5 which is an open-source music model that benchmarks between Suno v4.5 and v5 in quality. I generate 3 to 5 variations of each cue I need, pick the best one, drop it in the edit. Done.
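To make the volume argument concrete, here is the back-of-envelope math using only figures from this post (the subscription price is the Epidemic Sound rate mentioned above; this ignores any per-track cloud credits, which only make the one-time option look better):

```python
# Rough break-even math for one-time vs. subscription background music.
cues_per_video = 10       # ~10 music cues per video (from the post)
videos_per_month = 10
tracks_per_month = cues_per_video * videos_per_month  # 100 generations/month

one_time_price = 49.0     # LoopMaker one-time purchase
subscription = 15.0       # e.g. the $15/month Epidemic Sound plan

break_even_months = one_time_price / subscription     # ≈ 3.3 months
```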

Also my app, same disclosure applies. I built both Murmur and LoopMaker because the subscription economics for AI tools stop making sense past a certain volume and I wanted tools where the unit economics were different.

What LoopMaker handles well for video work:

  • Cinematic and dramatic backgrounds for explainer content
  • Lo-fi and chill beds under voiceover
  • Upbeat electronic for intros and outros
  • Ambient texture for mood transitions
  • Genre-matched tracks for themed videos (retro synthwave for 80s content, orchestral for history content, etc.)

Where I still use other tools for music:

  • If a video is specifically about a song or genre, I still use Suno because its vocal quality on polished tracks is higher
  • For the rare video that needs something specific I cannot prompt well, Epidemic Sound has a one-off per-track pricing that I use maybe once a month
  7. Editing. Final Cut Pro, one-time purchase. Not touching Premiere's subscription.
  8. Thumbnails. Photoshop plus Midjourney (already paid), manual arrangement.
  9. Upload and scheduling. YouTube Studio, free.

Total monthly tool cost for this pipeline after the switch: Claude Pro, ChatGPT Plus, Midjourney Standard. Roughly $80 a month total. Used to be over $250 a month before I consolidated.

The one-time purchases: Final Cut Pro, Murmur, LoopMaker. Roughly $400 total spread across a year of buying them. Paid back in a few months from saved subscription costs.
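The payback claim checks out against the post's own numbers:

```python
# Payback period for the one-time purchases, using figures from the post.
monthly_before = 250.0  # subscription stack before consolidating
monthly_after = 80.0    # Claude Pro + ChatGPT Plus + Midjourney Standard
one_time_total = 400.0  # Final Cut Pro + Murmur + LoopMaker combined

monthly_savings = monthly_before - monthly_after   # $170/month saved
payback_months = one_time_total / monthly_savings  # ≈ 2.4 months
```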

The honest caveat. Moving to local tools trades monthly cost for upfront effort. You need an Apple Silicon Mac (M1 or newer). The learning curve is real, and some workflows are less polished than paid cloud tools. If you are making your first 10 videos and figuring things out, the subscriptions are worth it for the lower friction. At volume, the math flips.

Links for the tools I mentioned that were not obvious:

Murmur: https://www.murmurtts.com
LoopMaker: https://tarun-yadav.com/loopmaker

Happy to go deeper on any specific part of the pipeline. Also curious what others here are doing to keep costs sane at volume, and what the current state of local AI visual gen looks like. That is the one piece I have not been able to move off cloud yet.

u/tarunyadav9761 — 2 days ago

The Complete AI Stack for TikTok Creators

It feels like creators are moving from using individual tools to building workflows.

Instead of: writing → filming → editing

People are using setups like:

ChatGPT → scripts
AI voice → narration
video tools → content
repurposing tools → clips

It’s interesting because it changes how much content you can produce.

Curious if anyone here is actually using a full AI workflow?

Or are most people still just using one or two tools?

reddit.com
u/creator_stack — 1 day ago

Has anyone internationalized their content on a separate channel?

I'm curious if anyone's built out channels that post the exact same video, just with a translated voice, similar to how Mr. Beast does it.

Was it successful/worth doing?

reddit.com
u/Bonteq — 2 days ago