
Create automated AI music videos with my full LTX 2.3 workflow for ComfyUI. FREE and LOCAL!
Sample videos created using my workflow:
https://reddit.com/link/1t7ohql/video/u2ngig8bz40h1/player
https://reddit.com/link/1t7ohql/video/dleorv0u100h1/player
In this walkthrough, I show how the workflow takes a song, analyzes the timing, creates scene prompts from lyrics, and generates a finished music video using LTX 2.3.
The walkthrough video is too long to share here, so please watch it on YouTube: HERE
The workflow is split into two parts.
🎵 Workflow 1 handles audio upload, beat detection, scene timing, lyrics, style and theme, story idea, subjects and locations, and prompt generation.
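The workflow's actual beat-detection nodes are in the download, but the underlying idea is simple: beats show up as short bursts of energy against the local background of the song. Here's a minimal, stdlib-only sketch of that idea (the window size, threshold, and synthetic test signal are all illustrative, not the workflow's real values or code):

```python
import math

def detect_beats(samples, sample_rate, window=1024, threshold=1.5):
    """Flag windows whose RMS energy jumps well above the recent average.

    A toy energy-based onset detector. Real workflows use proper beat
    trackers, but the concept is the same: compare each short window's
    energy against the running background level.
    """
    energies = []
    for start in range(0, len(samples) - window, window):
        chunk = samples[start:start + window]
        energies.append(math.sqrt(sum(s * s for s in chunk) / window))

    beats = []
    for i, e in enumerate(energies):
        # Compare against the average of the preceding ~20 windows.
        history = energies[max(0, i - 20):i]
        avg = sum(history) / len(history) if history else 0.0
        if avg and e > threshold * avg:
            beats.append(i * window / sample_rate)  # beat time in seconds
    return beats

# Synthetic 2-second signal at 8 kHz: quiet tone with loud clicks every 0.5 s.
rate = 8000
samples = [0.01 * math.sin(i * 0.3) for i in range(rate * 2)]
for beat_start in range(0, len(samples), rate // 2):
    for j in range(beat_start, min(beat_start + 200, len(samples))):
        samples[j] += 0.8

beats = detect_beats(samples, rate)
```

Once beat times like these exist, cutting scenes on (or near) them is what makes the video feel synced to the song.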
🎬 Workflow 2 handles the actual video generation: an image-to-video workflow using Z-Image Turbo and LTX 2.3, and a text-to-video workflow using LTX with LoRA support. Both include advanced prompt controls, scene generation, Remake Mode, and final video stitching.
✨ This workflow is designed to reduce manual setup time while still giving you control over style, characters, camera motion, timing, seeds, LoRAs, and final edits.
💡 For the best results, I recommend starting with the default settings first, then experimenting with LoRAs, seeds, advanced settings, and Remake Mode as you get more comfortable.
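The final stitching step leans on FFmpeg, and conceptually it's the concat demuxer joining the scene clips plus a mux of the original song back in. A hedged sketch of what that command looks like (the file names, paths, and flag choices here are placeholders for illustration, not what the workflow actually emits):

```python
import subprocess
import tempfile
from pathlib import Path

def build_stitch_command(scene_clips, audio_track, output, list_file):
    """Write an ffmpeg concat list and return the command that would
    join the scene clips and mux the song back in.

    All paths are illustrative; the real workflow manages its own
    temp files and naming inside ComfyUI.
    """
    # The concat demuxer reads one "file '<path>'" line per clip.
    list_file.write_text(
        "\n".join(f"file '{clip}'" for clip in scene_clips) + "\n"
    )
    return [
        "ffmpeg", "-y",
        "-f", "concat", "-safe", "0", "-i", str(list_file),  # joined video scenes
        "-i", str(audio_track),                              # original song
        "-map", "0:v", "-map", "1:a",                        # take video + song audio
        "-c:v", "copy", "-shortest",                         # no re-encode; stop at shorter stream
        str(output),
    ]

# Example -- builds the command only; nothing is executed here.
clips = ["scene_001.mp4", "scene_002.mp4", "scene_003.mp4"]
with tempfile.TemporaryDirectory() as tmp:
    lst = Path(tmp) / "clips.txt"
    cmd = build_stitch_command(clips, "song.mp3", "music_video.mp4", lst)
    # subprocess.run(cmd, check=True)  # uncomment with real files + ffmpeg installed
```

Using `-c:v copy` avoids re-encoding the generated clips, which keeps stitching fast and lossless as long as every scene shares the same codec and resolution.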
⚙️ Requirements:
ComfyUI
LTX 2.3 models
Z-Image Turbo model
FFmpeg installed for audio stitching
My vrgamedevgirl custom nodes
Impact Pack custom node for auto-queue
llama-cpp-python
At least 16 GB of VRAM (12 GB "might" work, but I have not tested it)
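llama-cpp-python is on the requirements list because scene prompts are generated locally with an LLM. I can't show the workflow's actual prompting here, but the general pattern looks something like this hypothetical sketch (the template wording, model path, and parameters are all assumptions, not the workflow's real ones):

```python
def build_scene_request(lyric_line, style, start, end):
    """Assemble an instruction for a local LLM.

    The template text is made up for illustration -- the real workflow
    ships its own prompt templates.
    """
    return (
        f"Write a single video-generation prompt in a {style} style "
        f"for a scene lasting {end - start:.1f}s that visualizes the "
        f"lyric: \"{lyric_line}\". Describe subject, location, and "
        f"camera motion in one sentence."
    )

request = build_scene_request("walking through neon rain", "cyberpunk", 12.0, 17.5)

# With llama-cpp-python installed and a local GGUF model downloaded, the
# call would look roughly like this (path and params are placeholders):
#
# from llama_cpp import Llama
# llm = Llama(model_path="models/some-model.gguf", n_ctx=2048)
# out = llm(request, max_tokens=120, temperature=0.7)
# scene_prompt = out["choices"][0]["text"].strip()
```

Because the model runs through llama-cpp-python, the whole prompt-generation step stays local, which is what keeps the pipeline free and offline.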
💬 Join my Discord for support, updates, beta features, and to share your work:
Discord Server
⬇️ Download my custom nodes and workflows:
Custom nodes: GitHub custom nodes, or install via ComfyUI Manager
Workflows are in here: Workflows
Hugging Face: HERE
#ComfyUI #LTX #AIvideo #AIMusicVideo #TextToVideo #ImageToVideo #AIWorkflow #GenerativeAI