u/3DisMzAnoMalEE

Looking for authors and storytellers that wish to elevate their craft using AI

Hello, makers! Sorry for the length; short version: I'm looking for authors and storytellers to test and give feedback on my new platform, 3DismzCUI.com. It's free while in alpha/beta, you own everything you generate, and all I'm asking in return is feedback and thoughts on how to improve.

I'm a self-published sci-fi author (https://www.audible.com/pd/Voodoo-Child-Audiobook/B0C15SQTLN , https://www.barnesandnoble.com/w/voodoo-child-rc-kirkland/1140776480?ean=9798985459234 ) who had a vision to take my novel from Audible and the printed paperback and hardcover formats to the screen: visual web novels, comics, and the like. But then I thought, why not take it further? Not just images, not just clips, but a complete end-to-end cinematic pipeline. I subscribed to a few providers over about a year and paid a LOT of money to see what could be done, but nothing existed the way I thought it should, a 'just pick one and go' type of thing. Several had some elements, a few had most, but none of the platforms offered a "Here is my novel, give me the scene beats, push these to first and last frames, add the dialogue and audio, edit the videos, and join them into My Movie" workflow.

So I built one. 

3DismZ AI Pipeline Studio takes a chapter of prose, breaks it into detailed, controlled scene beats using Claude AI, generates character-consistent frames, produces lip-synced dialogue (in my Audible narrator's own voice), and assembles the whole thing into a finished movie — automatically. 
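To make the stages concrete, here is a minimal, purely illustrative sketch of that prose-to-movie flow in Python. All function names and file names are hypothetical (the platform's real API is not shown here), and the beat-splitting is a naive stand-in for the Claude AI step:

```python
# Hypothetical sketch of the pipeline stages: prose -> scene beats -> clips -> assembly.
# Names and frame filenames are illustrative, not the platform's actual API.

def chapter_to_beats(prose: str) -> list[dict]:
    """Naively split a chapter into scene beats, one per paragraph.
    (The real pipeline uses Claude AI for detailed, controlled beats.)"""
    paragraphs = [p.strip() for p in prose.split("\n\n") if p.strip()]
    return [{"beat": i + 1, "text": p} for i, p in enumerate(paragraphs)]

def beats_to_clips(beats: list[dict]) -> list[dict]:
    """Attach placeholder first/last frames and dialogue text to each beat."""
    return [
        {
            "beat": b["beat"],
            "first_frame": f"frame_{b['beat']:03d}_a.png",
            "last_frame": f"frame_{b['beat']:03d}_b.png",
            "dialogue": b["text"],
        }
        for b in beats
    ]

def assemble_movie(clips: list[dict]) -> list[str]:
    """Order the generated clips into a final edit list."""
    return [c["first_frame"] for c in sorted(clips, key=lambda c: c["beat"])]

chapter = "The ship rose.\n\nDawn broke over the colony."
beats = chapter_to_beats(chapter)
clips = beats_to_clips(beats)
edit_list = assemble_movie(clips)
```

The point of the sketch is the shape of the data handoff between stages, not any particular model or renderer.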

AI is a phenomenal tool, but it's also locked behind thousands of dollars of GPU hardware most creators will never own. 3DismZ AI Pipeline Console runs the entire pipeline on cloud GPU infrastructure: image generation, video synthesis, speech-to-video lip sync, multi-speaker dialogue, and first-to-last-frame animation. You bring the story. The platform brings the compute.

But here's what most hosted AI video platforms don't tell you: in ComfyUI, everything costs. You're on the clock, paying, the whole time you're in the workflow. Didn't like the motion? Re-run the workflow; that's credits. Want to swap a frame? Back into the queue. Adjust the prompt, change the seed, tweak the LoRA strength, try a different resolution: every single iteration spins up the GPU, burns the clock, and charges your account. You're paying to experiment.

3DismZ separates what costs (ComfyUI workflow processing) from what doesn't (everything else). The cloud GPU runs once, to generate your clips and return the result to 3DismZ Studio, so you can see what, if anything, needs to be changed. Adjusting your prompts, camera-angle commands, images, seeds, or other elements happens OUTSIDE the ComfyUI environment, inside 3DismZ Studio, where there is no cost. You're effectively 'punched out, off the clock' while you decide which edits to make: should we fade out, or cut directly to the next scene? All of this happens in a cost-free workspace in 3DismZ Studio.

Once you're pleased with the changes and ready to regenerate, we send the updated prompts, seeds, and other settings back into the ComfyUI workflows to produce the new version. That's when you pay for cloud GPU again, because you're actually running the ComfyUI workflows to make the next iteration of the video. Everything created after that is yours for free: reorder your beats in the timeline, swap a start frame for a better one, change a cut to a crossfade, adjust the LoRA on a single beat and re-generate just that clip, edit the dialogue text and re-run only that scene, trim, rearrange, and rebuild the whole scene assembly. All of it happens in your browser, on your time, at zero cost.
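The cost model described above boils down to a simple pattern: edits mutate a local job spec for free, and only serializing it back into a workflow run touches the (paid) GPU. Here is a small hypothetical sketch of that idea; the class, field names, and `to_workflow` method are illustrative, not the platform's real interface:

```python
# Illustrative sketch of the "free edits, one paid run" pattern.
# BeatJob is a hypothetical local spec for one scene beat.

class BeatJob:
    def __init__(self, prompt, seed=0, lora_strength=1.0, transition="cut"):
        self.prompt = prompt
        self.seed = seed
        self.lora_strength = lora_strength
        self.transition = transition
        self.dirty = False  # True once an edit requires re-generation

    def edit(self, **changes):
        """Free, in-Studio edit: just update the local spec, no GPU spin-up."""
        for key, value in changes.items():
            setattr(self, key, value)
        self.dirty = True

    def to_workflow(self):
        """Serialize for a single ComfyUI run: the only step that costs."""
        self.dirty = False
        return {
            "prompt": self.prompt,
            "seed": self.seed,
            "lora_strength": self.lora_strength,
            "transition": self.transition,
        }

job = BeatJob("ship lifts off at dawn", seed=42)
job.edit(transition="crossfade")     # free
job.edit(seed=7, lora_strength=0.8)  # still free
payload = job.to_workflow()          # one paid generation per batch of edits
```

However many edits you stack up, only the final `to_workflow()` call corresponds to billable GPU time.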

Send me a message if you're interested in trying it out and giving feedback. TYVM!

You pay for GPU generation. You don't pay for creativity.

No hardware. No overhead. Just your vision.    3DismzCUI.com

https://preview.redd.it/mpa0yhmi7t0h1.png?width=1227&format=png&auto=webp&s=30fed3cd6ea2430bb32c92528e2127ac70c65a20

https://preview.redd.it/q3zoeiy63t0h1.png?width=1304&format=png&auto=webp&s=019b607f6748739661af86f1545aab0e0c8f64a3

https://preview.redd.it/8hpk9hz49t0h1.png?width=1207&format=png&auto=webp&s=f11cb368eaa47528635f02193437ca0dcad2e09b

https://preview.redd.it/w2s2942xpt0h1.png?width=981&format=png&auto=webp&s=3dd63779a51e147e90e6f132597e3f46a528983f

https://preview.redd.it/qzmqs3quzt0h1.png?width=1317&format=png&auto=webp&s=de4499bff0bc95e961ac6372d589bfd68f86c25f

https://preview.redd.it/e0w9vzu40u0h1.png?width=1129&format=png&auto=webp&s=9de48eb5e7902e3dcc1dbfc87be38203ac8f18cb

https://preview.redd.it/d79ln7sj0u0h1.png?width=1257&format=png&auto=webp&s=cfd8856f5feeabd3b0ff4447d6132c7e68f3270c

u/3DisMzAnoMalEE — 2 days ago

is this a thing.. sorry if it's a silly q

I would like to test an already-trained LoRA, just a generic 3D stylized character, nothing even specific, just to see the capability and viability of making some small advertisements. I was wondering if there is such a thing as "here is a LoRA for characters X, Y, and Z you can just wire in and go," or if there are pre-assembled workflows that already do this. THX!

u/3DisMzAnoMalEE — 3 days ago

https://preview.redd.it/erc6pkgwbuyg1.png?width=989&format=png&auto=webp&s=5105fe3f6ca371816af334e87f7c444d8dfac006

Hi makers! I'm looking for suggestions on what 'go-to' tools would be useful for a start-to-finish pipeline, to add to a 'grab it and go' toolset. Looking to test some options to include, and I quite frankly don't know enough about the different variations and hardships that come with some of the nodes and models. These are what I've used successfully so far. THX! :)

u/3DisMzAnoMalEE — 12 days ago
r/ROCm

Several OOM crashes, days letting things sit, crash, restart, let them sit, OOM again, cry to my 15-year-old daughter about how my rig sucks... But wait! It finally worked, OMG.
So now I have to ask: what model/VAE, etc., SHOULD I be using with AMD to get this in less than half a day? I have to assume I just started with the worst possible model/workflow.
Using ltx-2.3-22b-distilled-fp8 and gemma_3_12B_it_fp4_mixed.

https://reddit.com/link/1svwjin/video/besq62cehgxg1/player

u/3DisMzAnoMalEE — 19 days ago