u/White_Dragon_0


Question on exclusives

Hi all! Are the exclusives available to international customers, or is there any chance they will be? Thanks

u/White_Dragon_0 — 2 days ago
▲ 8 r/LTXvideo+1 crossposts

Hi all,
I’m using LTX 2.3 in ComfyUI with the workflow from RuneXX:

https://huggingface.co/RuneXX/LTX-2.3-Workflows/tree/main

Setup:

  • RTX 5090 32GB
  • 64GB RAM

I’m running image-to-video with:

  • first frame conditioning (FF)
  • first + last frame conditioning (FLF-style workflow)

Issue

I’m consistently getting strong identity drift during generation.

This behavior occurs in both:

  • first frame only (FF) workflows
  • first + last frame (FLF) workflows

Even when using a strong reference image:

  • The reference image is reproduced correctly only in the first frame (when used as conditioning)
  • Immediately after the first frame, the face starts to deform and change shape
  • As the sequence progresses, the model increasingly reconstructs a different identity
  • First- and last-frame influence is present, but it is neither stable nor persistent

What I tested

  • different samplers
  • CFG tuning
  • frame count variations (low/high)
  • FF vs FLF conditioning
  • different guidance strengths

The result is always the same:
→ identity is not preserved across time
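Aside: the drift can be quantified rather than judged by eye, by scoring every generated frame against the reference with a face-embedding cosine similarity and watching the curve decay. A minimal numpy sketch follows; the toy embeddings stand in for a real face encoder (e.g. an ArcFace model), which is an assumption for illustration, not part of the RuneXX workflow:

```python
import numpy as np

def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def drift_curve(ref_emb, frame_embs):
    """Similarity of each frame's face embedding to the reference embedding."""
    return [cosine_sim(ref_emb, e) for e in frame_embs]

# Toy demo: per-frame embeddings that slowly move away from the reference,
# mimicking the identity drift described above.
rng = np.random.default_rng(0)
ref = rng.normal(size=512)
noise = rng.normal(size=512)
frames = [ref + 0.1 * t * noise for t in range(8)]

scores = drift_curve(ref, frames)
assert scores[0] > scores[-1]  # similarity decays over time = identity drift
```

Plotting such a curve for each sampler/CFG/conditioning variant makes it easy to see whether a change actually slows the drift or only shifts where it starts.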

Main question

What is the correct way to enforce consistent identity across a full video sequence in LTX 2.3 I2V?

More specifically:

  • Is there a proper method to maintain identity continuity beyond the first frame?
  • Should identity be enforced via a different conditioning strategy (beyond FF / FLF)?
  • Is there a missing identity/face encoder or adapter step in this workflow?
  • Or is LTX 2.3 inherently not designed for persistent identity locking across frames?

Summary of questions

  1. Why does identity only survive the first frame and then degrade immediately (both in FF and FLF)?
  2. What is the correct method to enforce identity consistency across frames in LTX 2.3?
  3. How do you maintain identity continuity across multiple clips / generations?
  4. Are FF / FLF conditioning approaches sufficient for identity locking, or is another mechanism required?
  5. Is there a known best-practice workflow for stable face consistency in ComfyUI LTX?
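On question 3 specifically, a common community pattern (not an official LTX feature, and it only mitigates drift rather than locking identity, since errors compound) is to chain generations: feed the last frame of each clip back in as the first-frame conditioning of the next. A sketch of that loop, where `generate_clip` is a hypothetical stand-in for a real I2V run (e.g. one ComfyUI execution):

```python
def generate_clip(first_frame, num_frames=8):
    # Dummy stand-in for an actual I2V generation; here each "frame"
    # just records which image it was conditioned on.
    return [f"{first_frame}|f{i}" for i in range(num_frames)]

def chain_clips(reference_frame, num_clips=3):
    """Generate num_clips clips, seeding each with the previous clip's last frame."""
    clips, cond = [], reference_frame
    for _ in range(num_clips):
        clip = generate_clip(cond)
        clips.append(clip)
        cond = clip[-1]  # last frame becomes the next clip's conditioning image
    return clips

clips = chain_clips("ref")
```

A common variation is to periodically re-seed from the original reference image instead of always using the previous clip's last frame, which bounds how far the compounding drift can wander.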

Media

  • Reference image (input)
  • Generated frame comparison (output): first frame / inconsistent mid-sequence frame / last frame
  • Video (MP4)

u/White_Dragon_0 — 8 days ago

Video (example)

https://drive.google.com/file/d/1H2xySYNjE2iUYdxtvuLJMyyMOIllGQmB/view?usp=sharing