LTX just dropped an HDR IC-LoRA beta: EXR output, built for production pipelines
Finally. Someone in the open-source video space actually looked at a professional color grading suite instead of just chasing internet likes.
I’ve been messing with LTX-2.3 for a while, and it’s been great for personal projects—but once you try to slot AI video into a real pipeline, the SDR limitations hit you like a brick wall. Most of these models output footage that looks okay on a phone, but try to bring that into DaVinci Resolve and push the exposure or shadows? It falls apart instantly. Banding city.
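To see why 8-bit SDR falls apart under a grade, here's a hypothetical little numpy sketch (not from the LTX release, just an illustration): quantize a smooth ramp to 8 bits, push the exposure +3 stops, and count how many distinct levels survive compared to float16.

```python
import numpy as np

# Illustration only: why 8-bit SDR footage bands when you push exposure
# in the grade, while float16 scene-linear data holds up.
ramp = np.linspace(0.0, 1.0, 4096)       # a smooth luminance gradient

sdr = np.round(ramp * 255) / 255         # quantized to 8-bit SDR
hdr = ramp.astype(np.float16)            # 16-bit float, as stored in an EXR

push = 8.0                               # +3 stops of exposure (2**3)
sdr_graded = np.clip(sdr * push, 0.0, 1.0)
hdr_graded = np.clip(hdr.astype(np.float32) * push, 0.0, 1.0)

# The 8-bit source has only a few dozen distinct levels left in range
# (visible banding); the float16 source keeps hundreds.
print(len(np.unique(sdr_graded)), len(np.unique(hdr_graded)))
```

Everything below the clip point in the 8-bit version collapses to ~32 steps, which is exactly the banding you see in Resolve.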
LTX just dropped an HDR IC-LoRA beta that is explicitly built to output 16-bit float EXRs.
Here is why this actually matters for us:
It’s using LogC3-encoded HDR latents. You aren't just getting a 'bright' video; you’re getting actual scene-linear data. The research notes confirm the pipeline: VAE encoder -> noise -> DiT -> LogC3 HDR latents -> inverse LogC3 -> scene-linear float16 EXR.
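For anyone who hasn't dealt with log encodings: the "inverse LogC3" step is a standard log-to-linear decode. Here's a sketch using the published ARRI LogC3 (EI 800) constants — the exact transform LTX trained against may differ, so treat this as a reference implementation of the curve, not their code.

```python
import numpy as np

# Published ARRI LogC3 (EI 800) constants; LTX's exact variant may differ.
CUT, A, B = 0.010591, 5.555556, 0.052272
C, D = 0.247190, 0.385537
E, F = 5.367655, 0.092809

def logc3_decode(t):
    """LogC3-encoded values -> scene-linear (the 'inverse LogC3' step)."""
    t = np.asarray(t, dtype=np.float32)
    log_seg = (np.power(10.0, (t - D) / C) - B) / A   # log segment
    toe_seg = (t - F) / E                             # linear toe near black
    return np.where(t > E * CUT + F, log_seg, toe_seg)

def logc3_encode(x):
    """Scene-linear -> LogC3, for a round-trip sanity check."""
    x = np.asarray(x, dtype=np.float32)
    log_seg = C * np.log10(A * x + B) + D
    toe_seg = E * x + F
    return np.where(x > CUT, log_seg, toe_seg)

# Round trip: scene-linear -> LogC3 -> scene-linear should be lossless.
linear = np.array([0.0, 0.18, 1.0, 8.0], dtype=np.float32)
decoded = logc3_decode(logc3_encode(linear))
```

Note that the decode happily returns values above 1.0 — that's the whole point: highlight information above diffuse white survives into the EXR instead of being clipped.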
It’s not just a lab demo. They had studios like Magnopus and Asteria trying to break the tech before it shipped. If it’s hitting LED walls for virtual production, the dynamic range has to hold up under scrutiny, not just look 'vibrant' on a social media feed.
The workflow is actually manageable in ComfyUI. I’ve been running the IC-LoRA alongside the distill LoRA, and the highlight recovery is genuinely impressive. Overexposed shots that would usually be clipped white are actually pulling detail back out.
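The highlight recovery behaviour is easy to demo in isolation. A hypothetical sketch (again, just numpy, not LTX code): an overexposed gradient stored as clipped SDR versus scene-linear float, then a -2 stop pull in the grade.

```python
import numpy as np

# Illustration only: why scene-linear EXRs let you pull detail back out
# of overexposed shots that SDR output would leave as flat white.
sky = np.linspace(0.5, 4.0, 8).astype(np.float32)  # overexposed gradient

sdr = np.clip(sky, 0.0, 1.0)   # clipped to 0-1, like typical SDR output
exr = sky                      # scene-linear float, like the HDR EXR

pull = 0.25                    # -2 stops of exposure in the grade
sdr_graded = sdr * pull        # gradient is gone: almost everything is flat
exr_graded = exr * pull        # full gradient comes back into the 0-1 range
```

Everything the SDR clip threw away is unrecoverable no matter how far you pull; the scene-linear version grades like camera raw.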
I’m curious to see how this plays with other temporal consistency LoRAs. The biggest hurdle for local video models has always been the bridge to professional post-production. Are we finally at the point where we can replace raw plate footage with generated elements that actually match the color science of a cinema camera?
If anyone is running this in a production workflow already, how are you handling the VRAM overhead when chaining the HDR LoRA with your standard upscaling nodes? My 3090 is sweating, but the output EXRs are actually grading like real footage.
Interested to see if this pushes other model teams to stop ignoring the 16-bit float requirement.