I just finished training the first (and definitely not the last) version of my new realism fine-tune, trained on the Preview1 base, so it's still very much a WIP.
- HuggingFace: UltraReal_FineTune_Anima
- Civitai: UltraReal Fine-Tune Anima
- ComfyUI Workflow: Download JSON
Why Anima? I chose it because it has a really solid grasp of fictional characters (from games, anime, etc.) and is genuinely great at 🌶️. It also handles anatomy well and is quite creative.
First Iteration Thoughts: For a first run, the result is actually not bad (I honestly expected worse). However, it's still a work in progress and has some noticeable issues:
- Small details can still melt or blur.
- Faces tend to get distorted in wide or full-body shots (in the workflow I use a detailer to compensate).
- The style is a bit inconsistent right now: sometimes it hits realism better, other times worse.
The Good Stuff & Generation Settings: On the bright side, the model understands specific styling incredibly well. If you prompt for things like "analog film photography with grain" or "high-res digital photography," it nails the exact look. Just keep in mind that this version is super prompt-sensitive.
For my generations, the base settings were er_sde + beta. However, I was actually running the custom RES4SHO pack, and the exact combo that gave me the best results was hfx_stochastic_s2 + atan_detail.
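For anyone wiring this up outside the shared workflow JSON, here is a minimal sketch of what the base er_sde + beta settings look like as a ComfyUI API-format KSampler node. The node IDs, link targets, steps, cfg, and seed below are illustrative placeholders (assumptions, not values from my workflow); the custom hfx_stochastic_s2 + atan_detail combo requires the RES4SHO pack's own nodes instead of the stock KSampler.

```python
import json

# Sketch of a ComfyUI API-format KSampler entry with the base settings
# mentioned above. Everything except sampler_name/scheduler is a placeholder.
ksampler_node = {
    "3": {
        "class_type": "KSampler",
        "inputs": {
            "sampler_name": "er_sde",   # base sampler from this post
            "scheduler": "beta",        # base scheduler from this post
            "steps": 30,                # placeholder
            "cfg": 4.0,                 # placeholder
            "denoise": 1.0,
            "seed": 0,                  # placeholder
            "model": ["4", 0],          # placeholder link to checkpoint loader
            "positive": ["6", 0],       # placeholder link to positive prompt
            "negative": ["7", 0],       # placeholder link to negative prompt
            "latent_image": ["5", 0],   # placeholder link to empty latent
        },
    }
}

print(json.dumps(ksampler_node, indent=2))
```

This fragment would slot into the larger prompt dict that ComfyUI's API expects; the downloadable workflow above is the authoritative version.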
What's Next? I’m going to try fine-tuning it further on a different dataset to see if I can iron out these flaws. If that doesn't fix it, I'll just train it entirely from scratch using an upgraded dataset.
P.S.: The Ereshkigal prompt I stole from alili123 on Civitai.