u/FortranUA

[Gallery: 17 sample images]

I just finished training the first (and definitely not the last) version of my new realism fine-tune, trained on the Preview1 base, so it's still a WIP.

Why Anima1? I chose it because it has a really solid grasp of fictional characters (from games, anime, etc.) and is genuinely great at 🌶️. It also handles anatomy well and is quite creative.

First Iteration Thoughts: For a first run, the result is actually not bad (I honestly expected worse). However, it's still a work in progress and has some noticeable issues:

  • Small details can still melt or blur.
  • Faces tend to get distorted in wide or full-body shots (in my workflow I use a face detailer to fix this).
  • The style is a bit inconsistent right now — sometimes it hits realism better, other times worse.

The Good Stuff & Generation Settings: On the bright side, the model understands specific styling incredibly well. If you prompt for things like "analog film photography with grain" or "high-res digital photography," it nails the exact look. Just keep in mind that this version is super prompt-sensitive.

For my generations, the base settings were the er_sde sampler with the beta scheduler. However, I was also running the custom RES4SHO pack, and the exact combo that gave me the best results was hfx_stochastic_s2 + atan_detail.
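For anyone scripting generations against a local ComfyUI server, those base settings map onto a stock KSampler node roughly like this. This is a minimal sketch: the checkpoint name, prompts, resolution, steps, and CFG are placeholder/illustrative values, and the custom RES4SHO samplers mentioned above would need their own nodes instead of the stock KSampler shown here.

```python
import json

# Minimal ComfyUI API-format graph using the base settings from the post:
# er_sde sampler + beta scheduler. All other values are placeholders.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "my-realism-finetune.safetensors"}},  # placeholder
    "2": {"class_type": "CLIPTextEncode",  # positive prompt
          "inputs": {"clip": ["1", 1],
                     "text": "analog film photography with grain"}},
    "3": {"class_type": "CLIPTextEncode",  # negative prompt
          "inputs": {"clip": ["1", 1], "text": "blurry, lowres"}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0],
                     "positive": ["2", 0],
                     "negative": ["3", 0],
                     "latent_image": ["4", 0],
                     "sampler_name": "er_sde",  # base sampler from the post
                     "scheduler": "beta",       # base scheduler from the post
                     "steps": 30, "cfg": 5.0,   # illustrative values
                     "seed": 0, "denoise": 1.0}},
}

# POST this payload to the server's /prompt endpoint to queue the generation.
payload = json.dumps({"prompt": workflow})
```

Node link references like `["1", 0]` mean "output slot 0 of node 1", which is how the API format wires nodes together.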

What's Next? I’m going to try fine-tuning it further on a different dataset to see if I can iron out these flaws. If that doesn't fix it, I'll just train it entirely from scratch using an upgraded dataset.

P.S.: The Ereshkigal prompt I stole from alili123 on Civitai.
