u/AgeNo5351

r/StableDiffusion · ▲ 208

Joy-Image-Edit released

Model: https://huggingface.co/jdopensource/JoyAI-Image-Edit
Paper: https://joyai-image.s3.cn-north-1.jdcloud-oss.com/JoyAI-Image.pdf
GitHub: https://github.com/jd-opensource/JoyAI-Image

JoyAI-Image-Edit is a multimodal foundation model specialized in instruction-guided image editing. It enables precise, controllable edits by leveraging strong spatial understanding (scene parsing, relational grounding, and instruction decomposition), so complex modifications can be applied accurately to the specified regions.

JoyAI-Image is a unified multimodal foundation model for image understanding, text-to-image generation, and instruction-guided image editing. It combines an 8B Multimodal Large Language Model (MLLM) with a 16B Multimodal Diffusion Transformer (MMDiT). A central principle of JoyAI-Image is the closed-loop collaboration between understanding, generation, and editing: stronger spatial understanding improves grounded generation and controllable editing through better scene parsing, relational grounding, and instruction decomposition, while generative transformations such as viewpoint changes provide complementary evidence for spatial reasoning.
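For anyone wanting to try it, here is a minimal sketch of what instruction-guided editing with this model might look like. This assumes the repo ships a diffusers-compatible pipeline (the post doesn't confirm this, so check the GitHub README for the actual inference script); the `build_edit_request` helper and its parameter names are illustrative, not part of the released API.

```python
def build_edit_request(instruction: str, image, steps: int = 30,
                       guidance: float = 4.5) -> dict:
    """Bundle an edit instruction into the kind of keyword arguments a
    diffusers-style image-editing pipeline typically accepts.
    Hypothetical helper; parameter names are common defaults, not the
    model's documented interface."""
    return {
        "prompt": instruction,          # natural-language edit instruction
        "image": image,                 # source image (PIL.Image in real use)
        "num_inference_steps": steps,   # diffusion denoising steps
        "guidance_scale": guidance,     # instruction-adherence strength
    }

if __name__ == "__main__":
    # Actual loading would look roughly like this (untested assumption):
    # from diffusers import DiffusionPipeline
    # pipe = DiffusionPipeline.from_pretrained("jdopensource/JoyAI-Image-Edit")
    # edited = pipe(**build_edit_request("replace the sky with a sunset", img))
    req = build_edit_request("replace the sky with a sunset", "photo.png")
    print(req["prompt"])
```

If the official code instead exposes its own CLI or inference script, that is the safer path; this sketch only shows the general shape of an edit call.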

u/AgeNo5351 — 13 hours ago