r/drawthingsapp

1.20260430.0 was released on the iOS / macOS App Store a few minutes ago (https://static.drawthings.ai/DrawThings-1.20260430.0-09f170fc.zip). This version brings:

  1. Support for SeedVR2 as a diffusion model for 1-step upscaling work.
  2. Fixed Anima LLM adapter related issues.
  3. Switched to better, newer causal / masked attention kernels.
  4. Support for exporting LoRAs for Qwen, Z Image, FLUX.2 and ERNIE Image models.
  5. Imported Z Image models now run at BF16 precision.

gRPCServerCLI and draw-things-cli have both been updated to 1.20260430.0 with the corresponding changes.

reddit.com
u/liuliu — 13 days ago

Trying to figure out prompting for ZIT in Draw Things running locally. I figure that this combination is about the limit of my technical ability (and it may exceed it).

I have been successful in the past prompting images using what I would call "natural language": kind of write what you visualize and try to organize it in some form of prompt hierarchy. But I do run up against challenges.

On a whim, I went down the rabbit hole with Gemini over the last few days, having it write a series of prompts that, although they contain nudity, stayed within its censorship limits. The text of Gemini's prompts is anything but what I would call "natural language." I was trying to get a handle on multiple people, intertwining limbs, different geometric planes, etc., and to accomplish that (and the images it produced were very good), Gemini used terms and phrasing that I would never think of in a million years. Certainly not MY natural language.

So now I’m confused. Can anybody help me understand what the real prompting requirement is for ZIT in Draw Things?

u/ng5554 — 11 days ago

SeedVR2 Quick Intro

[Comparison images: Text Guidance 1.0 vs. 1.5]

SeedVR2 is supported in version 1.20260430.0. To my knowledge, this is a highly-rated upscaling model.

■ Flow

  1. After downloading the model, select it in the Model section. (Not the Upscaler section)
  2. Set the target resolution in the Image Size section.
  3. Drag and drop the image into the main window and start generation.

■ Settings

  • Strength: 100% (Reducing it will blur the image)
  • Steps: 1 (Anything greater than 1 will cause loss of detail)
  • Text Guidance: 1.0 (Increasing it will increase noise and a sense of depth, but 2 may be the limit)
  • Sampler: No effect (probably)
  • Shift: No effect
  • Prompt: No effect

■ Reference Speed & Memory Usage

SeedVR2 7B 8bit 512×512➡︎2048×2048
Mac mini M4, 64GB, 20-Core GPU / OS 15.4.1

  • Peak Memory: 6.9GB
  • Time: 40s

■ Note

  • Currently, only images are supported.
  • Batch upscaling of multiple images is not possible.
  • If there is a gap between the Image Size frame and the image, inpainting will occur, but nothing will be drawn in that area.
  • Normally, it cannot be used in conjunction with other models as an upscaler. (Is it possible with a script?)

■ Configuration sample

{"seed":3349378910,"sharpness":0,"steps":1,"batchSize":1,"height":2048,"width":2048,"strength":1,"cfgZeroStar":false,"faceRestoration":"","causalInferencePad":0,"shift":5,"controls":[],"maskBlurOutset":5,"preserveOriginalAfterInpaint":false,"refinerModel":"","seedMode":3,"guidanceScale":1.5,"sampler":17,"loras":[],"maskBlur":5,"tiledDecoding":false,"tiledDiffusion":false,"cfgZeroInitSteps":1,"model":"seedvr2_7b_q8p.ckpt","hiresFix":false,"batchCount":1,"upscaler":""}
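If you export or edit a configuration by hand, the key settings from the list above are easy to double-check programmatically. A minimal Python sketch (the key names come straight from the sample JSON above; the recommended values are the ones from the Settings section):

```python
import json

# A subset of the exported Draw Things configuration above.
config = json.loads("""
{"seed": 3349378910, "steps": 1, "strength": 1, "guidanceScale": 1.5,
 "width": 2048, "height": 2048, "model": "seedvr2_7b_q8p.ckpt", "upscaler": ""}
""")

# Recommended SeedVR2 settings from the notes above.
assert config["steps"] == 1          # anything greater than 1 loses detail
assert config["strength"] == 1       # lower values blur the image
assert config["guidanceScale"] <= 2  # 2 is probably the upper limit
assert config["upscaler"] == ""      # SeedVR2 goes in Model, not Upscaler

print(f'OK: {config["width"]}x{config["height"]} in {config["steps"]} step')
```

This is just a sanity check on the JSON, not part of the app; it catches the common mistakes (steps > 1, reduced strength) before you spend a generation on them.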

Please point out any points you disagree with.

u/simple250506 — 10 days ago

I’ve noticed over the last year or so that the image2image scene has been dominated by full image edit models like Qwen, Kontext, Klein. I still prefer to do traditional mask based inpainting instead of feeding the whole image into the model and it changing every pixel. I’ve been using sd1.5 and sdxl models for this, but you can tell they are getting old. Skin looks kind of plasticy, hands look like sd hands obviously. Are there any modern models that do inpainting but have the insane photorealism performance that z image or flux models have? I’m open to custom workflows that use models that aren’t made specifically for inpainting if that’s the only option.

u/baben7 — 7 days ago

Just started using LTX 2.3. What's the best way to preview a prompt before spending 15+ minutes generating a bad video?

I’ve been playing around with Draw Things for a couple months and decided to try video generation. With images, I was able to set the resolution pretty low and the preview would appear quickly and I could hit stop and fix the prompt and retry until it looked right. Then I would set the resolution higher and get my desired image.

With LTX 2.3, even if I set the frame count and resolution low, it takes 15+ minutes to fully generate the video with the app's recommended settings. I'm not sure what else to change, and it's taking me hours of tweaking my prompts to get the right output. Another issue is that I keep making mistakes with the audio track and don't notice until the video is done generating.

Is there any way to set Draw Things up so it generates only audio, and once the audio is perfect, add the video? Also, what's the fastest way to test out prompts? I don't mind if the final product takes hours to generate, but it's tedious to make a tiny change to a prompt and then wait so long to see whether the change was positive or negative.

u/down_with_cats — 7 days ago

App undoes changes on final sample

Hi,

I’ve been editing images with some success until recently. Using Flux 2 Klien 4b no loras.
The preview shows the changes being made (which are intended) but on the final sample it has started reverting the image back to the original. I’m using the same setup on both my Mac mini and iPad mini and the problem is on both devices.

Any ideas?

(Edited for spelling mistake)

u/cajewiwag — 8 days ago

I have noticed that some of the image-heavy subs have a bot-generated comment stating that the image prompt would be greatly appreciated. But I never see any. Is this the equivalent of giving the bot (moderator) the middle finger, or is there some way to view the provided prompt that I just don't know about (which would not be surprising)?

u/ng5554 — 10 days ago

Deleting Version History can take over 20 minutes

When deleting a large number of videos and images at once from the Version History window, it can take an unusually long time. For example, in the case below, it took 28 minutes for the batch deletion to complete.

  • Past Edits: 155
  • Generated Images: 99
  • Breakdown: Roughly half are LTX2.3 and Wan2.2 videos, and the other half are SeedVR2 and Klein 9B images.

■ Environment

  • Mac mini M4, 64GB, 20-Core GPU / OS 15.4.1
  • Draw Things: 1.20260430.0

■ Note

  • This is a different phenomenon from the one previously reported. The delay is significantly more pronounced.
  • The conditions for reproduction are unknown. Even with a similar total number of deletions (and mostly videos), sometimes the deletion takes a long time (over 20 minutes) and sometimes it doesn't (within 5 minutes).
  • This phenomenon has likely been occurring since I started using LTX2.3. However, the files I'm trying to delete are almost always not limited to LTX2.3, as mentioned above.
  • In all cases, deleting in batches of 9 files instead of all at once completes the process at a normal speed.
  • I suspect that the sqlite3 file size (several MB to 11 GB) is a contributing factor, but I haven't verified this.

■ Temporary solution

I've decided to delete the project itself when the history accumulates. In fact, I've found this approach suits me better.

u/simple250506 — 8 days ago

DrawThings+ has been down since last night (12 hours ago)

Good morning, everyone!

I’ve been using DrawThings+ for a while now to render images on my underpowered Mac M2. I’m very happy with it—except when, like today, it starts crashing again. In the past, DrawThings+ simply wouldn’t work in situations like this. Since last night, the app has even been freezing completely. Everything was working fine this afternoon! I’m based in Spain. Even when I connect to another country like the U.S. via VPN, it doesn’t help, whereas in the past that often resolved issues. Are others here experiencing these issues as well, and when can we expect a fix for this problem?

u/Theomystiker — 6 days ago

[Suggestion] Improve the visibility of resolution frames

[Sample image: recommended to view at its original size. The color picker is in the upper right corner of the prompt field.]

Dear Developers,

I really enjoy using this app, but the resolution frame is difficult to see, and I always struggle to fit images within the frame. Often, I unintentionally start generating in inpaint mode without realizing the image is slightly overflowing the frame.

If the app could make the app's background a single color, display the frame with clear and simple lines, and allow the user to choose the line color using the picker in the upper right corner of the prompt field, I believe it would make working with images of any color much easier, more efficient, and reduce mistakes.

I would appreciate your consideration.

Edit 1: The sample image prioritized frame clarity, which may have made it difficult to understand, but the suggestion is intended to improve work efficiency in cases where the aspect ratio of the settings and the image differ and manual alignment is required.

u/simple250506 — 5 days ago

Image-to-Image inpainting Explained — A Powerful Technique You Need to Know!

I think image-to-image inpainting is an extremely practical and powerful technique. If you haven’t used it yet, I highly recommend giving it a try!

The emergence of stronger and more efficient base models like Ernie Image Turbo has taken image-to-image inpainting to the next level.

Even though its editing model did not launch as scheduled at the end of April, you can still achieve many highly practical repair tasks through image-to-image workflows.

youtube.com
u/CrazyToolBuddy — 4 days ago

Eros or sulphur models

Has anyone managed to get any of these models to work? I thought Draw Things supports FP8? I love this app; it does so many things on my 18GB M3.

u/sotheysayit — 3 days ago

How do I generate images image-to-image?

I've tried so many times in the app on iPhone, and I've asked the AI how to do this. After all this frustrating trial and error, and YouTube tutorials, all I get is:

- The original illustration intact after every "image to image" generation. Already tried with different Strength percentages.

- Images unrelated to the illustration (based only on my prompt) generated by the "Moodboard" reference method. Already tried with different percentage settings.

This happens with every model, even NSFW ones, and the results are pure junk. No matter where I place the image on the canvas, it either gives me back the same image or a different one overlaid on top of it. This app is stressful.

u/OutrageousAd7052 — 3 days ago

Model does not generate an image... :\

Hi Folks, asking here cuz I didn't see anything related in the Troubleshooting wiki...

I downloaded a model from Civitai, 'cyberrealisticXL_v100', and installed it as a Model, as instructed for offline generation. It downloaded a checkpoint (didn't expect that... but anyway...) and gave me a positive response that it was ready.

When I entered a simple prompt and ran its 20 steps, nothing happened. I just see the initial transparent checkered background as before; I don't see any images generated or saved anywhere.

Any ideas what gives? Thanks in advance 🙏

--FIXED--

u/Original_Vacation655 — 3 days ago

Is there a way to do masked inpainting with ZIT in draw things?

I've seen a few guides on how to do masked inpainting with Z Image Turbo in Comfy, but they never seem to run well on my Mac. ZIT runs great on my Mac for t2i in Draw Things, but I don't know how to do inpainting with it. The UI is a little harder to understand than Comfy for that purpose. Has anyone done masked inpainting with Z Image in Draw Things before and is willing to share any tips?

u/baben7 — 3 days ago

macOS 26.5 broke the JIT Weight Loading and generation speed

As the title says, JIT Weight Loading is not working at all in macOS 26.5.

Also, overall generation speed of models is slowed down.

What's happening to macOS 26.5?


What are your thoughts on these portraits I made? Here are a few of my portraits that I did during summer break. I used mainly charcoal and graphite!

All of them are my favourite characters and I love drawing them! :) The Sunil Chhetri one was a commissioned artwork, and I'm a big fan of Penn Badgley too! I can draw your favourite characters or people too! DM me :) I would love to draw for you guys ♥️

u/Commercial_Fox_5324 — 1 day ago