u/simple250506

[Suggestion] Improve the visibility of resolution frames

I recommend viewing the image at its original size. There is a color picker in the upper right corner of the prompt field.

Dear Developers,

I really enjoy using this app, but the resolution frame is difficult to see, and I always struggle to fit images within the frame. Often, I unintentionally start generating in inpaint mode without realizing the image is slightly overflowing the frame.

If the app could make its background a single color, display the frame with clear, simple lines, and let the user choose the line color with the picker in the upper right corner of the prompt field, I believe it would make working with images of any color easier and more efficient, and would reduce mistakes.

I would appreciate your consideration.

Edit 1: The sample image prioritizes frame clarity, which may make its purpose hard to grasp. The suggestion is aimed at improving work efficiency in cases where the aspect ratio in the settings differs from the image's and manual alignment is required.

reddit.com
u/simple250506 — 5 days ago

Deleting Version History can take over 20 minutes

When deleting a large number of videos and images at once from the Version History window, it can take an unusually long time. For example, in the case below, it took 28 minutes for the batch deletion to complete.

  • Past Edits: 155
  • Generated Images: 99
  • Breakdown: Roughly half are LTX2.3 and Wan2.2 videos, and the other half are SeedVR2 and Klein 9B images.

■Environment

  • Mac mini M4, 64GB, 20-Core GPU / OS 15.4.1
  • Draw Things: 1.20260430.0

■Note

  • This is a different phenomenon from the one previously reported. The delay is significantly more pronounced.
  • The conditions for reproduction are unknown. Even with a similar total number of deletions (and mostly videos), sometimes the deletion takes a long time (over 20 minutes) and sometimes it doesn't (within 5 minutes).
  • This phenomenon has likely been occurring since I started using LTX2.3. However, as noted above, the files being deleted are rarely limited to LTX2.3.
  • In all cases, deleting in batches of 9 files instead of all at once completes the process at a normal speed.
  • I suspect that the sqlite3 file size (several MB to 11 GB) is a contributing factor, but I haven't verified this.
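To check whether database size correlates with the slowdown, one option is to scan for oversized sqlite3 files. A minimal sketch, assuming nothing about where Draw Things stores its history database (you supply the root folder yourself; `find_large_dbs` is my own hypothetical helper):

```python
from pathlib import Path

def find_large_dbs(root: str, threshold_mb: float = 100.0):
    """Return (path, size_in_MB) pairs for *.sqlite3 files under `root`
    that are at least `threshold_mb` megabytes, largest first."""
    hits = []
    for p in Path(root).rglob("*.sqlite3"):
        size_mb = p.stat().st_size / (1024 * 1024)
        if size_mb >= threshold_mb:
            hits.append((str(p), round(size_mb, 1)))
    return sorted(hits, key=lambda t: -t[1])
```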

■Temporary solution

I've decided to delete the project itself when the history accumulates. In fact, I've found this approach suits me better.

u/simple250506 — 8 days ago

SeedVR2 Quick Intro

Text Guidance 1.0 vs. 1.5

SeedVR2 is supported as of version 1.20260430.0. To my knowledge, it is a highly rated upscaling model.

■ Flow

  1. After downloading the model, select it in the Model section (not the Upscaler section).
  2. Set the target resolution in the Image Size section.
  3. Drag and drop the image into the main window and start generation.

■ Settings

  • Strength: 100% (Reducing it will blur the image)
  • Steps: 1 (Anything greater than 1 will cause loss of detail)
  • Text Guidance: 1.0 (Increasing it will increase noise and a sense of depth, but 2 may be the limit)
  • Sampler: No effect (probably)
  • Shift: No effect
  • Prompt: No effect

■ Reference Speed & Memory Usage

SeedVR2 7B 8-bit, 512×512 → 2048×2048
Mac mini M4, 64GB, 20-Core GPU / OS 15.4.1

  • Peak Memory: 6.9GB
  • Time: 40s

■Note

  • Currently, only images are supported.
  • Batch upscaling of multiple images is not possible.
  • If there is a gap between the Image Size frame and the image, inpainting will occur, but nothing will be drawn in that area.
  • It normally cannot be chained with other models as an upscaler. (Might this be possible with a script?)

■Configuration sample

{
  "seed": 3349378910,
  "sharpness": 0,
  "steps": 1,
  "batchSize": 1,
  "height": 2048,
  "width": 2048,
  "strength": 1,
  "cfgZeroStar": false,
  "faceRestoration": "",
  "causalInferencePad": 0,
  "shift": 5,
  "controls": [],
  "maskBlurOutset": 5,
  "preserveOriginalAfterInpaint": false,
  "refinerModel": "",
  "seedMode": 3,
  "guidanceScale": 1.5,
  "sampler": 17,
  "loras": [],
  "maskBlur": 5,
  "tiledDecoding": false,
  "tiledDiffusion": false,
  "cfgZeroInitSteps": 1,
  "model": "seedvr2_7b_q8p.ckpt",
  "hiresFix": false,
  "batchCount": 1,
  "upscaler": ""
}
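As a sanity check before generating, a configuration export can be validated against the settings above. A minimal sketch; the function name and exact checks are my own, not part of Draw Things:

```python
import json

def check_seedvr2_config(cfg_json: str):
    """Flag fields in a Draw Things config export that deviate from the
    SeedVR2 recommendations above (Strength 100%, Steps 1, Guidance <= 2)."""
    cfg = json.loads(cfg_json)
    issues = []
    if cfg.get("strength") != 1:
        issues.append(f"strength is {cfg.get('strength')}; below 1 blurs the image")
    if cfg.get("steps") != 1:
        issues.append(f"steps is {cfg.get('steps')}; more than 1 loses detail")
    if cfg.get("guidanceScale", 1.0) > 2:
        issues.append(f"guidanceScale is {cfg.get('guidanceScale')}; 2 may be the limit")
    return issues
```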

Please point out any points you disagree with.

u/simple250506 — 10 days ago

https://reddit.com/link/1t35lgv/video/puh9cfqwa1zg1/player

I compared two types of 4-step LoRA (High) for Wan 2.2 I2V. After several tests, Kijai's v1030 tended to produce more motion. This should help if you want even slightly more motion.

■LoRA

■Prompt

While speaking rapidly with an excited expression, he suddenly and swiftly stands up right on top of his chair, balancing himself with both feet on the seat. A clear scene from a movie.

■Generation settings (configuration)

{
  "causalInferencePad": 0,
  "upscaler": "",
  "tiledDecoding": false,
  "seed": 4262634612,
  "strength": 1,
  "cfgZeroInitSteps": 0,
  "cfgZeroStar": false,
  "teaCache": false,
  "maskBlurOutset": 0,
  "faceRestoration": "",
  "loras": [
    {"mode": "base", "file": "wan_v2.2_a14b_hne_i2v_lightning_251022_lora_f16.ckpt", "weight": 1},
    {"mode": "refiner", "file": "wan_v2.2_a14b_lne_i2v_lightning_251022_lora_f16.ckpt", "weight": 1}
  ],
  "batchCount": 1,
  "sharpness": 0,
  "refinerStart": 0.1,
  "sampler": 17,
  "model": "wan_v2.2_a14b_hne_i2v_q8p.ckpt",
  "height": 640,
  "batchSize": 1,
  "numFrames": 81,
  "preserveOriginalAfterInpaint": false,
  "steps": 4,
  "width": 512,
  "maskBlur": 1.5,
  "tiledDiffusion": false,
  "guidanceScale": 1,
  "refinerModel": "wan_v2.2_a14b_lne_i2v_q8p.ckpt",
  "compressionArtifactsQuality": 43.1,
  "compressionArtifacts": "disabled",
  "shift": 5,
  "seedMode": 3,
  "hiresFix": false,
  "controls": []
}
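To confirm that two comparison runs differ only in their LoRA files, the exported configurations can be diffed at the top level. A minimal sketch; `config_diff` is a hypothetical helper, not a Draw Things feature:

```python
import json

def config_diff(a_json: str, b_json: str):
    """Return {key: (a_value, b_value)} for top-level keys that differ
    between two Draw Things configuration exports."""
    a, b = json.loads(a_json), json.loads(b_json)
    keys = set(a) | set(b)
    return {k: (a.get(k), b.get(k)) for k in keys if a.get(k) != b.get(k)}
```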

u/simple250506 — 11 days ago

https://preview.redd.it/3nzl0kd2mbyg1.png?width=1200&format=png&auto=webp&s=b8d89ad9ac454c04ce3fefc5481f6497ce785501

Until a month ago, I thought the Draw Things HTTP API was completely irrelevant to me. However, after learning that I could retrieve the generation settings (plus prompt) via the HTTP API, I realized how useful this is for creating companion apps for Draw Things.

For example...

  • An app that renames generated files to a filename that includes any desired generation settings.
  • An app that records the peak memory usage for each model and generation setting.
  • An app that automatically saves generation settings as a text file in JSON format readable by Draw Things, along with prompts.

I was able to create these apps easily by instructing an AI, even with zero programming knowledge. However, the HTTP API is turned off when Draw Things restarts, so it must be re-enabled each time Draw Things is launched for the companion apps to work.
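As an illustration of the first idea, here is a minimal sketch that builds a filename from a Draw Things settings export. The key selection and function name are my own, and how the JSON is fetched from the HTTP API is not shown:

```python
import json

def settings_filename(cfg_json: str, keys=("model", "steps", "seed"), ext=".png"):
    """Build a filename that embeds selected generation settings,
    e.g. 'seedvr2_7b_q8p_steps1_seed42.png'."""
    cfg = json.loads(cfg_json)
    parts = []
    for k in keys:
        v = cfg.get(k)
        if k == "model" and isinstance(v, str) and v.endswith(".ckpt"):
            parts.append(v[:-len(".ckpt")])  # drop the checkpoint extension
        else:
            parts.append(f"{k}{v}")
    return "_".join(parts) + ext
```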

Even if the HTTP API isn't improved in the future, this feature is still very useful, so I sincerely hope it remains as it is and doesn't disappear.

u/simple250506 — 15 days ago

https://preview.redd.it/j2pfoxpz50yg1.png?width=1000&format=png&auto=webp&s=20ec8da06f023d0ae8238d003088e3910a6271b5

This graph summarizes the peak memory usage of LTX 2.3 with different JIT Weights Loading settings in Draw Things. All other settings are the same. The second row of entries shows the approximate generation time.

Generation with the distilled model appears feasible within 16GB of memory. The [dev] model appears feasible within 24GB, provided "Never" is not selected.

■Model

  • LTX-2.3 22B [dev] + LTX-2.3 22B [distilled] 1.1 LoRA(7GB)
  • LTX-2.3 22B [distilled] 1.1

■Specs

Mac mini M4, 64GB, 20-Core GPU / OS 15.4.1

■Draw Things

ver.1.20260418

u/simple250506 — 16 days ago