I built a free tool that installs ComfyUI on any cloud GPU in one command and saves your whole setup between sessions. Open source.
Got frustrated reinstalling ComfyUI every time I rented a GPU. Custom nodes, models, configs: every session started with 45 minutes of setup before I could actually generate anything. Docker images went stale fast, and different providers use different base images, so nothing was truly portable.
So I built swm. It's a CLI that handles GPU rental and setup across 10 cloud providers.
For ComfyUI specifically:
- swm gpus -g a100 --max-price 2.00 --sort price shows you the cheapest GPU across RunPod, Vast.ai, Lambda, and 7 others
- swm pod create — spins up whatever's cheapest
- swm setup install comfyui — installs ComfyUI on the pod
- Your whole workspace (custom nodes, models, outputs, everything) syncs to S3 so next session you just pull and it's all there. No starting from scratch every time.
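If you're wondering how the sync stays fast with multi-GB model folders, the core idea is a hash manifest: compare local file hashes against what was last synced and only push what changed. Here's a rough Python sketch of that comparison (this is my simplified illustration of the idea, not swm's actual code; the real tool does the S3 transfer on top of something like this):

```python
import hashlib
from pathlib import Path

def file_hash(path: Path) -> str:
    """SHA-256 of a file, read in chunks so big models don't blow up RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files(workspace: Path, manifest: dict[str, str]) -> list[Path]:
    """Return files whose hash differs from the last-synced manifest.

    `manifest` maps workspace-relative paths to their hash at last sync;
    new files (not in the manifest) also count as changed.
    """
    out = []
    for path in sorted(workspace.rglob("*")):
        if path.is_file():
            rel = str(path.relative_to(workspace))
            if manifest.get(rel) != file_hash(path):
                out.append(path)
    return out
```

The upshot: an unchanged 6 GB checkpoint costs one local hash pass, not a re-upload, which is why pulling a workspace at the start of a session is quick after the first sync.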
The other thing that's saved me a lot of money is the lifecycle guard. It watches GPU utilization, and if nothing's happening for 30 minutes (configurable), it saves your workspace and terminates the instance. I used to fall asleep or get distracted mid-session and wake up to stupid bills. Doesn't happen anymore.
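The guard loop itself is conceptually simple: poll utilization on an interval, reset an idle clock whenever the GPU is actually busy, and trigger save-then-terminate once the idle window runs out. A minimal Python sketch of that logic (the callables and the 5% busy threshold are placeholders I made up for illustration, not swm's actual internals or defaults):

```python
import time
from typing import Callable

def idle_guard(
    get_utilization: Callable[[], float],    # e.g. GPU util % from nvidia-smi
    save_and_terminate: Callable[[], None],  # sync workspace, then kill the pod
    idle_minutes: float = 30.0,
    busy_threshold: float = 5.0,             # util below this counts as idle
    poll_seconds: float = 60.0,
    sleep: Callable[[float], None] = time.sleep,
) -> None:
    """Terminate the instance after a sustained idle window."""
    idle = 0.0
    while True:
        if get_utilization() >= busy_threshold:
            idle = 0.0  # any real work resets the clock
        else:
            idle += poll_seconds
            if idle >= idle_minutes * 60:
                save_and_terminate()
                return
        sleep(poll_seconds)
```

The important design detail is that the clock resets on activity rather than counting wall time, so a long generation run never gets killed mid-job; only a genuinely quiet GPU does.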
It also works with vLLM, Ollama, Open WebUI, SwarmUI, and Axolotl if you do more than just SD.
Free, open source, Apache 2.0. pipx install swm-gpu
Site: https://swmgpu.com | GitHub: https://github.com/swm-gpu/swm
Curious if anyone else has been dealing with the same setup-every-time problem or if I'm the only one who was doing it wrong lol. Open to feedback on what to build next.