u/Dazzling_Theory_3316

Sharing this with the group.

I’ve been working on tokenizer generation, fine-tuning, and dataset creation on a local Ubuntu headless server. One thing that kept getting in the way was telemetry — I couldn’t find anything that showed CPU, RAM, GPU, and VRAM usage together in a clean way.

That makes optimizing things like vLLM a lot harder than it needs to be.

So I built a low-noise telemetry TUI that puts everything in one place. It’s open source if anyone finds it useful.
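For anyone curious what "everything in one place" means at the data level, here's a rough stdlib-only sketch of how you can sample those four metrics on Linux without extra dependencies — this is not code from the repo, just an illustration; it assumes a Linux `/proc` filesystem and (for the GPU/VRAM part) that `nvidia-smi` is on the PATH:

```python
import shutil
import subprocess
import time

def cpu_percent(interval=0.1):
    """Overall CPU utilization from two /proc/stat samples."""
    def sample():
        with open("/proc/stat") as f:
            fields = [int(x) for x in f.readline().split()[1:]]
        idle = fields[3] + fields[4]  # idle + iowait
        return idle, sum(fields)
    idle1, total1 = sample()
    time.sleep(interval)
    idle2, total2 = sample()
    dt = total2 - total1
    return 100.0 * (1 - (idle2 - idle1) / dt) if dt else 0.0

def ram_percent():
    """RAM usage from /proc/meminfo (MemTotal vs MemAvailable)."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, val = line.split(":")
            info[key] = int(val.split()[0])  # values are in kB
    total, avail = info["MemTotal"], info["MemAvailable"]
    return 100.0 * (total - avail) / total

def gpu_snapshot():
    """GPU utilization and VRAM per card via nvidia-smi, if present."""
    if shutil.which("nvidia-smi") is None:
        return []  # no NVIDIA tooling on this box
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    rows = []
    for line in out.strip().splitlines():
        util, used, total = (float(x) for x in line.split(","))
        rows.append({"gpu_util_pct": util,
                     "vram_used_mib": used,
                     "vram_total_mib": total})
    return rows  # one dict per GPU, e.g. two entries on a dual-card rig
```

Poll those in a loop and you have the raw feed a TUI can render; the actual tool handles the display side on top of data like this.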

Stack:
- Ubuntu headless server
- vLLM backend
- Dual RTX 5080
- Ryzen 9950X3D

You can find the full documentation and repo here:

https://github.com/magplumber/ml-pipeline-telemetry

u/Dazzling_Theory_3316 — 19 days ago