r/CLI

🔥 Hot ▲ 332 r/CLI

Gloomberb - open-source finance terminal

Hi!

I made this for fun over the past few days; I wanted to try my hand at making a TUI.

Github: https://github.com/vincelwt/gloomberb

It's inspired by Bloomberg terminal but everything is extendable with plugins.

The biggest challenge has been building the charts, which required diving deep into the Kitty graphics protocol (if you have a Kitty-compatible terminal, it'll look better!).
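For anyone curious what "Kitty graphics" means in practice: the terminal accepts base64-encoded image data inside an APC escape sequence. A minimal sketch of the protocol's simplest form (not Gloomberb's actual code; the payload here is just an embedded 1x1 PNG):

```shell
# Kitty graphics protocol, minimal form: transmit and display a PNG.
# f=100 -> payload is PNG data; a=T -> transmit and show it immediately.
# The payload is a base64-encoded 1x1 PNG, small enough for one chunk.
png_b64="iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mP8z8BQDwAEhQGAhKmMIQAAAABJRU5ErkJggg=="
printf '\033_Gf=100,a=T;%s\033\\' "$png_b64"
```

Larger images have to be sent in 4 KB chunks with `m=1`/`m=0` continuation flags, which is presumably where the "dive deep" part comes in.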

And shoutout to OpenTUI (by the makers of Opencode), which was a great help in building the layout.

It also supports placing trades, though I've only added support for IBKR (since that's what I use).

PRs are welcome!

Let me know what you think.

u/tim_toum — 23 hours ago
🔥 Hot ▲ 138 r/CLI+1 crossposts

vimyt - a vim TUI YouTube Music player

I missed having a TUI for YouTube Music with nice vim keybindings and radio mix generation, so I created my own that fits my workflow better.

vimyt uses yt-dlp and mpv under the hood.
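For context, "yt-dlp and mpv under the hood" usually means plumbing like the sketch below (a generic pattern, not vimyt's actual code): `yt-dlp -f bestaudio -g` resolves a watch URL to a direct stream URL without downloading, and mpv plays it audio-only.

```shell
# Generic yt-dlp + mpv plumbing: resolve a watch URL to a direct
# audio stream URL, then hand it to mpv for audio-only playback.
play_audio() {
  stream=$(yt-dlp -f bestaudio -g "$1") || return 1  # -g: print URL only
  mpv --no-video "$stream"
}
```

Usage would be something like `play_audio "https://music.youtube.com/watch?v=VIDEO_ID"` (the ID is a placeholder).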

Github repo: https://github.com/Sadoaz/vimyt

Would love feedback and suggestions :)

u/Ghqsthero — 17 days ago
Helius v0.1.4: Linux release, Docker support, and I’d like feedback for what to build next
▲ 18 r/CLI+1 crossposts


I’ve been working on Helius, a local-first personal finance tracker built in Rust with a CLI/TUI and SQLite.

I recently pushed v0.1.4 to GitHub, and the main focus of this update was improving how people can actually run it outside my own setup.

What changed in v0.1.4:

- Linux x86_64 release support

- Docker support

repo: https://github.com/STVR393/helius-personal-finance-tracker

If anyone here tries it, I’d really like your feedback. I want the next updates to be driven by user feedback rather than just my own assumptions.

Even short comments like “this flow is confusing” or “this needs X before I’d use it” would help a lot. Thank you for your time! <3

u/Pupzee — 8 hours ago
A hardware monitor in C++ that also watches Docker, open ports, web servers, cron jobs and GPU usage per process
▲ 33 r/linux+1 crossposts


A lightweight hardware monitor that gives you a full picture of your system in one place.
* CPU, memory, disk, and network monitoring
* Docker container tracking
* Open ports visibility
* Web server monitoring (Apache, Nginx)
* Cron job tracking
* Per-process GPU usage
* Single binary, no dependencies, just drop and run
GitHub: github.com/sdk445/hmon

u/S-for-seeker-9526 — 13 hours ago
A headless web browser for AI agents with JS - (single binary, no dependencies, no fees, local)
▲ 35 r/ClaudeAI+1 crossposts


browser39 is a headless web browser designed specifically for AI agents. It converts web pages to token-optimized Markdown locally, runs JavaScript, manages cookies and sessions, queries the DOM, and fills forms. Single binary, no external browser needed.

- MCP (stdio + HTTP) for Claude Desktop, Claude Code, and any MCP client

- JSONL file-based IPC for any language (Python, Node, Rust, shell)

- CLI for one-shot fetch

- Claude Code CLI/Desktop plugin

Features: content preselection, JavaScript execution, encrypted session persistence, form filling, auth profiles that keep credentials out of the LLM context, DOM queries via CSS selectors and JS expressions.

Drop-in examples included for Python, TypeScript, and Rust with LLM tool definitions ready to copy-paste.
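The "JSONL file-based IPC" is the most language-agnostic piece: one JSON object per line, appended to a file, with replies matched by id. The sketch below illustrates only the general pattern; the field names are made up for illustration, and browser39's real message schema is in its repo.

```shell
# Generic JSONL-over-files IPC (field names are illustrative, not
# browser39's real schema). Requests and responses are append-only
# files; each line is one self-contained JSON object with an "id".
req=$(mktemp) resp=$(mktemp)
printf '{"id":1,"cmd":"fetch","url":"https://example.com"}\n' >> "$req"
# Stand-in for the peer process: it would read $req and append here.
printf '{"id":1,"ok":true,"markdown":"# Example Domain"}\n' >> "$resp"
# The client tails $resp and picks out the line matching its id.
grep '"id":1' "$resp"
```

The appeal of the pattern is that any language that can append a line and poll a file can speak it, with no protocol library at all.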

GitHub: https://github.com/alejandroqh/browser39

Crate: https://crates.io/crates/browser39

Feedback welcome.

u/aq-39 — 22 hours ago
▲ 6 r/SideProject+3 crossposts

Devly v2.0.0 - CLI for 50+ dev tools on macOS (Base64, JSON, hashing, UUID, etc.)

Just added CLI support to Devly.

brew install aarush67/tap/devlycli

devly base64 "Hello World"
cat data.json | devly jsonformat > pretty.json
echo "password" | devly hash
devly cron "0 9 * * 1-5"
echo "test" | devly base64 | devly hash

Full tool list: https://devly.techfixpro.net/docs/tools/

CLI docs: https://devly.techfixpro.net/docs/cli-usage/

Requires the Devly Mac App Store app; the CLI delegates all processing to it and runs it headlessly.

u/Economy-Department47 — 6 hours ago
▲ 14 r/CLI+1 crossposts

CrunchyCleaner - Software cache cleanup tool for Windows & Linux.

  • Cross-Platform: Works on both Windows and Linux
  • Lightweight: Single binary, no dependencies (just download and run it)
  • TUI (Text-UI): Simple, minimalist interface, no confusing menus

AI was used for this project in some parts.

https://github.com/Knuspii/CrunchyCleaner

u/Agreeable-Can3340 — 16 hours ago
▲ 6 r/CLI

Hmon - Linux resource monitor with GPU, Docker, Cron, and Web monitoring, all available in Zen Mode.

https://i.redd.it/elz5r782a6tg1.gif

I recently launched the major version of hmon. It is helpful for developers who are watching Docker containers, open ports, cron jobs, and running web servers. It's an incredibly small and light binary. v1.0.0 is now available; please take a look.
u/S-for-seeker-9526 — 15 hours ago
▲ 3 r/CLI

Unix philosophy is the correct execution model for AI agents

Something clicked for me recently and I wanted to share because I think a lot of people building with AI agents are hitting the same wall without realising what's causing it.

Your agent has a finite context window. That's its working memory: everything it sees, thinks about, and receives from tools has to fit in there. When it fills up, the agent starts forgetting things, gives worse answers, and eventually just fails the task.

Here's the thing nobody talks about with MCP: every single tool call dumps the full schema, the request, and the entire JSON response back into that working memory. One call can cost 500-2,000 tokens in overhead. The data you actually wanted? Maybe 200 tokens. Run 20-30 of those in a task and you've burned through half your context on protocol plumbing. No wonder the agent starts losing the plot halfway through.

I work at TinyFish (we build web infra for AI agents) and we had a front row seat to this. We shipped an MCP server, it worked fine on small tasks, then watched it completely fall over on anything involving more than a handful of web operations. So we shipped a CLI on the same backend. Same API, same everything. Just a different access pattern.

The difference was kind of embarrassing. 45K tokens overhead on MCP vs 3K on CLI for the same task. Completion rates went from roughly 35% to 90%+.

And it makes total sense when you think about it. Claude Code, Cursor, these tools already live in a terminal. They have bash. They have a filesystem. When your agent runs a CLI command, output goes to a file. The agent reads the file when it needs to. Pipes work. Loops work. Parallel execution works. Intermediate results never touch the context window.
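The file-based pattern described above is easy to sketch; `big_tool` here is a hypothetical stand-in for any CLI that produces large output:

```shell
# Park the full output on disk, then pull only small summaries and
# targeted slices back into the (bounded) context window.
big_tool() { seq 1 10000; }          # stand-in for a chatty CLI

big_tool > /tmp/results.txt          # full output never enters context
wc -l < /tmp/results.txt             # cheap summary first...
grep -n '^4242$' /tmp/results.txt    # ...then a targeted query
```

The 10,000 intermediate lines cost the agent nothing; only the two short command outputs would land in its context.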

It's basically just the unix philosophy - stdin, stdout, everything is a file - applied to a problem nobody expected it to solve. Your agent's context window is bounded working memory, and unix was built to keep intermediate data out of bounded memory. The fit is almost too clean.

Anyway, not saying MCP is useless. It's great for tool discovery and there are real cases for it in multi-agent setups. But if you're building tools that agents will use for actual work, especially anything involving multiple operations, seriously consider shipping a CLI alongside it.

u/tinys-automation26 — 24 hours ago
▲ 1 r/CLI

Do you miss Command+K when not using Cursor?

https://i.redd.it/xsf693tj84tg1.gif

I don't always use Cursor, but I missed its simple Command+K utility when using Ghostty, iTerm, or terminals on Linux.

I also liked the UX of Spotlight on macOS, which doesn't interrupt whatever I was doing before and after launching it.

I combined these two ideas and made commandOK: use an LLM for commands that are hard to remember or type, then go back to a non-AI workflow on the terminal.

Here's GitHub repo https://github.com/64bit/commandOK

What do you think?

u/gigapotential — 22 hours ago