r/devtools

Do SaaS founders care whether AI agents can read their docs?

I’m working on a tiny tool for SaaS/API companies.

Problem: users increasingly ask Cursor/Claude/ChatGPT to integrate with a product, but the agent often misreads docs, misses prerequisites, or grabs outdated pages.

The tool generates:

  • /llms.txt (example of the format below)
  • /llms-full.txt
  • clean Markdown docs bundle
  • basic docs quality report
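
If you haven't come across the format, an /llms.txt is roughly a Markdown index for agents: a title, a one-line summary, and curated links to the pages that matter. A simplified sketch (made-up product and URLs, not output from my tool):

# ExampleAPI

> ExampleAPI is a payments REST API. These are the agent-friendly entry points to the docs.

## Docs

- [Quickstart](https://docs.example.com/quickstart.md): prerequisites and first request
- [Authentication](https://docs.example.com/auth.md): API keys and scopes
- [Webhooks](https://docs.example.com/webhooks.md): event types and retry behavior

## Optional

- [Changelog](https://docs.example.com/changelog.md)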

Question for SaaS founders: Would you pay a small one-time fee, say $3–$9, to generate this for your docs, or is this something you’d expect your docs platform to provide for free?

Happy to run a few docs sites manually and share the output.

u/dawksh — 3 hours ago
▲ 3 r/devtools+1 crossposts

I got tired of pasting sensitive JSON and AES keys into random online tools, so I built a 100% client-side suite of 77+ developer utilities.

Hey everyone,

Like most of you, I constantly need to format JSON, decode JWTs, check diffs, or encrypt strings during my day-to-day work. But pasting proprietary code or sensitive keys into random, ad-heavy websites always felt like a massive security risk.

I couldn't find a comprehensive, privacy-first solution, so I built CipherKit (cipherkit.app).

It’s a suite of 77+ developer and cryptography tools designed entirely around privacy.

The core focus:

  • 100% Client-Side: Everything runs locally in your browser. I built it using vanilla JavaScript, HTML, and CSS to ensure there is no server-side processing. Your data never leaves your device (see the sketch after this list).
  • No Login & Free: No paywalls, no accounts required.
  • Clean UI: Dark mode by default, built for fast keyboard navigation.
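
To make the client-side point concrete, here's a minimal sketch of AES-GCM encryption done entirely in the browser with the built-in Web Crypto API. It's illustrative only and not necessarily how CipherKit implements its AES tool:

// Minimal sketch of in-browser AES-GCM encryption using the Web Crypto
// API (SubtleCrypto). Illustrative only, not CipherKit's actual code.
async function encryptText(plaintext: string, password: string) {
  const enc = new TextEncoder();
  const salt = crypto.getRandomValues(new Uint8Array(16));
  const iv = crypto.getRandomValues(new Uint8Array(12));

  // Derive an AES key from the password with PBKDF2, all locally.
  const baseKey = await crypto.subtle.importKey("raw", enc.encode(password), "PBKDF2", false, ["deriveKey"]);
  const key = await crypto.subtle.deriveKey(
    { name: "PBKDF2", salt, iterations: 100_000, hash: "SHA-256" },
    baseKey,
    { name: "AES-GCM", length: 256 },
    false,
    ["encrypt"]
  );

  // Encrypt in the browser; nothing is sent to a server.
  const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, enc.encode(plaintext));
  return { iv, salt, ciphertext };
}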

Some of the tools included:

  • Crypto Hub: AES Encryption/Decryption, RSA, SHA Hash Generators.
  • Encoding Hub: JWT Debugger, Base64, URL Encoding.
  • Dev Hub: Interactive Text Diff & Merge, JSON Formatter & Validator.
  • Converter Hub: CSV to JSON, DOCX to HTML.

I’m Jana, and I built this to scratch my own itch, but I’m hoping it helps some of you keep your workflows a bit more secure.

I'd love to hear your feedback on the UI, any bugs you find, or tools you think I should add next.

Link: cipherkit.app

u/jana_0102 — 5 hours ago
🔥 Hot ▲ 108 r/devtools+3 crossposts

(Open Source) I built a second brain app where AI agents help you think — but you review every change before it happens

Most second brain apps stop at storage. You capture a note, tag it, link it, and hope you find it again someday. NeverWrite is built around the idea that your second brain should actually help you think, not just hold your thoughts. It's a local-first desktop app for macOS and Windows where your notes are plain Markdown files on your machine. No cloud sync, no account required, no telemetry. Your vault is yours.

The part I'm most excited about is the AI layer. NeverWrite supports agents powered by Claude, Codex, Gemini, and Kilo that work directly inside your vault. You can ask an agent to help you synthesize notes on a topic, find connections you missed, or draft a new note from your existing material. The key thing is that agents propose edits and you review them before anything changes, with inline review hunks like in modern code editors. The AI helps you process and connect your knowledge; it never rewrites your vault behind your back. That felt like the only honest way to build this.
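
For anyone curious what the review flow implies under the hood, here's a hypothetical shape for an agent-proposed edit with review hunks. This is my illustration, not NeverWrite's actual data model:

// Hypothetical structure for an agent-proposed edit in a
// review-before-apply flow. Not NeverWrite's actual data model.
interface ProposedEdit {
  file: string;          // path of the Markdown note inside the vault
  hunks: Array<{
    startLine: number;   // where the change begins in the original note
    removed: string[];   // lines the agent wants to delete
    added: string[];     // lines the agent wants to insert
  }>;
  accepted: boolean;     // nothing touches disk until the user flips this
}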

If you've been frustrated by second brain tools that are great at capture but useless at synthesis, or by AI tools that feel like a black box you can't trust, NeverWrite is trying to solve both at once. Happy to answer any questions about how the agent review flow works or anything else.

Also, it's open source ;)

https://neverwrite.app/

https://github.com/jsgrrchg/NeverWrite

Have fun with your vaults!

u/jsgrrchg — 2 days ago
▲ 3 r/devtools+1 crossposts

From debugging on prod to actually fixing errors

I was never a fan of debugging production errors in Axiom, Sentry, or pretty much any other existing tool, simple or not to set up. Once errors start coming in, finding them across a multitude of dashboards, learning proprietary query languages, and writing SQL-ish queries just to locate logs seems like too much. I want the error right there, visible, yelling at me that something is wrong and I have to fix it.

I started building this tool, Loguro, a few months back and put a lot of features into it. The principle was simple: be fast, be reliable, no context switching, and human queries, at least at the beginning.

Ingestion: Rust + Hono on top of Bun as the web server, with multiple layers of compression. I made sure no log escapes and there are no hiccups, and tested it again and again and again. It's fast: 45k+ req/s for single logs and 500k+ logs/s in batch, with instant visibility in the dashboard.

Dashboard: it also had to be fast, like really fast. I started pushing logs, million after million; 50M logs queried in under 500ms.

Filter bar: human syntax, very human. For example: level:error message:"oops something broke" from:"yesterday at 10:05" to:"yesterday at 12" context.user_id:1009. And it can go on; there are a lot of queries available. But I couldn't stop there. I like human syntax, but I also like commands. I want my filter bar to do things, not only request data from the backend. So plugins appeared.

Plugins are basically a way to extend the main inputs in the app: the command bar, the filter input, and the filter on the log page. From each input you can run commands that integrate with Jira, GitHub, Linear, etc.: create a Jira ticket without switching context, send the ticket and log details straight to Slack, lots of plugins like that. But two of them made my life waaaay easier, already tested for two months on a client project I was having issues with.

--investigate and --share:md are two plugins that work on this exact sequence: find an error, open the log, type --investigate, and hit cmd+enter. If the GitHub integration is configured and codebase access is granted, the AI analyzes the log, fetches relevant data from GitHub, retries until it finds something, and spits out the issue it found plus a solution and, where needed, suggestions to improve the logging so it understands more next time. Once the investigation is done and you know what happened, --share:md creates a shareable link with a Markdown view and one with an HTML view (in case some human needs it). Personally, I take the Markdown URL, go to Claude, give it the link directly, and let it fix, review, and test, then push.

Now what? Hope it never happens again? Nope: pin that log. I want to watch it for a few days, add a note to the pin in case I forget what it was about, and leave it there. If the issue comes back, I'll see it in the dashboard (I can create alerts, but I don't like my heart rate spiking).

If I created a Jira task for that issue from the app, then whenever the issue appears again I get a badge on it showing me it happened again. Clicking it gives me the full details about what happened WITHOUT LEAVING FOR JIRA. I'm debugging; if I switch tabs I might get lost. Every logging system has a retention period: want more, pay more. Mine doesn't work like that. You get a max of 120 days on the scale plan, but all plans benefit from memory. What does that mean? If I create a task from a log and time goes by, 1, 2, 10 months, my noise logs are pretty much gone, but the very log that triggered the task will still be there. FOREVER.

All my servers are monitored through Loguro. It accepts JSON and OTLP, so all the servers I have and all the client servers I manage send logs to it: failed SSH attempts, fail2ban bans, successful logins, memory/CPU/network spikes. Again, EVERYTHING goes to Loguro. Full visibility.

That's Loguro, a logging system I built to suit my needs as a developer in the new era of AI development.

I am the only user for now. If anyone is interested in seeing it, here is the link.

https://logu.ro

u/Late-Potential-8812 — 24 hours ago
▲ 7 r/devtools+1 crossposts

Support engineers using Cursor: there might be a better fit

No vendor spam, but I keep seeing support and ops people adopt Cursor because their dev counterparts swear by it. It works, sort of... but it's the wrong shape of tool for what most support work actually looks like.

Cursor is built for sitting inside a codebase for hours. Most support work isn't that. A normal day for a T2/T3 or escalation engineer is more like:

  • Tailing logs across five customer environments
  • Running ad-hoc DB queries to verify a repro
  • SSHing into a box that's misbehaving
  • Grepping traces, pivoting between tools, context-switching every few minutes

That workflow lives in a terminal, not an editor. That's why I'm building yaw.sh, a terminal with AI built directly in.

What that actually means:

  • AI right at the prompt. Pipe a log, paste a stack trace, ask "why is this pod crashlooping" with the output you just pulled... without leaving the shell or alt-tabbing to a chat window.
  • Multi-provider. Claude, OpenAI, whatever. Switch per task instead of getting locked in.
  • Encrypted connection management built in. SSH keys, jump hosts, per-customer creds, one pane, encrypted, no more hunting through ~/.ssh/config or pasting things out of 1Password notes.

Where it lands vs Cursor:

Cursor is the right tool if your day is writing features. Yaw is the right tool if your day is reading logs, running queries, and SSHing into systems you've never seen before. They don't really compete. They just both have "AI" in the description, so people lump them together.

If you're a support engineer who's been using Cursor as a glorified ChatGPT wrapper, you can probably get the same value (and a lot less context-switching) from a terminal that just has the model right there.

Available on macOS and Windows. Linux on the roadmap.

u/Substantial-Bee-8186 — 3 days ago
▲ 2 r/devtools+1 crossposts

I built a clean cron expression explainer because I kept Googling it every week

Every time I needed to understand a cron expression, I'd end up on some ancient, ad-covered website. So I built a clean alternative: devutilixy.com

It explains any cron expression in plain English, shows the next 8 run times in your timezone, and validates the syntax. Runs entirely in your browser.
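
If you're wondering what computing "next run times" involves, here's a rough sketch that walks a simplified cron schedule forward. It only handles "*" and comma lists (no ranges, steps, or the day-of-month/day-of-week OR rule) and is not the site's actual implementation:

// Parse one cron field: "*" expands to the full range, otherwise a comma list.
function parseField(field: string, min: number, max: number): Set<number> {
  const values = new Set<number>();
  if (field === "*") {
    for (let i = min; i <= max; i++) values.add(i);
  } else {
    for (const part of field.split(",")) values.add(parseInt(part, 10));
  }
  return values;
}

function nextRuns(expr: string, count: number, from = new Date()): Date[] {
  // Five fields: minute, hour, day-of-month, month, day-of-week.
  const [min, hour, dom, mon, dow] = expr.trim().split(/\s+/);
  const minutes = parseField(min, 0, 59);
  const hours = parseField(hour, 0, 23);
  const days = parseField(dom, 1, 31);
  const months = parseField(mon, 1, 12);
  const weekdays = parseField(dow, 0, 6);

  const runs: Date[] = [];
  const cursor = new Date(from);
  cursor.setSeconds(0, 0);
  // Walk forward one minute at a time until enough matches are found.
  while (runs.length < count) {
    cursor.setMinutes(cursor.getMinutes() + 1);
    if (
      minutes.has(cursor.getMinutes()) &&
      hours.has(cursor.getHours()) &&
      days.has(cursor.getDate()) &&
      months.has(cursor.getMonth() + 1) &&
      weekdays.has(cursor.getDay())
    ) {
      runs.push(new Date(cursor));
    }
  }
  return runs;
}

// "0 9,17 * * *" = 09:00 and 17:00 every day; show the next 8 runs.
console.log(nextRuns("0 9,17 * * *", 8).map(d => d.toLocaleString()));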

Also added JSON formatter, Base64 encoder, JWT decoder, timestamp converter and URL encoder — all free, no signup, nothing sent to any server.

Would love feedback on what's working and what's not.

u/Economy_Criticism839 — 3 days ago
▲ 9 r/devtools+5 crossposts

security teams treat staging environments like production but developers treat them like playgrounds

noticed something odd during a security audit last week

our security team had all these controls on staging - same monitoring, same access restrictions, same vulnerability scanning as prod

made sense to them because staging has real customer data for testing

but then i watched how developers actually use staging

people are constantly:
* deploying half-finished branches to test integration
* running experimental queries directly against the database
* temporarily disabling auth to debug frontend issues
* leaving debug endpoints enabled for weeks
* sharing staging credentials in slack channels

basically treating it like a sandbox where normal rules don't apply

meanwhile security is scanning it like it's fort knox and freaking out about every vulnerability

the fundamental assumption clash is wild - security assumes staging is locked down like prod, developers assume it's a safe space to break things

both perspectives make sense in isolation but they can't coexist

feels like either staging needs to be treated as genuinely production-equivalent (which means developers lose their testing playground) or security needs to accept that staging has a different risk model

but nobody wants to have that conversation because it means admitting that either security is being too paranoid or developers are being too reckless

have you seen teams actually resolve this tension?

do you treat staging security like prod, or do you have separate policies that account for how developers actually need to use it?

u/Kolega_Hasan — 3 days ago

How long will our dependencies survive? Built an ML model to find out

repovital.com

Kept adopting dependencies that died later, so I built a tool to try to catch it early. It's an ML model that scores GitHub repo health 0–100, using commit velocity, contributor concentration, PR merge rate, that kind of stuff. SHAP values explain each score, so it's not a black box.
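
For a sense of where raw signals like "days since last commit" come from, here's a sketch that pulls a couple of them from the GitHub REST API. The actual feature set and the model that turns them into a score are repovital's own and aren't shown here:

// Sketch: fetch a couple of raw health signals from the GitHub REST API.
// Illustrative only; repovital's real features and scoring model differ.
async function basicSignals(owner: string, repo: string) {
  const res = await fetch(`https://api.github.com/repos/${owner}/${repo}`, {
    headers: { Accept: "application/vnd.github+json" },
  });
  if (res.status === 404) return { status: "missing" };   // private or deleted repo
  const data = await res.json();
  if (data.archived) return { status: "archived" };        // skip scoring entirely
  const daysSincePush = Math.round(
    (Date.now() - new Date(data.pushed_at).getTime()) / 86_400_000
  );
  return { status: "scoreable", daysSincePush, openIssues: data.open_issues_count };
}

basicSignals("facebook", "create-react-app").then(console.log);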

Best sanity check so far: facebook/create-react-app scored 58 (Watch). The model has no idea Meta officially deprecated it; it just saw 437 days since the last commit, declining activity, and thinning contributors, and flagged it.

Here's how some well-known repos score:

supabase/supabase                  94  Healthy
vllm-project/vllm                  96  Healthy
babel/babel                        91  Healthy
gruntjs/grunt                      90  Healthy  # still alive?
facebook/create-react-app          58  Watch
gulpjs/gulp                        42  At Risk
bower/bower                         8  Critical
trufflesuite/truffle               Archived

A few edge cases it handles well:

  • Archived repos: detected as Archived; they never even touch the ML pipeline
  • Repos under 6 months old: Unscored, with an explanation that there isn't enough signal history
  • Private/missing repos: simple 404

You can score any public repo; it takes about 3 seconds. There's also a README badge; every repo that adds one helps surface the tool to more devs.

Would genuinely love feedback on scores that feel wrong; that's the best signal for improving the model. Happy to dig into what the model saw for any specific repo.

u/tva_variant — 2 days ago

I built a free Jira app that auto-generates your daily standup using AI

It's free. I built it for myself and figured other people might find it useful too.

Wanted to share a side project I just launched. It's a Jira app that automatically writes your daily standup by reading your tickets and GitHub activity, then posts it to Slack or Teams.

Marketplace link: https://marketplace.atlassian.com/apps/542311656/auto-standup-bot

Quick demo: https://www.youtube.com/watch?v=ES-rX_oDP_c (doesn't show automation but it shows the setup)

Would love any and all feedback or suggestions. Happy to answer questions about the build too (it's built on Atlassian Forge with React + TypeScript).

u/Abject-Cockroach-533 — 4 days ago

I built a terminal tool that records UI flows and turns them into test suites

Hey folks,
I’ve always found UI testing painful: either you write brittle scripts or rely on tools that don’t really match how you use your app.

I built an open-source CLI tool where you record your UI interactions and it generates comprehensive test suites automatically.

It tries to cover: edge cases, different input variations, navigation paths, and failures you didn’t explicitly think of. Still early, but it’s already catching cases I’d normally miss.
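
To give a feel for the kind of output I mean, here's a hypothetical example of an edge-case test a recorded flow might expand into (not actual output from the tool; the URL and labels are made up):

// Hypothetical example of a generated edge-case test; not actual output
// from the tool. URL and labels are made up for illustration.
import { test, expect } from "@playwright/test";

test("login: wrong password shows an error", async ({ page }) => {
  await page.goto("https://example.com/login");
  await page.getByLabel("Email").fill("user@example.com");
  await page.getByLabel("Password").fill("not-the-password");
  await page.getByRole("button", { name: "Sign in" }).click();
  await expect(page.getByRole("alert")).toContainText(/invalid/i);
});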

I’m launching it on Product Hunt today and would genuinely love feedback from people here, especially if you’ve worked with Playwright/Cypress.

PH link: https://www.producthunt.com/products/kushoai/launches/kushoai-for-playwright

Happy to answer anything / take brutal feedback 😊

u/Cultural_Piece7076 — 2 days ago

stopped tabbing out of my terminal to ask questions and honestly cannot go back

used to have my terminal on one side and a chat window on the other. constant switching. broke my focus every single time.

started using a terminal that has chat built right into it. same window, same session, no switching. ask something, get an answer, keep going.

sounds small but it genuinely changed how i work. way less friction. errors get fixed right where they happen instead of me copying them into a separate window.

if you spend most of your day in the terminal it is worth trying. yaw.sh is the one i landed on, free to use.

u/CodinDev — 16 hours ago
▲ 10 r/devtools+4 crossposts

Chronicle: Open source SwiftUI app for managing Claude Code session history

Built a native macOS app with SwiftUI for browsing Claude Code session history.

Uses GRDB.swift with FTS5 for full-text search across hundreds of JSONL session files. The architecture is pretty simple:

- SessionManager handles async file watching and indexing

- SearchView uses Combine for debounced search

- SessionDetailView renders conversation turns with syntax highlighting

Key features:

- Instant full-text search

- One-click session continuation in Terminal

- Pin and tag sessions

- Timeline view

Everything runs locally, no cloud required.

Open source (MIT): https://github.com/JosephYaduvanshi/claude-history-manager

Happy to answer questions about the SwiftUI patterns used.

u/joseph_yaduvanshi — 6 days ago

I built a tool that turns a Sentry URL into a failing pytest. Want honest feedback on whether this is useful

I was doing backend work the other day and kept running into the same thing. Every time a production bug hit, I'd spend 30-45 minutes doing the same loop - read the Sentry trace, manually reconstruct the state, write a pytest, run it, realize I got the inputs slightly wrong, fix the test, run it again. By the time I had a reproducing test I'd burned nearly as much time on the repro as on the actual fix.

So I started building something to automate it.

The idea is that you paste a Sentry issue URL; it pulls the stack trace and frame locals, synthesizes a failing pytest that reproduces the exact crash, runs it in a Docker sandbox against your current branch, and tells you whether the crash still reproduces or your branch already fixed it.

The part I think actually matters is the frame locals. It captures the exact production state at the crash frame and replays it. So the test is asserting against what actually broke in prod, not a guess at what might break. Works with any Python traceback too, Sentry is just the cleanest input.

Before I go further with this, two honest questions:

  1. Do you actually write a local repro test before fixing a production bug, or do you read the trace, understand it, fix it, and deploy?
  2. If this worked reliably and saved you that 30-45 minutes, would you pay for it or is this only useful if it's free?

Just trying to figure out if I'm solving a real problem or one I invented for myself. If this matches something you deal with, I'd genuinely like to hear how you currently handle it.

u/sszz01 — 5 days ago
▲ 3 r/devtools+2 crossposts

Get your AI writing clean business logic, drop token usage dramatically, and get trivial, easily readable PRs (beta testers wanted)

Hey r/devtools,

We’ve built Graftcode — a lightweight runtime that lets you create an architecture optimized for AI-assisted development.

You write only clean business logic. No controllers, no DTOs, no code dedicated to specific integration methods, no Proto, no Thrift, no client — just pure public methods that can be called and consumed directly.

This changes how AI works with your codebase:

Dramatically lower token usage — AI stops wasting tokens on boilerplate and infrastructure

Much better focus — models stay on real business logic and produce cleaner, more correct code

PRs become trivial and highly readable — code reviews turn from painful diffs into short, obvious changes anyone can understand in seconds

The same pure methods are automatically exposed as MCP tools for Claude, ChatGPT, Cursor and other AI agents — with zero extra code.
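
To illustrate what "only business logic" could look like in practice, a module might be nothing more than a plain class with public methods. This is my sketch, not Graftcode's actual API or conventions:

// My sketch of the "pure business logic" style described above; how
// Graftcode actually discovers and exposes methods is not shown here.
export class Invoicing {
  // A plain public method: simple inputs, simple return value,
  // no controller, DTO, or protocol-specific code around it.
  calculateTotal(lines: { qty: number; unitPrice: number }[], taxRate: number): number {
    const subtotal = lines.reduce((sum, l) => sum + l.qty * l.unitPrice, 0);
    return Math.round(subtotal * (1 + taxRate) * 100) / 100;
  }
}
// Per the post, the runtime would expose methods like this over HTTP,
// queues, or as MCP tools without any extra integration code.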

Additional high-level capabilities:

• Start as a modular monolith and evolve to microservices later without changing any application code

• Run Python + .NET + Go + Node.js modules together in one process like a single unified system

• Expose your business API as a simple, always-up-to-date package anyone can consume in seconds

• Stay fully decoupled from communication layers — swap protocols, queues or clouds anytime with zero code changes

We’re currently opening a small closed beta.

If you want to try it and share feedback, join at: academy.graftcode.com

Happy to answer any questions in the comments, hop on a quick call, or chat on our Discord.

Does this solve any pain points you’re currently hitting with AI in your development workflow?

u/pladynski — 5 days ago

Built an open-source tool to run and document commands in one place

I built a small open-source terminal plugin called Prompty while working on my own workflow.

The idea came from a simple problem — I often run commands during setup or debugging, but later I forget:

  • what commands I ran
  • in what order
  • what actually worked

So I tried a different approach:

  • Write commands on the left
  • Execute them directly
  • See output on the right
  • Keep everything saved for future reference

Even after closing the terminal, the commands and steps stay saved, so I can revisit them later.

More broadly, I’m trying to keep everything related to a project in one place — that’s why I built DevScribe:

  • LLD / HLD documentation
  • Executable APIs
  • Database queries
  • Diagrams (draw.io, Mermaid, Excalidraw, etc.)
  • Code snippets
  • Terminal commands and setup steps

Download: https://devscribe.app/

Note: You need to install the Prompty plugin in the DevScribe editor. If you run into any issues, DM me.

u/Limp_Celery_5220 — 7 days ago

GitHub trending tracker built for contributors. Shows open-issue counts alongside growth so you can find projects you can actually help with

The workflow this solves: I want to contribute to open source, I check GitHub trending, I see what's popular, but I have no idea which of those repos has a contributor-friendly issue queue. So I open tabs, drill into Issues, scan for help-wanted labels, get tired, close everything.

This tool shows both axes in one view. Top 360 repos in AI/ML and SWE, sorted by stars / forks / 24h growth / momentum. Each row pulls live open-issue counts from GitHub split into features, bugs, and enhancements.
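
For the curious, per-label open-issue counts can be pulled from the GitHub Search API along these lines (the tracker's actual implementation may differ):

// One way to get an open-issue count for a given label via the GitHub
// Search API. Illustrative; the tracker's actual implementation may differ.
async function openIssueCount(repo: string, label: string): Promise<number> {
  const q = encodeURIComponent(`repo:${repo} is:issue is:open label:"${label}"`);
  const res = await fetch(`https://api.github.com/search/issues?q=${q}&per_page=1`, {
    headers: { Accept: "application/vnd.github+json" },
  });
  const data = await res.json();
  return data.total_count; // total matches, not just the first page
}

openIssueCount("ollama/ollama", "bug").then(n => console.log(`${n} open bugs`));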

The pattern that emerges when you put both axes together:

  • Megaprojects (Linux, React, transformers) are popular but have tight issue queues. Hard to break in.
  • Stagnant repos have lots of open issues but no momentum. Your PR sits forever.
  • Mid-size rising repos with healthy issue counts are the actual contributor sweet spot. Visible work, responsive maintainers, real entry points.

This tool makes that third category easy to find.

A few examples from today's data:

  • openclaw: AI assistant repo, +572 stars in 24h, 913 open enhancements
  • everything-claude-code: agent harness, +1.1k stars in 24h, 145 open enhancements
  • ollama: +75 stars, 28 open issues, very active maintainer team

Project link is in the comments below 👇

Built by NEO AI Engineer. Posting here because the contributor-flow angle felt like a fit for this subreddit.

u/gvij — 8 days ago

Managing multiple AI agents in the terminal is painful. Built a UI with agent awareness

If you're running multiple AI coding agents in parallel, you probably hit this:

they’re all just terminal processes with zero visibility.

You end up constantly context-switching to check:

- is this one stuck?

- is it waiting for input?

- did it finish already?

I built a tool to make this manageable.

Conceptually it's:

tmux + basic agent awareness + lightweight IDE features

Key parts:

- auto-detection of common agents (Claude Code, Aider, Codex, Gemini)

- runtime state tracking (running / waiting / idle)

- notifications when input is needed

- multi-pane + tabbed workflows

- works with local models (Ollama) and remote APIs

No cloud, no lock-in, just orchestration.

Curious how others here are handling multi-agent workflows today.

https://github.com/sstraus/tuicommander

u/Legal-Tie-2121 — 8 days ago

I built an efficient Playwright library that resolves elements from plain English instructions and caches the results

I'm the author. Here's how this started.

I was using Claude Code to generate E2E Playwright tests for a project. It worked, the tests ran green, but I couldn't really trust them. Each test case needed me to manually open the browser and verify it was actually doing what I intended, which kind of defeated the point.

I started thinking: what if tests were written in plain English so I could read them and know they're correct without running them? But fully natural language tests felt like a different problem. Too unpredictable, hard to assert against, not worth the instability tradeoff.

So I looked for something in the middle: keep Playwright's execution model, replace just the selectors with plain English. I found ZeroStep and auto-playwright, both abandoned, slow, and expensive to run in CI. There is also Midscene.js, which is active but relies on the full DOM combined with visual context, which adds latency and cost at scale.

So I built Qortest (https://github.com/vikas-t/qortest).

const t = qor(page);
await t.act("Click the submit button");
await t.act("Type <hello@example.com> in the email field");
const count = await t.query("How many items are in the cart?");

Under the hood: aria snapshot of the page (much smaller than the full DOM or a screenshot), LLM returns a structured locator like { role: "button", name: "Submit" }, Playwright executes it. Deterministic. No screenshots, no free-form JS generation.

The slow/expensive problem: I cache the resolved selector keyed by browser + URL + instruction. Subsequent runs replay the cache, zero tokens. Fingerprint-based invalidation handles page structure changes.
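
Roughly, the cache key is just a hash of those three inputs, something like this (my illustration, not the library's exact code):

import { createHash } from "node:crypto";

// Rough illustration of keying the selector cache by browser, URL, and
// instruction; not qortest's exact implementation.
function cacheKey(browserName: string, url: string, instruction: string): string {
  return createHash("sha256")
    .update([browserName, url, instruction].join("\n"))
    .digest("hex");
}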

Numbers from a 25-test suite on the-internet, gpt-4.1-mini, 3 workers:

Mode             Time       LLM calls   Cost
Cold (no cache)  ~1.5 min   51          ~$0.13
Warm (cached)    ~57s       ~5          ~$0.007
Raw Playwright   ~49s       0           $0

Warm is about 15% slower than raw Playwright. That's the honest tradeoff.

A few other things worth mentioning:

  • Drops into existing Playwright tests. One import, no new runner.
  • Supports Chromium and Firefox.
  • BYOK, any OpenAI-compatible endpoint.
  • Configurable fallback model: if the primary model fails to resolve an element, it retries with a stronger one automatically.
  • Ships a reporter that shows per-test LLM calls, cache hits, and cost, so you know exactly what you're spending and why.

Still in progress: vision fallback for icon-only UI with no accessible name, and WebKit is untested.

MIT licensed. Happy to answer questions.

GitHub: https://github.com/vikas-t/qortest

u/Wooden-Profile4507 — 5 days ago

Spent way too long tab-switching to convert epoch timestamps while debugging. Made a tool that does all of them at once.

Not sure if this is just me but debugging API responses with multiple timestamp fields has always been annoying.

You see 1714000000 in a JSON payload and have to:

  1. Open new tab
  2. Search epoch converter
  3. Paste the value
  4. Note the date
  5. Go back
  6. Do it again for last_login, updated_at, expires_at...

I finally got fed up and built JSON Epoch Converter — you paste your raw JSON and it replaces every epoch field with a human-readable date in one click.

The thing I couldn't find in existing tools was support for mixed precisions in the same payload. Real-world JSON often has one field in seconds and another in milliseconds. Most converters assume one or the other. This auto-detects each field independently.
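
The per-field detection boils down to a magnitude check. A rough sketch of the idea, where the thresholds are my assumptions rather than the site's exact rules:

// Walk a parsed JSON value and convert integers that look like epoch
// timestamps, guessing seconds vs milliseconds from magnitude.
// Thresholds are illustrative assumptions, not the site's exact rules.
function convertEpochs(value: unknown): unknown {
  if (typeof value === "number" && Number.isInteger(value)) {
    if (value >= 1e12 && value < 1e14) return new Date(value).toISOString();       // milliseconds
    if (value >= 1e9 && value < 1e11) return new Date(value * 1000).toISOString(); // seconds
    return value; // too small or too large to be a plausible epoch; leave alone
  }
  if (Array.isArray(value)) return value.map(convertEpochs);
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) => [k, convertEpochs(v)])
    );
  }
  return value;
}

const payload = { created_at: 1714000000, expires_at: 1714003600000, name: "x" };
console.log(JSON.stringify(convertEpochs(payload), null, 2));
// created_at (seconds) and expires_at (milliseconds) both become ISO dates; name is untouched.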

jsonepochconverter.org

Free, nothing to install, works in the browser. Let me know if you run into issues or want features added.

u/Automatic_Rub_4867 — 8 days ago

Gitember 3.2 Git GUI client

I've been building Gitember since 2016 — a free, open-source Git desktop client. It started as a weekend experiment, and now version 3.2 is out with new features:

  • Worktrees - full UI support for creating, switching, and removing worktrees. If you juggle hotfix branches while keeping a long-running feature branch alive, this is the workflow improvement you've been waiting for.
  • 3-way merge conflict resolver - BASE / OURS / THEIRS side-by-side. Pick a side, edit inline, stage with one click. No separate merge tool to install.
  • AI-assisted writing (experimental) - explains what changed between two branches in plain language, plus secret leak detection (is your GPU good enough?)

It also covers everyday Git stuff (commit, branch, diff, etc.), but a couple of things I personally rely on a lot:

  • search through history including non-text formats (Office docs, DWG, PSD, etc.)
  • arbitrary file/folder comparison

The last one is a very useful feature these days, when you need to quickly compare a lot of AI-generated changes.
Site: https://gitember.org/

Contributions, feedback, and suggestions are welcome.

u/ConfidenceUnique7377 — 4 days ago