r/codereview

▲ 4 r/codereview+2 crossposts

We vibe code. We speak things into existence. But sometimes the robot gets a little too excited and gives us a 500-line monster file, a function that does literally everything, and ten // TODO: implement this comments sprinkled around like confetti.

Got tired of negotiating. So I built a tiny ESLint plugin called eslint-plugin-ai-guardrails that just tells the AI assistant to chill out. Four rules: max file lines (300), max function lines (50), no orphan TODOs without a deadline, and no comments that just repeat the code like // set x to 5 right above const x = 5.

Now my robot and I have a better relationship. It still generates code, but the guardrails keep things tidy. One-command setup: npx eslint-plugin-ai-guardrails init
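For context, wiring a plugin like this into ESLint's flat config would look roughly like the sketch below. Heads up: the rule names here are my guesses based on the post's description, not the plugin's actual identifiers, so check its README before copying.

```javascript
// eslint.config.js -- hypothetical sketch; rule names are guessed from
// the post's description, not taken from the plugin's documentation.
import aiGuardrails from "eslint-plugin-ai-guardrails";

export default [
  {
    plugins: { "ai-guardrails": aiGuardrails },
    rules: {
      "ai-guardrails/max-file-lines": ["error", { max: 300 }],
      "ai-guardrails/max-function-lines": ["error", { max: 50 }],
      "ai-guardrails/no-orphan-todos": "error",      // TODOs need a deadline
      "ai-guardrails/no-redundant-comments": "error" // no `// set x to 5`
    }
  }
];
```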

npm: npmjs.com/package/eslint-plugin-ai-guardrails
GitHub: github.com/isaacnewton123/eslint-plugin-ai-guardrails

Anyone else build tools just to keep the AI in check? Share your "negotiation" stories!

u/Hot-Pea4514 — 1 day ago
▲ 94 r/codereview+1 crossposts

Hey everyone. I recently picked C back up after years away from it, and to get back into the mindset I implemented SHA-256 from scratch.

One deliberate choice: no pre-generated lookup tables. Both the K constants and the initial H values are derived at runtime from the first 64 primes, following the spec. The point was to understand where the constants come from, not just copy them in.

I also added a small interactive debug shell to inspect the internal state round by round.

Not production-ready, clearly. What I'm really after is feedback on code quality: structure, clarity, and anything that could be done better. I'm getting back into C and want to improve.

Note: I'm using camelCase rather than snake_case, including for function-like macros. It's just a personal style preference; happy to hear thoughts on that too.

Also: if you want to compile it, be aware that I'm using a GCC built-in function, so other compilers may need tweaks; portability is on my list.

Repo: https://git.dksengine.com/dks/SHA-256

u/Stock_Hudso — 11 days ago
▲ 17 r/codereview+7 crossposts

hi everyone.

I'm new to releasing fully open source projects to the world, so I'm shooting my shot.

I've been working on something very cool over the past few weeks, something I'd had in mind for a long time but could never quite solve. After some long nights researching and reading about the deepest parts of git, I think I managed to solve it!

The core issue: git is NOT built for AI-driven development.

- Undoing work is almost impossible (/rewind works poorly, imo).

- Knowing which session/context caused which change: the "why did you change it?" question only works if you're still in the same session (and not after a /compact).

- There's no way to view the file tree in correlation with the actual context and prompts.

- Forking/branching, i.e. splitting conversation context into a new conversation (branching, basically), isn't supported.

- And much more, but you get the idea.

At the moment I keep releasing new features and fixes. I've released an alpha version that still needs some work, and I'm looking for feedback and possibly some contributions.

https://github.com/regent-vcs

Would love to hear what the community thinks.

notes:

- At the moment I only support Claude Code.

- There are two repositories: one for the actual CLI, another for the VS Code extension.

u/Immediate-Landscape1 — 7 days ago
▲ 1 r/codereview+1 crossposts

Spent my weekend building a code review tool to avoid doing code reviews. It's called sift: open source, free, one YAML file, and it actually gets smarter the more you use it. No rights reserved.

Check it out: https://sift-agent.com.

Full story on how this one didn't just sit on my todo app: https://medium.com/@sahilcs1111/i-built-an-ai-code-reviewer-that-runs-for-free-83488bf48338

would love feedback and contributions, especially if you break it.

u/sahilsaleeeem — 10 days ago
▲ 3 r/codereview+2 crossposts

Hi guys, I'm currently torn between the TRAE IDE Pro plan and the Opencode Go coding plan. What I'd like to know is the quota limits of the two plans: how far/long can I stretch them?

And I'd love to hear some thoughts about the Chutes.ai Pro plan, Warp Pro plan, and Augment Code Pro plan if possible.

u/Flwenche — 6 days ago
▲ 0 r/codereview+1 crossposts

Need help debugging an AI/ML risk analysis project (React + Node + FastAPI)

Hey everyone, I’ve been building a full-stack AI-powered risk analysis platform for my portfolio, but I discovered my ML pipeline is completely broken and I’m honestly stuck.

Tech stack:

  • React/Vite frontend
  • Node.js/Express backend
  • FastAPI ML service
  • XGBoost, RandomForest, IsolationForest

Main issue:
No matter what input I give, the app almost always returns LOW risk.

After deep debugging + CodeRabbit review, I found multiple architecture issues:

  • frontend calling wrong ML endpoints
  • silent fallback scoring overriding ML
  • disconnected ML pipelines
  • payload mismatches
  • inconsistent feature engineering
  • casing mismatches between frontend/backend
  • dummy model accidentally being used instead of real ensemble

I’m trying to properly unify the pipeline:
Frontend → Backend → FastAPI → Ensemble Model → Prediction

Would really appreciate guidance from anyone experienced with:

  • ML system design
  • FastAPI + React integration
  • fraud/risk scoring systems
  • debugging prediction pipelines

Can share repo/code if anyone’s willing to help. Thanks 😭

u/ThinMeasurement2195 — 5 days ago
▲ 6 r/codereview+2 crossposts

Hey, I'm a beginner and I was tired of calorie trackers wanting my email, a subscription, and cloud access. So I built one that lives entirely in a single HTML file. No dependencies, no backend, data stays in local storage. Would love any feedback!

I have attached the link for the GitHub Repository, but if you don't know how to use it you can always dm me and I'll send the html file.

Demo video

Thank you for your time.

u/Proper-Change1274 — 7 days ago
▲ 1 r/codereview+1 crossposts

I've been using Claude Code to write a program for automating scheduling at my work, and I want to get an expert's opinion on it before I show it to my job. I'm a beginner in all things programming; I compare my understanding of it to a toddler's understanding of the English language: I know how to say and recognize some words, but I wouldn't be able to tell you any definitions or structure.

Is there someone I can hire to review my code? Is it safe to send that to someone online?

u/babushka4482 — 11 days ago
▲ 0 r/codereview+1 crossposts

I built a code governance scanner that doesn't just fix your code; it certifies it. Every repair gets a cryptographic audit trail, proof of rescan, and a signed artifact. This is what a governed repair looks like.

I didn't think it would work at first (same input, same output), but it worked way better than I expected.

A Gold snippet is a governed RC4 repair artifact.

It contains the repaired code, the original identity, the structural anchor, hashes, repair recipe, proof result, rescan result, Scan ID, Repair ID, and attestation.

The code is only one part.

The governance + proof wrapper is what makes it Gold.

Where RC4 comes in:

RC4 is the repair class / standard.

It means the snippet did not just get edited. It went through the governed remediation contract:

Find it → Fix it → Prove it → Record it

> Gold snippet = governed RC4 repair artifact

Inside it:

GOLD SNIPPET
├─ repaired code
├─ before/original code identity
├─ governed lane: Gold
├─ RC4 repair class
├─ structural anchor
├─ file / excerpt / subtree hashes
├─ rule / detector / template identity
├─ deterministic repair recipe
├─ governance decision trail
├─ proof result
├─ rescan result
├─ Scan ID
├─ Repair ID
└─ attestation / certificate

A Gold snippet is not just repaired code.

It is a governed RC4 repair artifact: repaired code with structural identity, deterministic recipe, proof, rescan result, and attestation attached.

The framed certificate in the photos is my first ever successful governed repair — Pattern #001. I printed it for $0.70 and framed it for $3. It's signed with HMAC-SHA256 and I'll never be able to generate that exact artifact again.

The gold nodes in the dependency graph are files with verified, governed repairs attached. The fan view shows a single file's proof path; every connection is traceable.

When I ship, your codebase won't just be fixed. It'll be certifiable.

X185GC Framework only while in beta. Compliance certification coming post-launch.

Oh, and every repair saves a governed metadata snippet. Opt in, and that snippet feeds the next repair. No AI API costs. No cloud fees. The repair library grows from the community; the more repairs, the smarter it gets.

No AI in the repair pipeline. Zero contamination. Deterministic.

u/X185plus — 11 days ago