u/WinterSpecial7970

▲ 1 r/SaaS

I am an AI Engineer working in this space from past 9 years. The way everyone rushed to create AI applications, I realized one thing. Very few people really thinks about potential vulnerability in there code base. This results in piling up of technical debt.

On top of it most of the existing SAST tools are not designed to capture GenAI / Agentic Logic vulnerabilities. Existing scanners either miss prompt injections entirely, or they flag every single string formatting operation, which makes the alerts useless.

I wanted a tool that actually understands the intent of the data flow.This was the problem statement I started working on it. Lately after hearing so many layoff it put the fuel to fire as well

So, I spent the last 3 months planning ,designing & building RepoInspect.

However I am a builder, an engineer but very bad in marketing and moving a product to profitability.

Anyway let's get back to solution. Repoinspect is a two-pass hybrid engine. It uses a deterministic AST taint tracker to find potential hotspots, then hands the attack path to an autonomous AI agent to verify if the injection is actually exploitable.

End Result: To test it, I pointed it at some of the most popular AI frameworks. Got multiple bugs in those. Attaching the detailed results on github.

The Launch Struggle: I tried to launch on Hacker News yesterday. Because my account is new, I got flagged almost immediately. It was a huge punch in the gut after a month. Same thing happened with most of the reddits accounts. Honestly speaking I have never been to these sites and really doesn't know the rules and regulations around it. I just want my solution to be atleast given chance and heard by AI folks.

But instead of giving up, I spent this weekend adding what the community might like : Local LLM support  so teams can run audits without their code ever leaving their machine.

I've open-sourced the engine and all the forensic reports. I’d love to hear from other founders who have built developer-focused security tools. How do you find your first "Real" users when the automated filters are so aggressive?

GitHub: https://github.com/ritesh-ui/RepoInspect

u/WinterSpecial7970 — 11 days ago

Apparently along with 10th,12th , Graduation , Service Letter they are asking for permissions into our ITR, Bank Statement, EPFO website ?

Is there any law enforcement in this country to stop this shit happening. Murder of our digital privacy and forcing us to be digitally naked.

Highly disappointing and disturbing. Never seen such shit happening in last 9 years of my corporate experience

reddit.com
u/WinterSpecial7970 — 13 days ago
▲ 8 r/aisecurity+1 crossposts

Hey everyone,

I’ve been working on a project to solve a major problem in AI security: Traditional SAST tools (Snyk, SonarQube, etc.) are blind to "Agentic Logic" bugs. They look for bad strings, but they don't understand how user data can hijack an LLM’s instructions.

I built a deterministic engine called RepoInspect that merges AST-aware taint tracking with autonomous AI agents. To test it, I ran it against LangChain, and it flagged 10 high-severity vulnerabilities that had been missed by standard tools.

The most common issue: Instruction Hijacking (LLM01) In several built-in chains (like the LLMMathChain), user input is interpolated directly into a prompt template that tells the model to generate executable Python code (for numexpr).

The Attack Vector: Because the user {input} isn't delimited (no XML tags, no isolation), an attacker can simply "ask" the model to generate malicious system commands instead of a math expression. Since the chain executes that code immediately, it’s a direct path to code execution via a prompt.

Key Findings in the Audit:

  • Prompt Injection: 10+ cases in agents (Self-Ask, JSON Chat) and chains.
  • Excessive Agency: Critical risks in utility wrappers exposing API keys.
  • Insecure Deserialization: Risks in how some vector store adapters handle metadata.

Why I’m sharing this: I’ve open-sourced the engine and the full forensic reports for LangChain, OpenAI, and Dify. I want to help developers move beyond "hope-based security" for their RAG and Agentic pipelines.

I'm curious to hear from other researchers—besides XML delimiters and system message isolation, what "hard" defenses are you using to protect your agents from hijacking?Adding github repo in the comments.

reddit.com
u/WinterSpecial7970 — 11 days ago