u/Awkward_Weather5721

i just built a tool. it doesn't generate profitable strategies. nothing does. if you don't have an edge, no llm is going to invent one for you.

what it does is execute on alpha you already have. you describe your idea in plain english, it writes the python, runs validation, backtests it, and connects to your broker. all in your terminal.
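to make the flow concrete, here's a hypothetical sketch of that describe → generate → validate → backtest loop. none of these function names come from the actual repo; the stubs just stand in for the LLM call, the validator, and the backtester.

```python
def generate_strategy(description: str) -> str:
    # stand-in for the LLM call that writes the python
    return "signal = df['close'] > df['close'].rolling(20).mean()"

def validate(code: str) -> list[str]:
    # stand-in for the validator; real checks walk the AST
    return ["lookahead"] if "shift(-1)" in code else []

def backtest(code: str) -> dict:
    # stand-in for the historical run before any broker hookup
    return {"sharpe": None, "trades": 0}

def run_idea(description: str) -> dict:
    code = generate_strategy(description)
    issues = validate(code)
    if issues:
        raise ValueError(f"validation failed: {issues}")
    return backtest(code)

print(run_idea("long when price is above its 20-day average"))
```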

the part i actually care about is the validator. llms generating strategy code introduce a class of bias that standard lookahead detection misses: not future-bar references, but parameter choices contaminated by training-data overlap with the backtest period. the validator catches maybe 70% of what i'd flag manually.
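for the simpler, future-bar half of the problem, a minimal AST pass looks something like this. it's an illustrative sketch, not finny's actual implementation: it only flags `.shift()` calls with a negative literal argument, which in pandas reads future bars.

```python
import ast

def find_lookahead(source: str) -> list[str]:
    """Return warnings for likely lookahead constructs in strategy code."""
    warnings = []
    for node in ast.walk(ast.parse(source)):
        # flag df.col.shift(-1)-style calls: a negative shift peeks ahead
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "shift"
                and node.args
                and isinstance(node.args[0], ast.UnaryOp)
                and isinstance(node.args[0].op, ast.USub)):
            warnings.append(f"line {node.lineno}: negative shift() reads future bars")
    return warnings

print(find_lookahead("signal = df['close'].shift(-1) > df['close']"))
```

the contamination half (parameters tuned to the backtest window via training data) has no such syntactic signature, which is why it's the harder check.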

it can also do the boring research: pulling data, scanning correlations, and prepping a framework so you can sanity-check an idea faster.

Launch website: finnyai.tech

repo: github.com/Jaiminp007/finny

would actually like to hear how people here handle validation on generated code, or if there's published work on llm-specific contamination i'm missing.


u/Awkward_Weather5721 — 12 days ago
r/highfreqtrading (+7 crossposts)

Data ingestion and avoiding lookahead bias is a massive headache, so I built an open-source CLI agent to automate my backtesting setup.

It takes a plain-English strategy idea, generates validated Python using your own LLM key, and runs a historical backtest.

I just added Binance support today.

My biggest challenge right now is the automated safety checks—it currently scans the AST for lookahead flaws before executing.

The tool is free and open source locally at finnyai.tech, with an optional $10/mo tier for managed hosting.

If anyone here builds automated validation for strategy code, how do you handle edge cases and LLM data hallucinations?
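One concrete hallucination class worth checking for: generated code referencing data columns the feed doesn't actually provide. A hedged sketch (not the tool's real check; `AVAILABLE_COLUMNS` and the `df` name are assumptions for illustration):

```python
import ast

# columns the real data feed actually provides (assumed for this sketch)
AVAILABLE_COLUMNS = {"open", "high", "low", "close", "volume"}

def unknown_columns(source: str) -> set[str]:
    """Collect df['x'] subscripts whose column name isn't in the feed."""
    used = set()
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Subscript)
                and isinstance(node.value, ast.Name)
                and node.value.id == "df"
                and isinstance(node.slice, ast.Constant)
                and isinstance(node.slice.value, str)):
            used.add(node.slice.value)
    return used - AVAILABLE_COLUMNS

print(unknown_columns("x = df['close'] / df['adjusted_vwap']"))
```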
