
LLM costs and prompt leaks turned out to be bigger problems than I expected
Been working on something recently and wanted a sanity check from people here.
While building with LLM APIs, I kept running into two things:
- costs becoming unpredictable depending on which model/provider handled a request
- people pasting sensitive stuff into prompts without really thinking about it
So I started putting a thin layer in front of the requests: it catches obvious sensitive data before anything leaves, and routes requests to cheaper/faster models when possible.
Nothing too fancy, just trying to solve the same issues I kept hitting. https://opensourceaihub.ai/
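To make it concrete, here's a rough sketch of the kind of thing I mean (not my actual code, and the regexes/model names are just placeholders, real PII detection needs much more than this):

```python
import re

# Illustrative patterns only; real sensitive-data detection is a much
# harder problem than a few regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace obvious sensitive tokens before the prompt leaves the process."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label.upper()}]", prompt)
    return prompt

def route(prompt: str) -> str:
    """Pick a model tier by a crude complexity proxy (prompt length)."""
    return "cheap-fast-model" if len(prompt) < 500 else "large-model"

clean = redact("My key is sk-abcdefghijklmnopqrstuvwx, email a@b.co")
model = route(clean)
```

The routing heuristic above is deliberately dumb; in practice you'd want something smarter than length, but the basic shape (scrub, then route) is the idea.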