
Considering that, as things stand, all LLMs are billed by token, the conclusion is simple: everything will keep getting more expensive. So we need to start investigating how to limit token spending and stop complaining, because every tool will suffer the same fate in the long run, and the choice will be between using older, cheaper models (where available) or finding ways to save tokens. These techniques work with Copilot but also with other tools and, on a different note, they are good because they use less energy and so are more ecological.
Any ideas are appreciated; I've added a few that I found and tested after some investigation.
- https://github.com/juliusbrussee/caveman This is very silly, almost a joke, but since tokens are billed on both input and output, it simply works: a KISS solution. Maybe too much so, because after 2-3 hours I feel the fatigue of reading that kind of language.
- https://devblogs.microsoft.com/all-things-azure/i-wasted-68-minutes-a-day-re-explaining-my-code-then-i-built-auto-memory/ I've used this on codebases I work on constantly, and the token saving is considerable: roughly 33% fewer tokens.
- https://github.com/husnainpk/SymDex For codebases you need to explore, this is another option: it minimizes the grep and parse operations that consume a lot of tokens. The biggest improvement is speed; results come back much faster, and they are worth the time required to build the database.
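To see why the terse "caveman" style saves money even though it feels like a joke: billing is per token on both sides, so a shorter answer carrying the same information simply costs less. A rough back-of-the-envelope check (word count as a crude stand-in for token count, made-up sentences; a real tokenizer would give exact numbers):

```python
# Why terse output is cheaper: per-token billing means a shorter answer
# with the same information costs less. Word count is a crude proxy here.

verbose = ("The function you are looking at iterates over the list and "
           "returns a new list containing only the even numbers.")
terse = "Function filter list. Keep even numbers."

def approx_tokens(text: str) -> int:
    """Very rough proxy: split on whitespace instead of real tokenization."""
    return len(text.split())

saving = 1 - approx_tokens(terse) / approx_tokens(verbose)
print(f"verbose: {approx_tokens(verbose)} words, "
      f"terse: {approx_tokens(terse)} words, ~{saving:.0%} fewer")
```

The exact percentage obviously depends on the answer, but the direction is always the same: fewer output tokens, smaller bill.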
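The auto-memory idea boils down to: persist what you've already explained about the repo, and prepend it to future sessions instead of re-explaining (and re-paying for) the same context. A minimal sketch of that pattern, assuming a hypothetical `MEMORY.md` convention (the file name, helpers, and prompt shape are mine, not from the linked post):

```python
from pathlib import Path

MEMORY_FILE = Path("MEMORY.md")  # hypothetical per-repo memory file

def load_memory() -> str:
    """Return persisted project notes, or empty string on first run."""
    return MEMORY_FILE.read_text(encoding="utf-8") if MEMORY_FILE.exists() else ""

def remember(note: str) -> None:
    """Append a short fact so future sessions start with it for free."""
    with MEMORY_FILE.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")

def build_prompt(question: str) -> str:
    """Prepend the memory once instead of re-pasting whole files each session."""
    memory = load_memory()
    prefix = f"Project notes:\n{memory}\n" if memory else ""
    return f"{prefix}Question: {question}"
```

A few lines of notes in the prefix replace the hundreds of tokens you would otherwise spend re-describing the codebase at the start of every chat.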
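For the local-index idea, a toy version is easy to sketch: walk the Python sources once with the `ast` module and record where every function and class is defined, so a question like "where is X defined?" becomes a dict lookup instead of a fresh grep/parse pass that burns tokens on file contents. This is my own minimal sketch of the general technique, not SymDex's actual implementation:

```python
import ast
from pathlib import Path

def build_symbol_index(root: str) -> dict[str, tuple[str, int]]:
    """Map each function/class name to (file, line).

    One parse pass up front; afterwards every lookup is a cheap local
    dict access, with no re-reading or re-sending of source files.
    """
    index: dict[str, tuple[str, int]] = {}
    for path in Path(root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except SyntaxError:
            continue  # skip files that do not parse
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef,
                                 ast.ClassDef)):
                index[node.name] = (str(path), node.lineno)
    return index
```

With the index built, you look the symbol up locally and paste only the relevant lines into the chat, instead of letting the tool grep and read whole files on your dime.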
Please post your tools, ideas, and results, and stop complaining: life is unfair and we know it, so we must adapt and change.