u/AbjectBug5885

Why is every "context layer" tool lying about token savings?

I've been shipping agents for a year and a half. Lately every other launch is a "context layer" or "MCP optimizer" promising 70-90% token cuts.

I've installed five of them. Same story:

  • README chart with no methodology
  • "Benchmark code coming soon"
  • The savings only show up on the demo corpus, never on my actual Claude Code setup with 6 MCP servers and 140-something tools

If your tool actually cuts tokens at scale, ship the corpus, the queries, the seed, the model, the cost. Anything else is a screenshot.
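The bar I'm asking for is genuinely low. Here's a minimal sketch of what a reproducible harness could look like (everything here is a placeholder: the whitespace tokenizer stands in for the model's real tokenizer, and `optimize` is whatever your context layer does):

```python
import random

# Placeholder tokenizer: a real benchmark must count with the target
# model's own tokenizer, i.e. the one you're actually billed against.
def count_tokens(text: str) -> int:
    return len(text.split())

def benchmark(corpus, queries, optimize, seed=42):
    """Report token savings of `optimize` (a context-layer callable)
    over the naive stuff-everything-in baseline, on a published
    corpus + query set, with a fixed seed."""
    rng = random.Random(seed)               # fixed seed -> reproducible sample
    sample = rng.sample(queries, k=min(20, len(queries)))
    baseline = optimized = 0
    for q in sample:
        full_context = "\n".join(corpus)    # baseline: dump the whole corpus
        baseline += count_tokens(full_context + "\n" + q)
        optimized += count_tokens(optimize(corpus, q) + "\n" + q)
    savings = 1 - optimized / baseline
    return {
        "seed": seed,
        "queries": len(sample),
        "baseline_tokens": baseline,
        "optimized_tokens": optimized,
        "savings_pct": round(100 * savings, 1),
    }
```

Publish the corpus, the query set, the seed, and the number this spits out, plus which model/tokenizer you counted with. Twenty lines. If a launch can't clear that, the README chart is marketing.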

I want to find one of these that works. So far, zero of them have shown receipts. Has anyone seen a benchmark that survives a sniff test?
