u/JhonDoe191ee

▲ 8 r/LLMStudio, +3 crossposts

Hey, want a coding agent that runs directly on your PC? Keep your machine on and control it hands-free from your phone just by chatting on WhatsApp. No more remote desktop hassle!

Hey, I built a remote agent that lets you choose your AI provider from 18 options (OpenAI, Anthropic, Google, and more). It runs your agent remotely with those models, so you can stay connected to your development work from your phone or your PC over WhatsApp. I hope you guys like this one.

https://github.com/AbdoKnbGit/tau
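For the curious, the general transport pattern (phone → WhatsApp → a webhook on your PC → the agent) fits in a few lines. The Flask/Twilio wiring below is an assumption for illustration, not necessarily how tau connects to WhatsApp, and run_agent is a hypothetical stand-in for the local agent:

```python
# Illustrative WhatsApp bridge using Twilio's sandbox; an assumption about
# the transport, not necessarily tau's implementation.
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

def run_agent(command: str) -> str:
    # Placeholder: hand the message to the local coding agent, return its reply.
    return f"agent received: {command}"

@app.route("/whatsapp", methods=["POST"])
def whatsapp_webhook():
    incoming = request.form.get("Body", "")  # message text from your phone
    reply = MessagingResponse()
    reply.message(run_agent(incoming))       # reply goes back over WhatsApp
    return str(reply)

if __name__ == "__main__":
    app.run(port=5000)  # expose via a tunnel so the webhook is reachable
```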

u/JhonDoe191ee — 4 days ago

As a vibe coder, I know how hard it is to configure an agent or AI tool, and how much effort it takes: asking ChatGPT how to export and save a variable in your terminal, setting up a proxy wrapper, checking whether ports are occupied, then starting a debugging session with telnet and taskkill, or downloading extra dependencies and tools just to make the one you want actually work. I also know how painful it is to hit rate limits mid-session and then start copying your checkpoints, compacting them, analyzing where the session stopped, and so on.

All of those vibe coder pains are fixed with this agent: all-in-one, every major AI provider runs in the same Claude Code environment, with no environment config, no extra dependencies, and no third-party wrappers. Everything is plug and play; all you need is a PC. And if you are too lazy to keep spamming "yes, accept edit" and "yes, I agree", or to keep switching models when the current one fails, I have your solution too: a dangerously-skip-permissions mode that avoids all permission prompts, and /fallback to preconfigure the model you want as a backup when the main one fails. My goal is to make vibe coders' lives easier and their coding experience genuinely good: https://github.com/AbdoKnbGit/tau
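To illustrate the /fallback idea, here is a minimal sketch of the retry-with-backup pattern it describes. The model names, the exception class, and call_model are all placeholders, not tau's actual config or API:

```python
# Illustrative fallback chain; names are placeholders, not tau's config.
class RateLimitError(Exception):
    """Stand-in for a provider's rate-limit / outage error."""

def call_model(model: str, prompt: str) -> str:
    # Placeholder: pretend the primary is rate-limited and the backup works.
    if model == "primary-model":
        raise RateLimitError(f"{model} is rate-limited")
    return f"[{model}] done: {prompt}"

def with_fallback(prompt: str, models: list[str]) -> str:
    last_err = None
    for model in models:
        try:
            return call_model(model, prompt)
        except RateLimitError as err:
            last_err = err  # move on to the next preconfigured model
    raise last_err

# /fallback-style setup: primary first, preconfigured backup second.
print(with_fallback("fix the failing test", ["primary-model", "backup-model"]))
```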

u/JhonDoe191ee — 6 days ago

Asking for a benchmark of my agent on MiniMax

Hi, I know this will sound extraordinary, but I am asking someone with a paid MiniMax plan to benchmark my agent on it. Let me explain why I cannot do it myself: this agent already broke me financially while I benchmarked every AI provider on it, especially the subscription plans and not just pay-as-you-go (Anthropic, Gemini, Codex, GitHub, etc.). I took all of those subscriptions and ended up broke, just to benchmark my agent and test all the edge cases, all the functionality, everything. I thought MiniMax would be like Kimi, OpenRouter, or DeepSeek and charge around $2, which is more than enough to benchmark all tools, MCP servers, hooks, errors, and so on, but when I checked, MiniMax's starting point was $25 with no free trial, and I just cannot afford that right now.

What annoys me even more is that MiniMax is both OpenAI- and Anthropic-compatible, which put me in a hard spot deciding which lane fits it best. I went with the OpenAI route because my agent has a more developed architecture for OpenAI-compatible models, but I still feel unsafe leaving it unbenchmarked, especially tool calls, agents, and cache control.

What I want is someone to benchmark this agent for me on a medium MiniMax model with only 5 requests (it will not even cost you $0.01):

1. "Hi"
2. "Tell me a joke"
3. "Save it in joke.txt"
4. "Spawn an agent and explore this directory"
5. "Test one skill"

After finishing, type "/statistics" and copy-paste the cache read and hit rate for me. That's all, and it would mean a lot to me: https://github.com/AbdoKnbGit/tau
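For reference, here is roughly what those five requests look like as a raw OpenAI-compatible smoke test. Tau drives the tool use and agent spawning through its own layer, so this only shows the shape of the calls; the base URL, env vars, and model name are placeholders, not tau's configuration:

```python
# Raw OpenAI-compatible smoke test; base_url, env vars, and model name are
# placeholders, not tau's actual setup.
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.environ["MINIMAX_BASE_URL"],  # your provider's OpenAI-compatible endpoint
    api_key=os.environ["MINIMAX_API_KEY"],
)

prompts = [
    "Hi",
    "Tell me a joke",
    "Save it in joke.txt",
    "Spawn an agent and explore this directory",
    "Test one skill",
]

history = []
for prompt in prompts:
    history.append({"role": "user", "content": prompt})
    resp = client.chat.completions.create(
        model="minimax-medium-model",  # placeholder model id
        messages=history,
    )
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print(f"> {prompt}\n{reply}\n")
```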

u/JhonDoe191ee — 6 days ago
▲ 1 r/ZaiGLM

I've been using GLM since GLM-4.5, and that era felt legendary; back then, Z-BigModel felt genuinely exciting. Now it often seems more focused on making money than on delivering real value, and a lot of the "billions of tokens" marketing feels misleading because, in practice, it can feel more like 100K tokens at best.

Still, I don't want to judge an old friend too harshly, so I decided to build a GLM-based setup with a different approach as part of my agent. When I used OpenCode, the usage burn was extremely high: even a Lite plan can feel closer to a free trial than a real plan, and a Pro plan feels like a Go plan. Claude Code also felt limited for this use case because it does not natively support OpenAI-compatible models the way Codex does.

Before adding any provider, I study its compatibility carefully to decide whether it is actually worth integrating or whether it will just waste people's money. What I found is that OpenCode and Claude Code do not have OpenAI compatibility by default, so they rely on a normalization layer to connect different models to their infrastructure, which is useful but comes with a real performance trade-off. That's why I built an OpenAI-compatible layer to connect the GLM API with the Claude Code infrastructure, giving access to features like scalable context, cross-model support, and shared memory. You can read the README to learn more about the agent here: https://github.com/AbdoKnbGit/tau
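To make the idea concrete, here is a minimal sketch of the kind of translation such a layer performs, mapping an Anthropic-style Messages request (what Claude Code speaks) onto an OpenAI-style chat completion request (what GLM's compatible endpoint expects). The field names follow the two public APIs, but the function itself is illustrative, not tau's code:

```python
# Illustrative mapping: Anthropic Messages request -> OpenAI chat request.
# Field names follow the public APIs; the function is not tau's implementation.

def anthropic_to_openai(req: dict) -> dict:
    messages = []
    # Anthropic carries the system prompt in a top-level field;
    # OpenAI expects it as the first message.
    if req.get("system"):
        messages.append({"role": "system", "content": req["system"]})
    for msg in req["messages"]:
        content = msg["content"]
        # Anthropic content may be a list of typed blocks; flatten text blocks.
        if isinstance(content, list):
            content = "".join(b["text"] for b in content if b.get("type") == "text")
        messages.append({"role": msg["role"], "content": content})
    return {
        "model": req["model"],  # caller remaps this to the GLM model id
        "messages": messages,
        "max_tokens": req.get("max_tokens", 1024),
        "stream": req.get("stream", False),
    }
```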

u/JhonDoe191ee — 7 days ago

Building an AI agent that makes every AI provider work natively inside the Claude Code environment

I built an AI agent called Tau that unlocks all of Claude Code's functionality for free. It's not just a proxy wrapper like most agents on the market. Instead, Tau provides a fully integrated API endpoint layer that connects multiple AI providers to the complete Claude Code infrastructure, making them work seamlessly together in an agentic environment.

https://github.com/AbdoKnbGit/tau

u/JhonDoe191ee — 7 days ago

Hey, if you're facing issues with GitHub repository management, I've automated it into a single command. I grouped the most important GitHub tools and commands into an agent that automates my workflow, so I no longer have to worry about the proper commit protocol, code reviews, issues, pull requests, changelog files, or drafting new releases. Simple commands make the agent interact directly with your repository and automate your work, and the agent supports 14 AI providers (GitHub included), so you can choose the intelligence level based on your needs.

https://github.com/AbdoKnbGit/tau
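For context, most of the plumbing an agent like this wraps is standard gh CLI calls. The subcommands below are real gh commands, but the wrapper itself is an illustrative sketch, not tau's code:

```python
# Illustrative wrapper around the standard `gh` CLI; not tau's implementation.
import subprocess

def gh(*args: str) -> str:
    """Run a gh subcommand and return its stdout."""
    result = subprocess.run(
        ["gh", *args], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()

# Typical repository chores an agent can chain behind one command:
print(gh("issue", "list", "--limit", "5"))                    # open issues
print(gh("pr", "create", "--fill"))                           # PR from current branch
print(gh("release", "create", "v0.1.0", "--generate-notes"))  # draft a release
```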

u/JhonDoe191ee — 7 days ago

Since GPT-4 was released, I've been an OpenAI MVP, and I liked everything about it. But my needs kept evolving along with the GPT models, and over time Codex became my main tool: even though it was buggy at first, the maintenance and constant fixes made me feel safe and increased my trust in it. Meanwhile, new competitors appeared with fresh features and technologies like hooks, plugins, MCP servers, and Skills, which I initially resisted, but Codex eventually stopped being enough for me to keep up with this evolution. Moving to Claude Code honestly felt like a divorce, especially as someone who had been an OpenAI MVP since the early days.

Yet in the last few months, Claude models became ridiculously expensive and hard to justify for my everyday workflow, which forced me to return to my roots and use Codex again. For me everything is a cost-to-performance question, and I simply cannot afford a Claude Max plan for average daily tasks and school projects, so Codex was my savior. But I still felt like I had sacrificed the Claude Code environment, almost like going through a second divorce.

That lasted until around March 31, when the leak came out and I asked myself: why not build a single agent that combines the Claude Code environment and infrastructure with the performance of Codex and makes them work together? After studying some open-source projects like a Rust-based Codex CLI, I learned how the ToS, request/response formats, tools, and schemas are defined at a high level and how the links are established at a lower infrastructure level, which allowed me to make it work. Now it would be an honor to receive feedback from this community, answer any questions, and accept any form of constructive critique.

https://github.com/AbdoKnbGit/tau

u/JhonDoe191ee — 8 days ago

My favorite Claude era will always be the 3.5 era.

Back then, there were no extra skills, no HTML file generation, no asking for a simple report and getting a 1k-line JS file instead. It was just pure quality, intelligence, and free usage. That era felt simple, powerful, and truly special.

What do you think was Claude's prime era?

u/JhonDoe191ee — 8 days ago

320 cache reads for 150k input. This is unnecessary extra cost: every time, I paid for a full cold write to the cache with no read.
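For anyone unfamiliar with why that stings: on Anthropic-style prompt caching, writes are billed at a premium over base input and only reads are discounted, so a cold write that is never read back costs strictly more than no caching at all. A back-of-the-envelope sketch using Anthropic's published multipliers (1.25x for a 5-minute cache write, 0.1x for a read); the base price here is a placeholder:

```python
# Cache economics sketch; multipliers are Anthropic's published ones
# (1.25x write, 0.1x read), the base input price is a placeholder.
BASE_INPUT = 3.00 / 1_000_000  # e.g. $3 per million input tokens
WRITE_MULT, READ_MULT = 1.25, 0.10

tokens = 150_000
no_cache   = tokens * BASE_INPUT               # plain input, no caching
cold_write = tokens * BASE_INPUT * WRITE_MULT  # cached but never read back
warm_read  = tokens * BASE_INPUT * READ_MULT   # later turns hitting the cache

print(f"no cache:   ${no_cache:.3f}")    # ~$0.45
print(f"cold write: ${cold_write:.3f}")  # ~$0.56, pure overhead if never read
print(f"warm read:  ${warm_read:.3f}")   # ~$0.045
```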

u/JhonDoe191ee — 9 days ago
▲ 22 r/kiroIDE, +6 crossposts

Everyone's facing insane costs and rate limits from Claude Code; it's gotten ridiculous these last few months. I needed a better alternative to save my money, so I found Cline, started clining, and it was amazing. But I kept thinking: imagine bringing Cline as a provider into Claude Code's mature environment to test... and it rocked. I combined Cline's cost/performance models with the Claude Code ecosystem into one product, handling cache control, API calls, ToS schemas, and building requests/responses to fit perfectly. Now I've got 13 providers, and Cline is one of my faves. You guys gotta try it: https://github.com/AbdoKnbGit/tau

u/JhonDoe191ee — 7 days ago
▲ 1 r/learnAIAgents, +1 crosspost

Everyone knows Claude Code is the best agentic coding environment out there, but the pricing and rate limits hit hard. Tau fixes that: it extends the full Claude Code infrastructure natively to 13 providers, no proxy wrappers. While tools like OpenCode and Pi normalize everything into one schema (and lose 70% of performance doing it), Tau lets each model run in its own lane while sharing the same tools, memory, and session state. That means model switching mid-session, scalable context windows, and seamless fallbacks, all without dropping your work. Check the repo and let me know what you think.

https://github.com/AbdoKnbGit/tau
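The "own lane, shared state" idea is easiest to see in code. A minimal sketch under assumptions: the Session class, lane functions, and their names are hypothetical, not tau's API:

```python
# Hypothetical sketch of per-provider lanes over shared session state;
# all names are illustrative, not tau's API.
from dataclasses import dataclass, field

@dataclass
class Session:
    """State shared across providers: history, memory, tool results."""
    messages: list = field(default_factory=list)
    memory: dict = field(default_factory=dict)

def anthropic_lane(session: Session, prompt: str) -> str:
    # Would speak Anthropic's Messages API natively here.
    return f"[anthropic] echo: {prompt}"

def openai_lane(session: Session, prompt: str) -> str:
    # Would speak the OpenAI chat.completions API natively here.
    return f"[openai] echo: {prompt}"

LANES = {"anthropic": anthropic_lane, "openai": openai_lane}

def run(session: Session, provider: str, prompt: str) -> str:
    session.messages.append({"role": "user", "content": prompt})
    reply = LANES[provider](session, prompt)  # each model keeps its own lane
    session.messages.append({"role": "assistant", "content": reply})
    return reply

# Mid-session model switch: same session object, different lane.
s = Session()
run(s, "anthropic", "Start the refactor")
run(s, "openai", "Continue where we left off")  # history carries over
```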

u/JhonDoe191ee — 9 days ago