r/WTFisAI

Anthropic just blocked every third-party agent tool from Claude subscriptions, and the cost jump is brutal

Anthropic blocked all third-party agent tools from Claude subscriptions on Saturday. Boris Cherny posted on X on Friday, 24 hours before the switch flipped. OpenClaw is the biggest name affected, with over 135,000 active instances running through Claude Pro and Max plans, but the ban covers every third-party harness that isn't Claude Code.

The technical argument: third-party harnesses bypass Anthropic's prompt caching layer, which means each session eats dramatically more compute than the same workload through Claude Code. Cherny submitted PRs to OpenClaw to improve its cache hit rates around the time of the announcement, more of a goodwill gesture alongside the ban than a failed compromise that led to it.
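To make the caching argument concrete, here is a minimal sketch of how a harness opts into Anthropic's prompt caching: a large, stable prefix (system prompt, tool definitions) gets a `cache_control` breakpoint so repeat turns reuse it instead of re-processing it on every request. The payload shape follows Anthropic's public Messages API docs; the model name is a placeholder and no network call is made here.

```python
def build_cached_request(system_prompt: str, user_message: str) -> dict:
    """Build a Messages API payload with the system prompt marked cacheable."""
    return {
        "model": "claude-sonnet-4-5",  # placeholder model name
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": system_prompt,
                # Cache breakpoint: everything up to this point can be
                # served from cache on later requests with the same prefix.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_cached_request("You are an agent harness..." * 100, "Run the next step.")
print(payload["system"][0]["cache_control"])  # {'type': 'ephemeral'}
```

A harness that skips the breakpoint (or churns the prefix every turn) pays full input-token cost on each loop iteration, which is the inefficiency Anthropic is pointing at.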

This didn't come out of nowhere either. Anthropic tightened the ToS language back in February, explicitly restricting OAuth tokens to Claude Code and Claude.ai. After that, third-party tools like OpenCode started voluntarily removing support for Claude subscription keys. Saturday's cutoff was the final step in a two-month tightening, not a sudden flip.

The Max plan is $200/month; under the new API billing, extreme 24/7 agent sessions can theoretically hit $1,000 to $5,000 per day, though most normal workflows will land well below that. Anthropic is offering a one-time credit equal to the monthly plan cost, up to 30% off usage bundles, and full refunds for subscribers who want out.
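A back-of-envelope comparison shows how the cache hit rate drives that daily number. All rates and token volumes below are illustrative assumptions for the sketch, not Anthropic's actual pricing.

```python
# Assumed rates, $/million tokens (illustrative, not official pricing).
INPUT_PER_MTOK = 3.00
OUTPUT_PER_MTOK = 15.00
CACHE_READ_PER_MTOK = 0.30  # assumed cached-input discount (~10% of base)

def daily_api_cost(input_mtok: float, output_mtok: float, cached_fraction: float) -> float:
    """Estimated daily cost for a given token volume and cache hit rate."""
    cached = input_mtok * cached_fraction
    fresh = input_mtok - cached
    return (fresh * INPUT_PER_MTOK
            + cached * CACHE_READ_PER_MTOK
            + output_mtok * OUTPUT_PER_MTOK)

# A 24/7 agent loop re-sending a large context: 300M input, 10M output tokens/day.
print(round(daily_api_cost(300, 10, cached_fraction=0.0), 2))  # 1050.0 (no caching)
print(round(daily_api_cost(300, 10, cached_fraction=0.9), 2))  # 321.0 (90% cache hits)
```

Under these assumptions, an uncached harness lands in the $1,000+/day range the post mentions, while a well-cached one costs roughly a third of that, which is the gap Anthropic's efficiency argument rests on.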

Claude Code stays included in subscriptions with zero restrictions. The caching efficiency argument has real technical merit, but the practical outcome is the same: use Anthropic's tool within your subscription, or pay API rates for anything else.

The dev community is split: half say these tools were always violating the ToS and Anthropic was generous to look the other way this long. The other half point out that Claude Code itself automates through loops and MCPs that eat tokens the same way, making this about which tool accesses Claude rather than how much compute gets used.

What's your move if you're running agents on Claude?

u/DigiHold — 20 hours ago

You can now video chat with your AI agent, face and voice, in real time

Pika just released a video chat skill that works with any AI agent, and it's the first time I've seen this done in a way that actually feels like a conversation. Your agent gets a face, a voice, and it talks back to you in real time while maintaining its full memory and personality.

It works with any agent, not just their own. Pika released it as an open source skill on GitHub, so if you're building with Claude Code or similar tools you can plug it in. And if you use it with their Pika AI Self, the agent can actually do things during the conversation, not just talk but execute tasks while you're chatting with it face to face.

We've been interacting with AI through text boxes, and it's always felt like passing notes back and forth. This is more like actually sitting across from someone. The agent remembers your previous conversations, keeps its personality consistent, and adapts in real time to what you're saying. It's still in beta and you can tell from the demos that it's not perfect yet, but the foundation is clearly there.

I think this changes how non-technical people will eventually interact with AI agents. Not everyone wants to type prompts into a terminal, but most people would be comfortable video calling an assistant that walks them through something with a friendly face. That's a completely different product than what we've been building so far.

Check the video: https://x.com/pika_labs/status/2039804583862796345

Has anyone tried it yet? I'm curious how the latency feels in practice because 1.5 seconds sounds fine on paper but conversation flow is brutal to get right.

u/DigiHold — 17 hours ago