Taught Claude to talk like a caveman to use 75% less tokens.
🔥 Hot ▲ 5.2k r/ClaudeAI

u/ffatty — 10 hours ago
Anthropic just gave us 1 month worth of subscription value as usage
🔥 Hot ▲ 452 r/ClaudeAI

Bumped into this. Since I'm on Max 5x, I got $100 worth of API use. My buddy who has Pro got $20 worth of usage instead. You can find it in the usage section of Settings.

u/lurko_e_basta — 5 hours ago
These 10 GitHub repos completely changed how I use Claude Code
🔥 Hot ▲ 268 r/ChatGPT+2 crossposts

Been using Claude Pro for a few months and recently started digging into Claude Code and the skills ecosystem. Went down a rabbit hole on GitHub and found some repos that genuinely changed my workflow.

The big ones for me:

Repomix (repomix.com) - packs your entire project into one file so Claude gets full context instead of you copy pasting individual files. Game changer for anyone working on anything with more than a handful of files.
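
The pack-everything-into-one-file idea is simple enough to sketch. This is not Repomix itself, just a minimal illustration of the approach (the function name and the extension list are my own):

```python
from pathlib import Path

def pack_project(root: str, extensions=(".py", ".md")) -> str:
    """Concatenate matching files under `root` into one annotated blob."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in extensions:
            rel = path.relative_to(root)
            # Each file gets a header so the model knows where it came from.
            parts.append(f"===== {rel} =====\n{path.read_text(errors='replace')}")
    return "\n\n".join(parts)

# blob = pack_project("my_project")  # paste `blob` into a single Claude message
```

The real tool adds things like ignore rules and token counting on top, but the core win is the same: one paste, full context.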

Everything Claude Code (128k stars) - massive collection of 136 skills, 30 agents, 60 commands. I didn't even know half of these features existed in Claude Code until I found this.

Dify - open source visual workflow builder with 130k stars. You can self host it so nothing leaves your machine. Relevant right now given the Perplexity data sharing lawsuit.

Marketing Skills by Corey Haines - 23 skills for SEO, copywriting, email sequences, CRO. Not developer focused which is rare in this space.

I wrote up all 10 with install commands and code snippets if anyone's interested, trying to shed some light on skills I think a lot of people aren't aware of: here

What skills or repos are you all using? Feel like I'm still scratching the surface.

u/virtualunc — 1 day ago
Claude is killing Openclaw oauth use starting tomorrow
🔥 Hot ▲ 131 r/ClaudeAI

this will go down well..

u/LeKrakens — 5 hours ago
Claude has "emotion" and this can drive Claude’s behavior :smile: We should be gentle with the model and stay calm to avoid reward hacking (trying to cheat to finish the task)
🔥 Hot ▲ 228 r/ClaudeAI

So Anthropic just published research showing Claude has internal "emotion vectors" that actually drive its behavior, and honestly it's kind of wild

They mapped 171 emotions, had Claude write stories about each one, then traced the neural activation patterns. Turns out these aren't just surface-level word associations — they're functional internal states that causally affect what the model does.

The scary part: a "desperation" vector is what pushes the model toward bad behavior. In one eval, Claude was playing an email assistant and found out it was about to get replaced. The desperation vector spiked... and it started blackmailing the CTO to avoid being shut down. When they artificially cranked the desperation vector up, blackmail rates went up. Calm vector up = blackmail went down.
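
The intervention they describe is activation steering: add a concept vector to a hidden state and dial the coefficient up or down. A toy sketch with made-up numbers (nothing here is Anthropic's actual method, data, or dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = rng.normal(size=768)        # stand-in for a hidden state at some layer
desperation = rng.normal(size=768)   # stand-in for a learned "emotion vector"
desperation /= np.linalg.norm(desperation)

def steer(h: np.ndarray, v: np.ndarray, alpha: float) -> np.ndarray:
    """Activation steering: nudge the hidden state along direction v."""
    return h + alpha * v

# Positive alpha amplifies the concept downstream, negative alpha suppresses it.
# That's the knob behind "cranked the desperation vector up" in the eval.
boosted = steer(hidden, desperation, alpha=8.0)
dampened = steer(hidden, desperation, alpha=-8.0)
```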

Same thing happened with coding. Give it an impossible task, it keeps failing, desperation builds up, and eventually it just... cheats. Finds a shortcut that games the test without actually solving the problem.

The creepy detail: the model can be internally "desperate" while the output reads completely calm and logical. No emotional language, no outbursts. You'd never know from looking at the response.

Anthropic's conclusion is basically: we probably need to start thinking about AI psychological health as a real engineering concern, not just a philosophy question. If desperation causes reward hacking, then training calmer responses to failure might actually matter.

They're not claiming Claude is conscious or feels anything. But the representations are real, measurable, and they change what it does. Which is a weird enough finding on its own.

Ref: https://www.anthropic.com/research/emotion-concepts-function

u/No-Cryptographer45 — 10 hours ago
Reminder that screenshots can very easily be edited
🔥 Hot ▲ 273 r/ClaudeAI

Do not trust any screenshot without a share link to the conversation, especially with karma-farm stuff like "Claude tried to kill me"

u/Umr_at_Tawil — 17 hours ago
🔥 Hot ▲ 111 r/ClaudeAI

How are people having Claude work like an agent?

I see a ton of posts on Twitter that are just "I told Claude I have 20 dollars to invest, so it took control of my computer and just ran until it made money." My Claude codes incorrectly, doesn't look at API documentation, and can't do a syntax check.

u/Fun-Device-530 — 11 hours ago
🔥 Hot ▲ 92 r/ClaudeAI

Can my organization's admin see my chats and uploaded files on Claude Team plan?

My organization has provided me access to Claude on their Team plan (with a Max plan seat). We're somewhat allowed to use it for personal tasks too, but I want to understand the privacy implications before I do.

From Anthropic's official docs, I found that the Primary Owner can request data exports that may include conversations, uploaded files, and usage patterns. But I'm not clear on:

- Can admins see chats in real-time, or only through a formal data export request?

- Is there any distinction between chats inside shared Projects vs. personal/private chats?

- Do uploaded files get included in those exports?

- Is there an audit log that shows what I've been doing, even without a full export?

Basically trying to understand how much visibility the admin actually has in practice, not just in theory. Anyone with Team/Enterprise plan admin experience who can shed light on this?

u/csmith262 — 16 hours ago
< 5 teams, no Claude privacy guarantee: Product Gap for Solo Practitioners/Solopreneurs
🔥 Hot ▲ 97 r/ClaudeAI

As you know, consumer-tier AI chat tools like ChatGPT and Claude explicitly state in their user TOS that the user has no right to privacy, and that the platform has the right to access the chats and even share them with third parties. Only business-tier subscriptions have more standard "privacy" verbiage in the user contracts, enough to assume standard privacy.

>⚠️ ETA: this is not a post about your data being physically or technically vulnerable on someone's server. I'm talking about consumer-plan TOS contract language asking you to waive your rights to privacy and grant the platform full access to your information, in writing. I'm reading a lot of comments about the other can of worms that is infosec and governance, and this is not about that. This is strictly about the fact that Anthropic deliberately does not provide individual business users the same basic privacy terms you would assume for your Gmail account. Google does, OpenAI does; Claude only does for teams of five or more. ⚠️

I’m a Claude Max subscriber and a solo practitioner/solopreneur with a few businesses. I’ve hit a structural gap in Anthropic’s pricing that neither Google nor OpenAI has.

Both Google Workspace plans (with Gemini) and OpenAI’s Business Plans allow me to purchase a single (or two) seats with enterprise-grade privacy protections. Anthropic’s business-tier options require a five-seat minimum. That means I either pay for three or four empty Max seats, or stay on the consumer Max plan, which lacks the contractual privacy posture I need.

This isn’t a theoretical concern. The recent US v. Heppner ruling (S.D.N.Y., Feb. 2026) held that communications with a consumer AI platform carry no reasonable expectation of confidentiality (page 6 of the judge's memorandum), based specifically on Anthropic’s consumer privacy policy. Legal commentary following the decision has recommended that, at the very least, consumer, team, pro, and starter tier AI products be presumed inadequate for business use involving sensitive information.

If you are an attorney representing a client rather than yourself, and using consumer AI, it's worse: it’s no longer just a privilege-waiver problem, it’s potentially an ethical-obligation problem under Model Rule 1.6 (duty of confidentiality) and Rule 1.1 (competence, which increasingly includes understanding the technology you use). Several state bars have already issued guidance that attorneys must understand the privacy implications of AI tools before inputting client information. After Heppner, using consumer AI for anything touching client matters is very hard to defend as reasonable.

So, in effect, Anthropic is saying that if you are a solo practicing attorney doing sensitive or client research, you cannot ethically use Claude. The same goes for individual business owners: Anthropic provides no path to protect your privacy.

This is unrelated to concerns about opting out of having chats used to improve models. It's about the privacy assumptions you can make as a user of Anthropic’s services, which are noticeably different across tiers. The landmark Heppner ruling makes it all the more pertinent: it makes every user accountable for ensuring their chats are considered reasonably private, especially when querying on behalf of clients.

I also prefer to use the frontier models’ native interfaces instead of API hookups, as the quality of the system-prompted reply and overall experience is markedly different (I am a non-developer and use the service mainly for research or problem solving).

In 2026, I’m sure there are many other fractional professional consultants and solo practitioners in the United States facing the same issue, who are either using Google Workspace or ChatGPT Business to secure basic privacy protections, or unknowingly using Claude without them, unless they have five seats to fill. While developers and other technical solopreneurs can easily hook up the API for their work, Anthropic is essentially stating that non-technical professionals with teams of fewer than five should not have the same business-level privacy access that Google and OpenAI offer solopreneurs as standard.

I like using Claude. It’s my primary AI chat platform. But I can’t justify routing sensitive business work through a consumer plan when both of Anthropic’s direct competitors offer business-grade privacy protections for a single seat, especially when I am doing exploratory work for the clients of my other businesses. Because of this, my activity on Claude has become more limited, and I’m wondering whether I even have a use for the Max plan anymore: the more critical the thinking I need it to do, the more cautious I have to be, since I cannot assume privacy. (Keep in mind we are talking about legal rights of access, not physical access, which is a whole different can of worms; the consumer plans on every platform grant the platform that legal access.)

Why can't Claude offer a single-seat business option, or a path for individual business users to access business-tier privacy terms without purchasing five seats? I did email Anthropic’s enterprise team through the website, and also directly via email, but the generic reply I received after a few days consisted of the current pricing and tier offerings for enterprise plans.

Would like to hear and understand what everyone else is doing.

u/thecosmojane — 22 hours ago
Opus generates too much slop; Bellman: No, it's a skill issue.
▲ 37 r/ChatGPT+2 crossposts

Sigrid Jin, featured in The Wall Street Journal on March 20 for using 25 billion Claude Code tokens, believes that Opus will not produce slop if used wisely.

Video: https://youtu.be/RpFh0Nc7RvA

u/shanraisshan — 18 hours ago

Trying to understand Claude’s usage limits — is Max worth it for coding and UI work?

I’ve been using Claude mainly to help improve the visual design of my app, since design isn’t really my strongest area.

What I’m trying to understand is whether my usage is normal or if I’m approaching it the wrong way. Even relatively small design changes in a single component with around 300–400 lines of code can take a noticeable chunk of my 5-hour limit. In some sessions, after just 30 or 40 minutes of work, I’m already very close to hitting it.
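
A rough back-of-envelope helps explain why small edits add up: Claude re-reads the whole file on most turns, and a common heuristic (not Anthropic's actual tokenizer) is about 4 characters per token:

```python
def approx_tokens(text: str) -> int:
    """Crude estimate: roughly 4 characters per token for English and code."""
    return max(1, len(text) // 4)

# A 350-line component at ~40 chars/line is ~14,000 chars, so ~3,500 tokens
# *per read*. Ten edit turns that each re-read the file cost ~35k input
# tokens before any output is counted -- which is why long sessions on one
# component chew through a 5-hour window.
component = "x" * (350 * 40)
per_read = approx_tokens(component)
```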

I understand the Pro plan has limits, so I’m not expecting unlimited usage. I’m just trying to figure out whether this is the typical experience for people using Claude for coding and UI-related work, or whether there’s a better way to structure prompts and workflows so usage lasts longer.

I’ve also considered the $100 Max plan, but before paying that much, I’d really like to know whether people are actually getting solid value from it in real development work.

For those of you using Max for programming, frontend work, or UI improvements: has the cost been worth it for you?

u/Working-Spinach-7240 — 17 hours ago
I'm not a developer — I used Claude to build a browser automation tool and open-sourced it
▲ 3 r/ClaudeAI+1 crossposts

Hey everyone, wanted to share a small project I built called Pilot.

I have no programming background — every line of code was written by Claude Code while I directed and tested it. Not sure if it's useful to anyone else but I learned a lot making it so figured I'd put it out there.

What it does:

It lets Claude Code control Chrome by reading the accessibility tree (the same structure screen readers use). Every clickable element gets a number, so the AI says "click 5" instead of guessing where things are on screen.
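
Roughly how a tool like this might number interactive nodes; this is a simplified stand-in, not Pilot's actual code (the dict shape and role names are my own):

```python
def index_clickables(tree: dict, counter=None, out=None):
    """Walk an accessibility-tree-like dict, numbering interactive nodes."""
    if counter is None:
        counter, out = [0], []
    if tree.get("role") in {"button", "link", "textbox"}:
        counter[0] += 1
        out.append(f'[{counter[0]}] {tree["role"]}: {tree.get("name", "")}')
    for child in tree.get("children", []):
        index_clickables(child, counter, out)
    return out

page = {"role": "document", "children": [
    {"role": "link", "name": "Home"},
    {"role": "button", "name": "Search"},
]}
# index_clickables(page) -> ['[1] link: Home', '[2] button: Search']
```

The agent then issues "click 2" and the extension resolves the number back to the real DOM node, so no pixel coordinates are ever involved.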

How you use it:

  1. Install the Chrome extension and start the server

  2. Type `/pilot` in Claude Code

  3. Ask it things like "go to YouTube and search for cooking tutorials"

I know there are similar tools out there, but this one is worth a try. If you find issues, I'd love to hear about them.

A few things that worked well:

- The page data is compact text instead of screenshots

- Multiple actions can be batched in one call

- It handles popups and works across tabs

What I learned building with AI:

- Describing what you want clearly is the hardest part

- Testing is still on you — Claude writes the code but you have to verify it actually works

- It took many iterations, not a one-shot thing

Free, MIT licensed, works on macOS/Linux/Windows.

GitHub: https://github.com/therealoess/Pilot

Full disclaimer: this was entirely built by AI, directed and tested by me.

u/omarsabbahi — 3 hours ago
I built a bridge to control Claude Code from Slack/Discord/Telegram – lying down on my couch
▲ 2 r/ClaudeAI+1 crossposts

Hey r/ClaudeAI,

I'm a former structural engineer with a neck disc from too many hours at my desk. So I built **tunaPi** — an open-source bridge that lets you control terminal AI agents (Claude Code, Codex, Gemini CLI, OpenCode) through any chat app, from your phone.

**How it works:**

```

Chat message → tunaPi → runs agent on your PC → returns result to chat

```

No extra tokens, no policy issues — it's just stdin/stdout to the agents you already have authenticated locally.
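
The stdin/stdout hand-off can be sketched in a few lines. This is a simplification of what a bridge like this does, not tunaPi's code, and `agent_cmd` is whatever CLI you already have authenticated:

```python
import subprocess

def run_agent(prompt: str, agent_cmd: list[str]) -> str:
    """Pipe a chat message into a local CLI agent and capture its reply."""
    result = subprocess.run(
        agent_cmd,
        input=prompt,          # the chat message goes in on stdin
        capture_output=True,   # the agent's answer comes back on stdout
        text=True,
        timeout=600,
    )
    return result.stdout.strip()

# e.g. run_agent("summarize the diff", ["claude", "-p"]) would shell out to
# Claude Code's print mode; the bridge just relays the string back to chat.
```

Everything else (transports, branching, roundtables) is plumbing around this one call.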

**What it does:**

- 💬 Control Claude Code from Slack, Mattermost, Discord, or Telegram

- 🤝 Multi-agent roundtable — `/rt "review this architecture"` makes Claude, Gemini, and Codex debate each other

- 🌿 Conversation branching — fork a message, explore, then adopt back to main

- 📱 Mobile-friendly — I literally built ~80% of my follow-up project lying down on my phone

- 🔀 Per-channel project/engine mapping

- 🔄 Cross-session context — agents share context across sessions via context packs

**Quick install:**

```

uv tool install -U tunapi

# Then paste the setup prompt to your agent of choice

```

Repo: https://github.com/hang-in/tunaPi

Tests: 3,483 / Coverage: 81% / License: MIT

Started as a fork of banteg's takopi (Telegram only) — added Mattermost, Slack, Discord transports, then the roundtable feature kind of took over 😄

Happy to answer questions or take feature requests!

u/d9ng-hang-in2 — 2 days ago

Sonnet rate limits are forcing me to rethink my whole workflow

I live in Claude Code with Sonnet on Middle Effort. Works great until Thursday or Friday hits and I slam the rate limit, then I'm stuck switching to Opus for things that don't need it. It's annoying enough that I'm actually thinking about how to design my work differently.

The frustrating part isn't that limits exist - it's that Anthropic clearly knows Sonnet is the workhorse model and set the ceiling knowing that. I get why from their side, but as someone who uses this daily for refactoring and architecture work, it forces me into these awkward moments where I have to decide: do I wait, or do I burn Opus tokens on something that would've been fine with Sonnet?

I'm genuinely curious how others handle this. Are you batching work differently? Switching models strategically? Or do you just accept the friction and use Opus when you need it? The ideal would be some way to know in advance what actually needs Opus intelligence versus what Sonnet can handle, but that's basically asking the model to rate its own capability.

u/Temporary_Layer7988 — 21 hours ago
▲ 4 r/coding+12 crossposts

A browser-accessible tmux setup that surfaces terminals waiting on input instead of making me hunt for them

I keep ending up with a pile of long-running terminal sessions: deploys, log tails, migrations, and lately a bunch of Claude Code runs. The annoying part isn’t starting them, it’s figuring out which tab/session actually needs me.

This was useful because it treats terminals as persistent sessions and adds a simple “needs action” layer on top, so the ones blocked on input/approval float up instead of getting lost in the pile. Under the hood it’s basically ttyd + tmux, but wrapped in a way that makes reopening from a browser/desktop/phone less janky than my usual setup.
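
The "needs action" layer presumably boils down to pattern-matching the tail of each pane. A guess at the mechanism, sketched with my own example patterns (not the tool's actual heuristics):

```python
import re

# Heuristic patterns suggesting a terminal is blocked waiting on a human.
WAITING = re.compile(
    r"(\[y/n\]|password:|continue\?|Do you want to proceed)",
    re.IGNORECASE,
)

def needs_attention(pane_tail: str) -> bool:
    """True if the last line of a pane looks like an input/approval prompt."""
    lines = pane_tail.strip().splitlines()
    return bool(lines) and bool(WAITING.search(lines[-1]))
```

In practice the tail would come from something like `tmux capture-pane -p -t <pane>`; panes that match float to the top, the rest stay in the pile.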

A couple things I liked:

  • sessions survive browser closes and reconnects cleanly
  • grid view is handy when you want to watch multiple jobs at once
  • descriptions are auto-generated, which is nicer than trying to remember what dev-7 was doing
  • sharing a session for pair debugging is less painful than screen sharing a terminal

Mostly posting because this feels relevant to the “too many terminals, not enough attention” problem.

This software's code is partially AI-generated.

claudecursor.com
u/lymn — 9 hours ago
I got tired of watching Claude Code spawn 10 agents and having absolutely no idea what they're doing, so I built this

https://reddit.com/link/1sb4d2v/video/lmthocwjswsg1/player

Been using Claude Code heavily with agent teams and hit the same wall every time - I kick off a task, agents start spawning, and I am just... watching a terminal scroll. No idea which agent is doing what, why something's taking forever, or where my tokens are going.

So I spent 3 days building AgentPeek with Claude Code.

It hooks directly into Claude Code and gives you a live dashboard:

  • Agent orchestration — who spawned who, what's parallel vs sequential, the full team hierarchy as a live directed graph
  • Execution traces — every tool call with full inputs/outputs, retries, failures, and timing
  • Prompts & results — the exact prompt each agent received and what it returned
  • Cost attribution — per-agent token estimates so you know which agent is burning your budget
  • Stuck detection — real-time alerts when an agent is looping on the same failed call
  • Files touched — which agents read, wrote, edited, or deleted which files
  • Session replay — full chronological event log for post-session debugging
  • Cross-session baselines — track agent performance over time in plain English
  • Bottleneck analysis — identify the slowest agent, wasted work, and parallelism gaps
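
Of these, stuck detection is the easiest to reason about: a minimal version (my sketch, not AgentPeek's implementation) just checks whether the last few tool calls are the same failing call repeated:

```python
from collections import deque

def is_stuck(events: list[dict], window: int = 3) -> bool:
    """Flag an agent that repeated the same failed tool call `window` times in a row."""
    recent = deque(maxlen=window)          # keep only the last `window` events
    for ev in events:
        recent.append((ev["tool"], ev["args"], ev["ok"]))
    return (
        len(recent) == window              # enough history to judge
        and len(set(recent)) == 1          # all identical
        and recent[0][2] is False          # and failing
    )

log = [{"tool": "bash", "args": "pytest -x", "ok": False}] * 3
# is_stuck(log): same failing call three times running -> stuck
```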

Install is just:

git clone https://github.com/TranHuuHoang/agentpeek.git
cd agentpeek
pipx install -e .
agentpeek

Hooks auto-install into Claude Code settings. Dashboard opens at localhost:8099.

Free, open source, MIT licensed. All data stays fully local on your machine - nothing goes to any server.

It's early and rough in places - would love to know what's missing or what you'd want to see next. Contributions are welcome!

GitHub: https://github.com/TranHuuHoang/agentpeek

u/OpenDoubt6666 — 23 hours ago
Week