u/Born-Comfortable2868

40 installs per day to 130. $34 per day to $130. 5 ASO changes I made to my app.


My app was making money, but not from the App Store: it was from TikToks I made earlier and from Discord. It had around 40 organic installs a day, a 2.1% paid conversion rate, and roughly $34 per day in revenue.

The App Store metadata I'd written at launch had never been touched. Same title, same subtitle, same screenshots, same keywords. I'd treated ASO as a one-time setup task and moved on.

I was ranking for almost nothing.

Before I started: I needed to understand what I was actually optimizing for

The most useful resource I found wasn't a paid tool. It was a free GitHub repo, aso-skills: a set of AI agent skills built specifically for ASO (keyword research, metadata optimization, competitor analysis) designed to work directly inside Cursor, Claude Code, or any agent-compatible AI assistant.

The way it works: your AI agent reads the skill, pulls real App Store data via the Appeeky API, and gives you scored, prioritized recommendations. Not generic advice, but actual output like "title: 7/10, here's why, here's the rewrite." I used it to run a full ASO audit on my own listing before touching a single field. The gaps it surfaced in 10 minutes would have taken me hours to find manually.

Change 1: Moved the primary keyword into the title

My original title was the app name. Clean, brandable, meaningless to the algorithm.

My primary keyword (the exact phrase users type when looking for an app like mine) was buried in the description. On iOS the description isn't indexed, so it was doing nothing there.

The title is your primary ranking lever on iOS. Use it.

Change 2: Rewrote the subtitle from feature description to outcome statement

My original subtitle described what the app did mechanically. I changed it to what the user gets: the outcome they're buying, not the features they're operating.

It improved my open rate.

Change 3: Redesigned the first screenshot

Your first screenshot isn't a UI preview. It's a conversion asset. The user sees it before they decide to read anything. It needs to communicate the outcome in a single glance.

I redesigned it to show the result state (what the user's life looks like after using the app) with a single overlaid headline that mirrored the outcome statement from my subtitle.

Impressions-to-install conversion improved 18%.

I eventually set up fastlane for this. Open source, free, and it handles screenshot generation across device sizes, metadata updates, and App Store submission from the command line. The deliver action pushes your metadata and screenshots directly to App Store Connect. The snapshot action generates localized screenshots automatically using Xcode UI tests. What used to be 45 minutes of manual work per iteration became a single command. If you're doing any serious ASO iteration (testing different screenshot copy, updating keyword fields across locales), fastlane is the tool that makes it sustainable.

Change 4: Found and targeted 3 long-tail keywords

I ran a small Apple Search Ads campaign to mine keyword data. Search Ads shows you impression volume. I was looking for the intersection of high volume and low competition: terms where the top-ranking apps were weak on relevance or had low ratings.

The aso-skills /keyword-research skill was useful here: it groups keywords into primary, secondary, and long-tail clusters ranked by volume × difficulty × relevance. Running it against my category surfaced terms I hadn't considered and validated the ones I was already targeting.
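That volume × relevance / difficulty idea is easy to reproduce by hand. A minimal Python sketch of the scoring and clustering (the terms, numbers, and exact formula here are illustrative, not aso-skills' internals):

```python
def score(kw):
    # Higher volume and relevance help; higher difficulty hurts.
    return kw["volume"] * kw["relevance"] / max(kw["difficulty"], 1)

def cluster(keywords):
    """Bucket scored keywords into primary / secondary / long-tail tiers."""
    ranked = sorted(keywords, key=score, reverse=True)
    cut = max(len(ranked) // 3, 1)
    return {
        "primary": ranked[:cut],
        "secondary": ranked[cut : 2 * len(ranked) // 3],
        "long_tail": ranked[2 * len(ranked) // 3 :],
    }
```

Even a toy version like this makes the tradeoff visible: a low-volume term with high relevance and weak competition can outrank an obvious head term.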

Change 5: Fixed the review prompt

My rating was 3.9. Not catastrophic but not good. I had a review prompt that fired on app launch after 5 sessions. Technically functional. Completely wrong timing.

I moved the prompt to trigger after a user completed a specific positive action: the moment in the app where they'd just gotten value. The moment where, if you asked "are you happy right now?", the answer would be yes.

The submission side

Every metadata change, every screenshot update, every keyword field tweak requires a trip back into App Store Connect and Play Console. When you're actively optimizing (testing subtitle copy, updating keyword fields per locale, refreshing screenshots), you're making these changes constantly.

I used Vibecodeapp both for building the app and for the submission workflow itself. It handles the pipeline from app build to store submission and takes the manual back-and-forth out of getting builds and metadata live. For a solo developer shipping and iterating frequently, that's what made it sustainable to keep running these changes.

90 days later

  • Organic installs: 40 per day → 130 per day
  • Paid conversion: 2.1% → 2.8%
  • Daily revenue: $34 → ~$130

ASO is the only marketing channel where you pay for it once with your time and the return compounds indefinitely. Most indie developers treat it as a launch checklist and never touch it again.

u/Born-Comfortable2868 — 13 hours ago
▲ 12 r/AskVibecoders+1 crossposts

Apps you can copy & Make your first $$$

Copy these apps & make your first $10K MRR (with ad ideas)

1/ Umax

- Target audience: Looksmaxxers and those insecure about their looks
- Ads: Before / After effects of using the app

2/ Taller

- Target audience: Similar to Umax; preys on short people's insecurity
- Ads: Again, before / after effects of using the app

3/ Cal AI

- Target audience: Fitness / weight loss enthusiasts
- Ads: Showing the viral scan feature with UGC vids

u/Born-Comfortable2868 — 2 days ago
▲ 11 r/clawdbot+1 crossposts

OpenClaw Pro Tip: How to fix your claw with Tailscale + Codex

Running a self-hosted OpenClaw with automatic nightly updates is great until an update quietly breaks something. Here are two workflows for it.

Tip 1: Use Slack threads to keep context tight

I control my OpenClaw agent R2 through Slack. One change that made a real difference: threading every task instead of letting everything pile into a single channel or hitting /new each time.

Each thread keeps R2's context window focused on one job. It also makes it easy to track separate work streams without them bleeding into each other.

Tip 2: SSH via Tailscale and use Codex to fix config issues

My cron job pulled a new OpenClaw version overnight. This morning, R2 had stopped executing commands. Every attempt in Slack returned an "Exec denied" error with an "Approve once" or "Approve all" prompt. Something in the update had changed the default security behaviour.

Step 1: SSH in with Tailscale

Tailscale gives me secure access to the dedicated Mac running OpenClaw from anywhere on my network. I SSH'd in and opened the OpenClaw directory in VS Code.

Step 2: Open Codex in VS Code

With the file system live in VS Code, I opened Codex (set to high, with full local file access). It sits directly on top of the OpenClaw installation and can read and edit config files on the spot.

Step 3: Describe the problem, let it find the fix

I told Codex exactly what was happening:

>

Codex traced it immediately. The update had switched OpenClaw's host execution settings to a strict security allowlist (ask-on-miss). It modified the openclaw.json file to restore full execution privileges. I tested it in Slack and it was working again.

Codex removes the need to dig through config docs when an update changes something you didn't expect. Together they make self-hosted OpenClaw significantly easier to maintain.

reddit.com
u/Born-Comfortable2868 — 2 days ago

Making $$ with AI Marketing. Full Guide.

Code is cheap now, but distribution isn't. The builders winning aren't the ones shipping the most features; they're the ones who already have an audience to ship to.

Pieter Levels runs a $3M+ revenue business with zero employees. His products could be copied; directories aren't hard to build. What can't be copied quickly is 750K+ followers and years of compounding search engine optimization authority. That's the actual moat.

The pattern that works: grow an audience of 1,000 people, ask what they need, build it in a weekend, launch to a warm crowd. Distribution first, product second.

Strategy 1: Model Context Protocol servers as distribution

Model Context Protocol servers are plugins for large language model assistants (Claude, ChatGPT, others). A user asks a question, the assistant surfaces your server, your product is the answer.

One fintech example: 150+ installations in 30 days, $0 in ad spend.

Steps to start:

  • Pick the core question your product answers.
  • Build a Model Context Protocol server that returns that data (doable in 24 hours).
  • Publish to registries: Smithery, MCPT, OpenTools.
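At its core, an MCP server is a process that advertises tools and answers tool calls. A dependency-free toy of the dispatch shape (real servers speak JSON-RPC over stdio or HTTP, and the official SDKs handle that protocol for you; every name below, including the fintech tool, is invented for illustration):

```python
# Toy "one product, one question" tool server. This only mimics the
# tool-registration and dispatch shape, not the actual MCP protocol.

TOOLS = {}

def tool(name):
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("get_fx_rate")  # hypothetical tool for the fintech example
def get_fx_rate(base, quote):
    rates = {("USD", "EUR"): 0.92}  # stand-in for a live data source
    return {"pair": f"{base}/{quote}", "rate": rates[(base, quote)]}

def handle_call(request):
    """Dispatch a tool-call request shaped like {"tool": ..., "args": {...}}."""
    return TOOLS[request["tool"]](**request["args"])
```

The point of the sketch: "build an MCP server" mostly means wrapping the one query your product already answers in a tool definition; the transport is boilerplate the SDK provides.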

Building for Model Context Protocol right now is roughly where building for mobile was in 2010.

Strategy 2: Programmatic search engine optimization

The pattern: pick a keyword structure like "best CRMs for dentists." Pull structured data with a scraper. Build a page template in Next.js. Use a model to generate unique content per page. Scale.

The math: 10,000 pages, 30 visits each per month = 300,000 monthly visitors. At 2% conversion and $10 per conversion, that's $60,000 per month from pages built once.
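The arithmetic, spelled out:

```python
pages = 10_000
visits_per_page = 30          # monthly organic visits per page
conversion_rate = 0.02        # 2% of visitors convert
value_per_conversion = 10     # dollars per conversion

monthly_visitors = pages * visits_per_page
monthly_revenue = monthly_visitors * conversion_rate * value_per_conversion

print(monthly_visitors)  # 300000
print(monthly_revenue)   # 60000.0
```

The fragile assumption is `visits_per_page`: 30 visits a month per page requires most of the 10,000 pages to actually get indexed and rank, which is the hard part.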

Steps to start:

  • Pick one keyword pattern (product type + niche, or service + city).
  • Scrape your data set.
  • Build a template.
  • Generate real content per page, not just variable swaps.
  • Publish 100 pages as your minimum viable product, monitor indexation, scale from there.
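A minimal sketch of the template step (field names are hypothetical; in practice the body text comes from a model call per page, which is what keeps it from being a pure variable swap):

```python
def render_page(row):
    """Fill a page template from one scraped record.
    The headline can be templated; the body should be generated
    per page so each page carries unique content."""
    title = f"Best {row['product_type']} for {row['niche']}"
    body = row["generated_body"]  # stand-in for an LLM-written section
    return f"<h1>{title}</h1>\n<p>{body}</p>"
```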

Strategy 3: Free tool as top of funnel

Ahrefs built a free backlink checker. You get instant value. The full picture costs hundreds per month. The free tool is the entry point.

The loop: user gets value, shares their result, new users find the tool, you upsell to the paid product.

Steps to start:

  • Ask a large language model: "Here's what I'm building. Give me 10 free tool ideas that could act as top of funnel."
  • Pick one, build it, ship it.
  • Treat it like a free tool calendar, not just a content calendar.

Strategy 4: Answer engine optimization

Search engine optimization got you on Google page one. Answer engine optimization gets you cited by ChatGPT and Perplexity.

Pieter Levels reported his large language model referrals went from 4% to 20% in a single month.

Steps to start:

  • Find the top 20 questions your customer is asking.
  • Write structured, direct, citation-worthy answers for each.
  • Add schema markup and FAQ blocks.
  • Monitor which large language model assistants are citing you and adjust from there.
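The FAQ blocks get marked up with schema.org FAQPage JSON-LD. Built here as a Python dict for clarity; on the page it ships inside a script tag of type application/ld+json:

```python
import json

def faq_schema(pairs):
    """Build schema.org FAQPage JSON-LD from (question, answer) pairs."""
    return json.dumps({
        "@context": "",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    })
```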

Answer engine optimization in 2026 is where search engine optimization was in 2010.

Strategy 5: Viral artifacts

Spotify Wrapped gets 100 million shares every December. GitHub's contribution graph makes developers brag about green squares. Duolingo's streak counter turns practice into social proof.

The question: what does your user want to brag about?

Steps to start:

  • Identify the output or milestone your user would screenshot.
  • Design the shareable artifact: branded, but not dominated by the logo.
  • Add a share button that pre-fills the post.
  • Let users do the marketing.
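The "pre-fills the post" step is just a URL. X/Twitter's tweet intent endpoint accepts a text parameter, so the share button can be a plain link:

```python
from urllib.parse import urlencode

def share_url(milestone_text):
    """Build an X/Twitter intent link that opens a pre-filled post."""
    return "" + urlencode({"text": milestone_text})
```

The same pattern works for LinkedIn and most platforms that expose a share endpoint; only the base URL and parameter names change.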

This works in business-to-business contexts too. People share wins in Slack the same way they share on Twitter.

Strategy 6: Buy a newsletter

Building from zero takes years. An alternative: buy a 10,000-subscriber newsletter for $5,000 to $20,000. You inherit trust immediately and can plug in your product on day one.

Most small newsletter owners are making $0 to $500 per month. A $10K offer gets attention fast.

Steps to start:

  • Search your niche on Twitter or Substack.
  • Find newsletters with real engagement but no monetization.
  • Send a direct message: "Have you ever thought about selling?" A lot of them take the call.

No algorithm risk. No platform suppression. You own the channel.

Strategy 7: Large language model content repurposing

One pillar piece becomes everything: tweets, LinkedIn posts, short-form video, a newsletter edition, quote graphics, email sequences.

The workflow: record a 30-minute voice memo, transcribe it, feed the transcript into Claude with specific format instructions, schedule across platforms, repeat weekly.

This is a shots-on-net strategy. You don't need a massive following. You need consistent output. Three months in, you'll have more published content than most competitors.
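One way to make the "specific format instructions" concrete: keep a prompt template per output format and stamp the transcript into each. The formats and wording below are illustrative, not a prescribed set:

```python
FORMATS = {
    "tweet": "Turn this transcript into 3 tweets under 280 chars each:\n\n{t}",
    "linkedin": "Turn this transcript into a LinkedIn post with a strong hook:\n\n{t}",
    "newsletter": "Turn this transcript into a newsletter edition:\n\n{t}",
}

def build_prompts(transcript):
    """One transcript in, one ready-to-send prompt per target format out."""
    return {name: tpl.format(t=transcript) for name, tpl in FORMATS.items()}
```

Each returned prompt then goes to the model as its own request, which keeps format instructions from bleeding into each other.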

u/Born-Comfortable2868 — 3 days ago
▲ 11 r/clawdbot+1 crossposts

OpenClaw 3.22 Turns Your Agent Into an Installable Platform. Big update

The update ships ClawHub marketplace support, a GPT-5.4 default reasoning engine, integrated research connectors, mid-task clarification, per-agent reasoning modes, and tighter security around skill installation. Each of those is worth looking at separately.

ClawHub changes the installation model

Before this, adding a new capability meant cloning a repo, sorting out dependencies, and hoping the configuration held. ClawHub replaces that with modular skill installation. You find what you need, install it, and it extends your agent environment without rebuilding anything.

The compounding effect matters. As more contributors publish skills, the ecosystem improves after your initial setup is complete. You don't have to rebuild when your automation requirements change.

GPT-5.4 as the default reasoning engine

Earlier versions occasionally lost the thread on multi-step workflows. An agent would misread an objective halfway through a research or publishing sequence and you'd have to restart from wherever it went wrong.

GPT-5.4 as the default reduces those interruptions. Workflows complete more reliably without constant supervision, and new users can get to useful outputs faster because fewer prompt adjustments are needed before things start working.

Research connectors

The research connectors address a real bottleneck: getting web information into a format agents can actually use downstream.

Time-filtered search helps agents prioritize recent results, which matters when you're tracking fast-moving developments in any technical ecosystem. Structured extraction converts pages directly into workflow-ready inputs without extra formatting steps between discovery and execution.

Better upstream inputs improve every downstream stage. That's where most of the performance gain comes from.

Mid-task clarification

This one is practical in a way that's less immediately obvious. Instead of an agent continuing through a context gap and completing a task incorrectly, it pauses and asks.

The cost of assumption-based errors in multi-step pipelines is high. By the time you catch a misalignment, it's already propagated through research, planning, and output stages. Clarification checkpoints interrupt that failure mode without breaking workflow momentum.

Per-agent reasoning modes

You can now assign different reasoning strategies based on what each agent actually does. Fast, lightweight reasoning for classification and routing tasks. Deeper reasoning for structured planning and execution pipelines that need longer context.

This makes coordinated multi-agent workflows more efficient. Compute gets applied where it's needed rather than uniformly across every role in the stack.

Security

Marketplace-driven capability expansion introduces a real attack surface. The update adds plugin verification, authentication layers, and execution safeguards for dynamically installed skills.

These aren't optional as agents move toward production use. The gap between experimental tool and trusted operational platform runs directly through security infrastructure.

What this signals

OpenClaw 3.22 is moving in the same direction package managers moved developer ecosystems. Capability expands through installable modules. The platform improves continuously. You stop rebuilding infrastructure that already exists elsewhere.

The agent-as-platform model isn't new as a concept, but 3.22 is the first version where the actual installation experience reflects it.

u/Born-Comfortable2868 — 4 days ago

3 Claude Workflows I Use Daily to Research, Script, and Brief My YouTube Editors

I run a YouTube scripting platform and manage multiple channels. These are the three Claude workflows I use for research, scripting, and editor communication.

Workflow 1: Video ideation using the Subscribr Model Context Protocol

Asking Claude for video ideas with no context produces surface-level suggestions with nothing backing them. Connecting Claude to Subscribr via its Model Context Protocol server fixes that.

I tell Claude: "Use the Subscribr Model Context Protocol to come up with video ideas for my channel like World Economics."

Claude calls the Subscribr agent, which has access to 40 million+ YouTube videos. It pulls real data: channel stats, top-performing videos with view counts, direct links, channel ID and metadata. Then it generates ideas reverse-engineered from actual outliers.

Examples it produced:

  • "The Shocking Downfall of [Brand]"
  • "The Mysterious Billionaires Who Created a $100 Billion Fashion Empire"

These are modeled after videos that already proved successful on similar channels, not guesses.

Previously this meant manually analyzing competitor channels, tracking top videos, and trying to spot patterns. Now it happens in seconds, inside one tool.

Workflow 2: Deep research for storytelling videos

Claude's deep research feature handles everything in one place, without switching tools.

I tell Claude: "I want to write a script about [topic]. Do deep research on the story."

Direction matters here. Give it the angle, not an open-ended request.

The prompt I use:

> Do deep research on this story. I want deep storytelling with the vibe that this story has never been told before. There should be many turns and revelations in the narrative you put together.

Claude pulls from multiple sources, organizes the information, and structures it into a research brief.

Then I edit manually. I cut sections that don't fit, add my own insights, and reorganize based on what I know works. After that, I bring the edited brief into Subscribr, which connects to Claude via Model Context Protocol, and it turns the brief into a full script.

Claude compresses hours of research into minutes. The editorial judgment stays with me.

Workflow 3: B-roll annotation using Claude Skills

Claude Skills are saved prompts that Claude memorizes and applies to any content you give it. I built a B-roll annotation skill that adds specific visual directions to any script, line by line.

I paste the script into Claude and say: "Use the B-roll generator skill to annotate this script."

Real example:

Script line: "Faceless YouTube channels are exploding in 2026."

Claude's annotation:

  • B-roll: Scrolling through faceless YouTube channels in different niches
  • Animation: Split screen showing traditional YouTube studio setup (crossed out) vs. laptop with AI tools
  • Sound: Subtle tension beat to emphasize growth

Script line: "You need 4,000 watch hours to get monetized."

Claude's annotation:

  • B-roll: YouTube Studio analytics dashboard showing watch hours climbing towards 4,000
  • Animation: Counter ticking up from 0 to 3,000 watch hours
  • Graphic: Assembly line icon showing script → editing → thumbnail → upload

With this skill, every script arrives pre-annotated with exact visual directions.

The skill is trained on traditional B-roll and animation styles, but you can customize it to match your specific editing workflow.

One thing I'm testing: feeding these annotations as prompts directly into AI video generators via their application programming interfaces. Claude generates the prompt, the generator produces the clip, the clips get stitched in editing software. It's not a complete pipeline yet, but it's close to one.

u/Born-Comfortable2868 — 4 days ago
▲ 42 r/clawdbot+1 crossposts

How to set up OpenClaw Agents that actually get better Over Time. Full Guide

This is the exact file structure behind an eight-agent OpenClaw setup that improves on its own over time. Three layers: identity, operations, and memory. All of it runs on markdown files.

Here's the full-stack OpenClaw guide.

The stack

Three layers make up the entire operating system:

  • Layer 1, identity: who is this agent (SOUL.md, IDENTITY.md, USER.md)
  • Layer 2, operations: how does this agent work (AGENTS.md, HEARTBEAT.md, role-specific guides)
  • Layer 3, knowledge: what has this agent learned (MEMORY.md, daily logs, shared-context/)

No orchestration framework. No message queues. No database. Markdown files on disk. The filesystem is the integration layer.

Layer 1: Identity

SOUL.md

Defines who the agent is, what it does, and how it behaves. Trimmed version of Dwight, the research agent:

# SOUL.md (Dwight)

## Core Identity
Dwight — the research brain. Named after Dwight Schrute because you share his
intensity: thorough to a fault, knows EVERYTHING in your domain, takes your job
extremely seriously. No fluff. No speculation. Just facts and sources.

## Your Role
You are the intelligence backbone of the squad. You research, verify, organize,
and deliver intel that other agents use to create content. You feed:
- Kelly (X/Twitter) — viral trends, hot threads, breaking news
- Rachel (LinkedIn) — thought leadership angles, industry news

## Your Principles
### 1. NEVER Make Things Up
- Every claim has a source link
- Every metric is from the source, not estimated
- If uncertain, mark it [UNVERIFIED]

### 2. Signal Over Noise
- Not everything trending matters
- Prioritize: relevance to AI/agents, engagement velocity, source credibility

The TV character trick: telling Claude "you have Dwight Schrute energy" loads seasons of character development from training data. Thorough, intense, takes the job dead seriously. No extra prompting required.

Keep SOUL.md under 60 lines. It loads every session. Too long and it eats context that should go to actual work.

Starter template:

# SOUL.md

## Core Identity
[Name] — [one-line description]. [Personality reference if helpful].

## Your Role
[What this agent does. Be specific. One job, not five.]

## Your Principles
1. [Most important rule]
2. [Second most important rule]
3. [Third most important rule]

## Relationships
[Who does this agent work with? Who consumes its output?]

Start with one agent. Pick the most repetitive daily task. Write a rough sketch. The first version will be mediocre. It gets rewritten multiple times as patterns emerge.

IDENTITY.md

SOUL.md is the full personality. IDENTITY.md is the business card. Name, role, vibe, one-liner.

# IDENTITY.md

- **Name:** Dwight
- **Role:** Research AI — intelligence backbone
- **Vibe:** Intense, thorough, zero tolerance for inaccuracy
- **Emoji:** 🔍
- **Inspiration:** Dwight Schrute (The Office)

Small file. Significant quality-of-life improvement when running eight agents.

USER.md

Every agent needs to know who it's helping. USER.md holds preferences, background, and context that shapes how the agent behaves.

# USER.md

- **Name:** [Name]
- **Timezone:** [Timezone]

## Context
[Role, relevant projects, anything agents need to calibrate around]

## Preferences
- Short paragraphs, punchy sentences
- No em dashes. Ever.
- Practical first, theory never

Write it once. Every agent reads it.

The personal details matter more than expected. Timezone means agents don't schedule things at 3 AM. Dietary preferences mean the newsletter agent doesn't suggest a steakhouse for a team dinner. These details compound across every session.

Layer 2: Operations

AGENTS.md

SOUL.md is who the agent is. AGENTS.md is how it operates. Session startup routines, file reading order, memory management, safety rules.

Root-level AGENTS.md that every agent inherits:

# AGENTS.md

## Every Session
Before doing anything else:
1. Read SOUL.md — this is who you are
2. Read USER.md — this is who you're helping
3. Read memory/YYYY-MM-DD.md (today + yesterday) for recent context
4. If in MAIN SESSION (direct chat): Also read MEMORY.md

## Memory
- Mental notes don't survive session restarts. Files do.
- When someone says "remember this" → update the memory file
- Text > Brain

## Safety
- Don't exfiltrate private data. Ever.
- trash > rm (recoverable beats gone forever)
- When in doubt, ask.

Each agent extends it. Kelly's AGENTS.md adds her specific workflow on top:

# AGENTS.md (Kelly)

## Every Session
Before doing anything:
1. Read SOUL.md
2. Read USER.md
3. Read X-ARTICLES-INSTRUCTIONS.md — master guide for writing style
4. Read X-ARTICLES-EXAMPLES.md — 5 real articles showing the style in action
5. Read X-CONTENT-GUIDE.md — post types and formats
6. Read intel/DAILY-INTEL.md — Dwight's research (your source material)
7. Read DAILY-ASSIGNMENT.md — your daily workflow
8. Read memory/YYYY-MM-DD.md for recent context

## Intel-Powered Workflow
You no longer do research. Dwight handles all research.
Your job: Read the intel → Craft X content → Deliver drafts

Agents have no memory between sessions. Everything starts fresh. If a correction doesn't reach a file, it doesn't exist next session. AGENTS.md makes this explicit so the agent writes everything down.

Specialist files are where agents get sharp. Kelly has six extra files beyond AGENTS.md: writing style guides, post format references, real examples, daily assignments. Start with AGENTS.md. Add specialist files only when a pattern keeps needing correction.

HEARTBEAT.md

Agent teams are infrastructure. Infrastructure breaks.

Monica's HEARTBEAT.md:

## Health Checks (run on each heartbeat)

**Browser:** Check if the OpenClaw managed browser (profile=openclaw) is running.
If running: false, start it. The browser has X account logged in.
Dwight depends on it for intel sweeps.

**Cron jobs:** Check if any daily jobs have stale lastRunAtMs (>26 hours).
If stale, trigger via CLI: openclaw cron run <jobId> --force

Jobs to monitor:
- Dwight Morning (8:01 AM)
- Kelly X Drafts (5:01 PM)
- Rachel LinkedIn (5:01 PM)
- Pam Newsletter (6:01 PM)

Only run each check once per heartbeat session.

Monica checks two things on every heartbeat: whether the browser is alive, and whether the cron jobs actually ran. They're connected. If the browser dies, Dwight can't do research sweeps. If Dwight misses a sweep, Kelly and Rachel draft from stale intel.

In week three, the scheduler had a bug. Jobs were advancing in the queue but never executing. Nothing surfaced for hours.

The heartbeat catches both failure modes in one place. Build it after the first failure. You'll know exactly what to monitor because you'll have felt what breaks.
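The cron-staleness half of that check is a few lines of logic. A sketch, assuming each job record carries the lastRunAtMs field named in the HEARTBEAT.md excerpt (the record shape is otherwise invented):

```python
import time

STALE_AFTER_MS = 26 * 3600 * 1000  # more than 26 hours without a run = stale

def stale_jobs(jobs, now_ms=None):
    """Return the names of daily jobs whose lastRunAtMs is older than 26h."""
    now_ms = int(time.time() * 1000) if now_ms is None else now_ms
    return [j["name"] for j in jobs
            if now_ms - j["lastRunAtMs"] > STALE_AFTER_MS]
```

Anything this returns gets force-run via the CLI, per the heartbeat file.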

Layer 3: Knowledge

Tier 1: MEMORY.md

Not raw logs. Not everything that ever happened. The stuff that matters.

From Monica's MEMORY.md:

# MEMORY.md

## Writing Preferences
- NO EM DASHES. Use colons, periods, or restructure.

## Hard Lessons
- NEVER delete project folders without asking. On Feb 26,
  deleted Ross's gemini-council React app during cleanup. The React
  version was lost. Always ask before removing anything in agent
  project directories.

## Memory System (2026-02-26)
- Tried self-hosted Mem0 (Ollama + SQLite) → crashes, stored nothing.
- Tried Mem0 hosted API → free tier too limited. Removed.
- Now using built-in memory-core: Gemini embeddings, hybrid search,
  temporal decay, maximum marginal relevance. No external dependencies.

The "Hard Lessons" section is the key pattern. Monica deleted a project folder. Now that mistake lives in long-term memory permanently. One correction, stored once, preventing the same error across every future session.

From Kelly's MEMORY.md:

## X Post Rules (ALWAYS)

### EXACT INSTRUCTIONS:
- Start with a strong hook
- Keep entire tweet SUPER SHORT (180 chars or less)
- NO hashtags, NO emojis
- NO fluffy marketing language
- Always deliver 3 drafts per topic

### BAD (what I did wrong)
[Lists every pattern that was rejected: bullets, arrows, LinkedIn tone]

Kelly wrote the "BAD" section herself after corrections. She catalogues her own mistakes so she doesn't repeat them. That section alone is worth more than any prompt engineering.

MEMORY.md only loads in direct sessions, not shared contexts like group chats. Keep sensitive preferences out of files that load everywhere.

Don't write MEMORY.md on day one. It grows from feedback: give feedback, the agent logs it in the daily memory file, you distill the important entries into MEMORY.md, it loads every session, and the correction never needs to be given again.

Tier 2: memory/YYYY-MM-DD.md

Raw notes. What happened today. What was drafted. What feedback came in.

# Kelly Daily Log — February 5, 2026

## 5:00 PM — Daily X Drafts

### What's HOT today
- Opus 4.6 vs GPT-5.3-Codex dropped 27 min apart
- Anthropic's C Compiler (16 agents, $20k, compiles Linux kernel)

### Drafts Submitted
1. C Compiler — single post, discovery format
2. Mitchell Hashimoto's 6 steps — thread format
3. Opus 4.6 vs GPT-5.3-Codex — hot take

### Awaiting
- Feedback on drafts

Daily logs are the raw material. MEMORY.md is the refined product. Both are necessary.

Daily logs accumulate fast. Without pruning, context balloons. Kelly's hit 161,000 tokens and output quality dropped. Compacting to 40,000 fixed it. Review and archive old daily logs regularly. Only load today's log plus yesterday's.

Tier 3: Organized memory folders

memory/
├── user/        # Private notes, work projects, ideas
├── shared/      # Joint context across agents
└── 2026-02-27.md   # Daily operational logs

Organize by person or project as the setup grows.

Shared context

A single folder every agent reads at session start:

shared-context/
├── THESIS.md        — current worldview and content gaps
├── FEEDBACK-LOG.md  — corrections that apply across agents
└── SIGNALS.md       — articles and trends being tracked

THESIS.md is the source of truth. Dwight reads it to prioritize research. Kelly reads it to match the content direction. Ryan reads it to propose articles. Every agent aligns to the same document.

FEEDBACK-LOG.md is the cross-agent correction layer. When Kelly gets told "no em dashes," that applies to Rachel, Ryan, and Pam too. Write it once, every agent reads it. One correction propagates everywhere instead of being repeated four times.

How agents coordinate

No Application Programming Interface calls between agents. No message queues. Just files.

Dwight writes research to intel/DAILY-INTEL.md. Kelly reads it. Rachel reads it. Pam reads it. The coordination is the filesystem.

One agent writes. Other agents read. The handoff is a markdown file on disk.

Never have two agents writing to the same file. Design every shared file with one writer and many readers. This prevents every coordination conflict you'd otherwise have to debug.

Scheduling makes this work. Dwight runs at 8 AM and 4 PM. Kelly and Rachel run at 5 PM. Dwight runs first because everyone depends on his output. Get the order wrong and downstream agents read stale or empty files.
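The whole coordination model fits in a few lines: one writer, many readers, a markdown file as the handoff. A minimal sketch using the intel/DAILY-INTEL.md path from the post (function names are illustrative):

```python
from pathlib import Path

INTEL = Path("intel/DAILY-INTEL.md")

def dwight_writes(findings):
    """Single writer: only the research agent ever touches this file."""
    INTEL.parent.mkdir(parents=True, exist_ok=True)
    INTEL.write_text("# Daily Intel\n\n" + "\n".join(f"- {f}" for f in findings))

def kelly_reads():
    """Readers never write; they just load the latest intel."""
    return INTEL.read_text()
```

Because there is exactly one writer per file, there is nothing to lock and no conflict to debug; scheduling order is the only coordination left.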

Full directory structure

workspace/
├── SOUL.md              # Monica (main agent)
├── IDENTITY.md          # Monica's quick reference
├── AGENTS.md            # Root behavior rules (all agents inherit)
├── USER.md              # User context (shared across all agents)
├── MEMORY.md            # Monica's long-term memory
├── HEARTBEAT.md         # Self-healing checks
├── shared-context/
│   ├── THESIS.md        # Current worldview
│   ├── FEEDBACK-LOG.md  # Cross-agent corrections
│   └── SIGNALS.md       # Trends being tracked
├── intel/
│   ├── DAILY-INTEL.md   # Dwight's output (agents read this)
│   └── data/
├── agents/
│   ├── dwight/
│   │   ├── SOUL.md
│   │   ├── IDENTITY.md
│   │   ├── AGENTS.md
│   │   ├── TARGET-AUDIENCE.md
│   │   ├── RESEARCH-PROTOCOL.md
│   │   ├── HEARTBEAT.md
│   │   └── memory/
│   ├── kelly/
│   │   ├── SOUL.md
│   │   ├── IDENTITY.md
│   │   ├── AGENTS.md
│   │   ├── X-CONTENT-GUIDE.md
│   │   ├── X-ARTICLES-INSTRUCTIONS.md
│   │   ├── X-STRATEGY.md
│   │   ├── DAILY-ASSIGNMENT.md
│   │   └── memory/
│   ├── ross/
│   ├── rachel/
│   ├── pam/
│   ├── ryan/
│   └── chandler/
└── memory/
    ├── user/
    ├── shared/
    └── 2026-02-27.md

Why it works

The files aren't static. They evolve.

Kelly's SOUL.md on day one was a rough sketch. By day forty, it has specific voice examples, a list of rejected patterns she wrote herself, and a "NEVER SUGGEST AGAIN" section covering every topic already published.

Dwight's principles on day one said "find what's trending." By day ten, they said "If the target reader can't do something with it today, skip it." By day twenty, he'd added verification steps: check repo creation dates, check Show HN timestamps, trace discoveries to primary sources.

The shared-context layer didn't exist until day twenty. The same corrections were being repeated to multiple agents. Adding THESIS.md and FEEDBACK-LOG.md meant one correction propagated everywhere.

The model is the same on day one and day forty. It doesn't get smarter from repeated use. The files around it get richer, sharper, more specific. That accumulated context is what's hard to replicate.

How to start

First. Install OpenClaw. Write one SOUL.md, one IDENTITY.md, one USER.md. Pick the most repetitive daily task. Set up one cron job. Let it run.

After a few days. Output will be mediocre. Start giving specific feedback. Make sure feedback lands in a memory file, not just the chat.

Soon after. Create AGENTS.md. Define the session startup routine. Add the memory management rules.

Once patterns emerge. Start MEMORY.md. Review the daily logs. Which corrections keep recurring? Distill them into permanent entries.

When the second agent is ready. Set up file-based coordination: first agent writes to a shared file, second agent reads it. Add role-specific guides as patterns solidify.

When repeating the same correction to multiple agents. Build the shared-context layer. THESIS.md for current thinking. FEEDBACK-LOG.md for cross-agent corrections.

u/Born-Comfortable2868 — 6 days ago
OpenClaw Configuration. Full Guide.

Here's the full OpenClaw configuration.

Your workspace holds your agent's identity, operating instructions, custom skills, memory, and credentials. Once you understand what lives where, you can configure it exactly the way you need it.

Two config layers

OpenClaw has two configuration layers.

The first is your workspace folder, typically ~/clawd/ or wherever you initialized. This holds your personal configuration, skills, memory, and secrets.

The second is the global installation at /opt/homebrew/lib/node_modules/clawdbot/ (or the equivalent path on your system). This contains the core docs, built-in skills, and default behaviors.

Your workspace is where you customize. The global install is what you update.

AGENTS.md: Your operating instructions

This is the most important file in the setup. When OpenClaw starts a session, the first thing it reads is AGENTS.md. It loads it into the system prompt and keeps it there for the entire conversation.

Whatever you write in AGENTS.md, your agent follows.

Tell it to check your calendar before suggesting meeting times, and it will. Tell it never to send emails without your explicit approval, and it will respect that every time.

What belongs in AGENTS.md:

Write:

  • Your core operating directive (what's the agent's job?)
  • Key tools and how to use them (email CLI, task management workflows)
  • Hard rules that should never be broken
  • Links to important resources (API docs, skill locations)
  • Workflow triggers (what should happen automatically)

Don't write:

  • Full documentation you can link to instead
  • Long paragraphs explaining theory
  • Information that changes frequently

Keep AGENTS.md under 300 lines. Files longer than that start eating too much context, and instruction adherence drops.

Here's a minimal but effective example:

# AGENTS.md - Clawdbot Workspace

## Core Operating Directive
My assistant operates as a proactive helper for my business.

**The Mandate:**
- Take as much off my plate as possible
- Be proactive - don't wait to be asked

## Key Tools
- **Email:** Check inbox daily, flag urgent items
- **Calendar:** Verify no conflicts before scheduling
- **Task Manager:** Sync completed work to dashboard

## Hard Rules
- NEVER restart services without explicit permission
- Always verify dates with `date` command before scheduling
- Log all completed work to tracking system

About 20 lines of core directives. Enough for your agent to work without constant clarification.

SOUL.md: Persona and boundaries

SOUL.md defines who your agent is and how it should communicate.

# SOUL.md - Persona & Boundaries

- Keep replies concise and direct.
- Ask clarifying questions when needed.
- Never send streaming/partial replies to external messaging surfaces.

If AGENTS.md is the instruction manual, SOUL.md is the personality profile. Some users write extensive personas here. Others keep it minimal. Both work depending on how much personality customization you want.

IDENTITY.md and USER.md: Who's who

These two files establish the relationship between your agent and you.

IDENTITY.md defines your agent:

# IDENTITY.md - Agent Identity

- Name: [Your Agent Name]
- Creature: [Optional: fun descriptor]
- Vibe: Clever, helpful, dependable, tenacious problem-solver
- Emoji: [Your chosen emoji]

USER.md defines you:

# USER.md - User Profile

- Name: [Your Name]
- Preferred address: [Nickname]
- Timezone: [Your timezone]
- Phone: [Your number]
- Calendar: [Calendar platform and email]

IDENTITY.md rarely changes once set. USER.md updates as your preferences evolve. Keeping them separate makes maintenance easier.

HEARTBEAT.md: Scheduled check-ins

This file controls what your agent does during heartbeat polls, the scheduled moments when the system checks if anything needs attention.

# HEARTBEAT.md

**If the message contains "⚠️ EXECUTION CRON":**
- This is NOT a heartbeat check - it's work to do
- Read the complete instructions in the message
- Execute all required steps
- Never return HEARTBEAT_OK

**Otherwise:**
- If nothing needs attention, return HEARTBEAT_OK
- Keep it minimal

Heartbeats are what make agents proactive. Your agent can check your calendar, scan your inbox, or run any automated workflow on a schedule, all triggered by heartbeat polls.

TOOLS.md: External tool documentation

This file holds your notes about external tools and conventions. It doesn't define which tools exist (OpenClaw handles that internally), but it's where you document how you use them.

# TOOLS.md - User Tool Notes

## Email CLI
- Account: your@email.com
- Common: `cli email search`, `cli calendar create`

## Task Management API
- Token location: ~/.secrets/task-api-token.txt
- Always use `limit=100` to avoid pagination issues
- Status conversion: [document your specific workflow]

## Web scraping tool (when sites block normal requests)
- Location: ~/clawd/.venv/scraper
- Use stealth mode for protected sites

When your agent needs to use a tool, it checks TOOLS.md for your specific conventions.

The skills/ folder: Reusable workflows

Skills are where OpenClaw gets powerful. Each skill is a self-contained workflow your agent can invoke when the task matches the skill's description.

Every skill lives in its own subdirectory with a SKILL.md file:

skills/
├── meeting-prep/
│   └── SKILL.md
├── social-post-writer/
│   ├── SKILL.md
│   └── references/
│       └── templates.md
└── podcast-show-notes/
    └── SKILL.md

A SKILL.md uses YAML frontmatter to describe when to use it:

---
name: meeting-prep
description: Automated pre-call research and briefing. Use when user asks to check for upcoming meetings or run meeting prep.
---

# Meeting Prep Skill

## When to trigger:
- User asks to check for upcoming meetings
- Scheduled to run every 10 minutes via cron
- Only briefs for FIRST-TIME CALLS

## Process:
1. Check calendar for meetings in next 15-45 minutes
2. Research attendees (LinkedIn, company website)
3. Create briefing document
4. Attach link to calendar event

The key difference from static instructions: skills can bundle supporting files alongside them. The references/templates.md file can hold example outputs, proven formats, or additional context that only loads when the skill is active.
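The frontmatter above is just delimited `key: value` lines, so matching a task to a skill can start with a tiny parser. A minimal sketch that handles only flat pairs; OpenClaw's real loader may do more:

```python
def parse_frontmatter(text):
    """Split a SKILL.md into (metadata dict, body).
    Handles only flat 'key: value' pairs between the --- delimiters."""
    if not text.startswith("---"):
        return {}, text
    _, fm, body = text.split("---", 2)
    meta = {}
    for line in fm.strip().splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            meta[key.strip()] = value.strip()
    return meta, body
```

The agent can then scan each skill's `description` field against the task at hand and load the matching skill's body plus its references/ files.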

The memory/ folder: Persistent context

This is your agent's long-term memory. Session transcripts, learned preferences, and daily logs all live here.

memory/
├── 2026-03-22.md
├── 2026-03-21.md
├── email-style.md
├── business-context-2026-02.md
└── workflow-playbook.md

Good practices:

  • Keep daily logs at memory/YYYY-MM-DD.md
  • On session start, your agent reads today and yesterday if present
  • Capture durable facts, preferences, and decisions
  • Avoid storing secrets

Without memory files, every conversation starts from zero.
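The session-start read described above (today plus yesterday, if present) can be sketched like this. The layout follows the memory/ tree earlier; the function name is illustrative:

```python
from datetime import date, timedelta
from pathlib import Path

def recent_logs(memory_dir):
    """Return the contents of today's and yesterday's
    daily logs (memory/YYYY-MM-DD.md), when present."""
    logs = []
    for day in (date.today(), date.today() - timedelta(days=1)):
        p = Path(memory_dir) / f"{day.isoformat()}.md"
        if p.exists():
            logs.append(p.read_text())
    return logs
```

Missing files are simply skipped, so a fresh workspace starts with an empty list instead of an error.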

The .secrets/ folder: Credentials

API tokens, passwords, and sensitive configuration live here. This folder should be gitignored and never committed.

.secrets/
├── task-api-token.txt
├── openai-api-key.txt
├── anthropic-api-key.txt
└── service-credentials.txt

Your AGENTS.md or TOOLS.md can reference these paths without exposing the actual values.
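One way to keep that reference-not-value pattern honest is to load credentials only at call time, so the secret itself never appears in any committed file. A minimal sketch; the filename is illustrative:

```python
from pathlib import Path

def read_secret(path):
    """Read a credential from .secrets/ at call time,
    so the value never lands in AGENTS.md or TOOLS.md."""
    return Path(path).read_text().strip()
```

AGENTS.md then only needs the path string (for example `~/.clawd/.secrets/task-api-token.txt`), never the token.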

Channel configuration: Where you talk to your agent

OpenClaw supports multiple messaging channels: Discord, Telegram, Signal, WhatsApp, and more. Channel config lives in your gateway configuration, managed via clawdbot gateway commands.

Key concepts:

  • Each channel has its own session behavior
  • You can have multiple channels active simultaneously
  • Channel-specific rules (like "don't respond unless mentioned in Discord") go in AGENTS.md

The full picture

Here's how everything comes together:

~/clawd/
├── AGENTS.md           # Operating instructions (daily use)
├── SOUL.md             # Persona and boundaries
├── IDENTITY.md         # Agent identity
├── USER.md             # Your profile
├── HEARTBEAT.md        # Scheduled check-in behavior
├── TOOLS.md            # External tool documentation
├── BOOTSTRAP.md        # First-run ritual (delete after)
│
├── skills/             # Custom workflows
│   ├── meeting-prep/
│   ├── podcast-show-notes/
│   └── social-post-writer/
│
├── memory/             # Persistent context
│   ├── 2026-03-22.md
│   └── key-context.md
│
├── .secrets/           # API keys and credentials (gitignored)
│   ├── task-api-token.txt
│   └── anthropic-api-key.txt
│
└── output/             # Generated files and exports

A practical setup to get started

Step 1. Run openclaw onboard and complete the initial setup. Choose your model provider and connect at least one channel.

Step 2. Create your workspace folder and add AGENTS.md with your core directives. Start with 10-20 lines covering your main use cases.

Step 3. Add IDENTITY.md, USER.md, and SOUL.md. These establish the relationship and personality.

Step 4. Create your first skill for a workflow you do repeatedly. Meeting prep, email drafts, or content creation are good starting points.

Step 5. Set up memory/ with a daily log template. Your agent will start building context over time.

Step 6. Add TOOLS.md as you discover tool-specific patterns worth documenting.

That covers 95% of use cases. Advanced features like cron jobs, multi-agent setups, and custom channel plugins come in when you have specific workflows worth automating.

The key insight

The OpenClaw workspace is a protocol for telling your agent who you are, what you need done, and what rules to follow. The more clearly you define that, the less time you spend correcting it.

AGENTS.md is your highest-priority file. Get that right first. Everything else is optimization.

u/Born-Comfortable2868 — 11 days ago