u/Veronildo

Every skill needed to build, Connect App Store & Publish a Mobile app (Full Pipeline)

I have been building mobile apps & here is my skills pipeline to go from a blank screen to a fully published iOS app.

I add these skills to my repo so the claude code session actually knows what it's doing & picks up the right skills when needed.

scaffold

vibecode-cli handles the entire project setup from the first prompt. expo config, directory structure, base dependencies, environment wiring. claude code picks it up automatically and the session just knows the patterns. without it there's a chunk of every build that's just setup work that shouldn't need to be figured out fresh each time.

ui and design

once the scaffold is in place and screens are being built, the frontend design skill kicks in. this is to stop the app from looking like a default expo template with a different hex code. actual design decisions come into the session: spacing, layout, component hierarchy, color usage.

backend

when it's time to wire up data, supabase-mcp handles it. auth setup, table structure, row-level security, edge functions, all without touching the supabase dashboard or looking up syntax mid-build. it's just there when the session needs it.
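
for reference, a project-level mcp config wiring up the supabase server looks roughly like this. a sketch: the package name, flag, and env var are from the supabase-mcp readme as i remember it, so double-check before using.

```json
{
  "mcpServers": {
    "supabase": {
      "command": "npx",
      "args": ["-y", "@supabase/mcp-server-supabase@latest", "--read-only"],
      "env": {
        "SUPABASE_ACCESS_TOKEN": "<personal-access-token>"
      }
    }
  }
}
```

drop --read-only once you trust the session to write migrations; starting read-only is the safer default.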

store metadata

once the app is feature-complete, the aso optimisation skill handles the metadata layer. title, subtitle, keyword field, short description, all written with actual character limits and discoverability logic baked in. doing this from memory or instinct means leaving visibility on the table. every character in the metadata is either working for you or it isn't.

submission prep

before anything goes to testflight, the app store preflight checklist skill runs through the full validation pass. device-specific issues, expo-go testing flows, the things that don't show up in a simulator but will absolutely show up in review. a rejection costs a few days minimum. catching it before submission is always the better trade.

the submission itself

once preflight is clean, the app store connect command line interface skill handles the submission. version management, testflight distribution, metadata uploads, all from inside the session. no tab switching into app store connect, no manually triggering builds through the dashboard. the submission phase stays inside claude code from start to finish.

u/Veronildo — 6 hours ago
▲ 19 r/AskClaw

10 Learnings from Running OpenClaw for a Month

Here are my 10 learnings from running OpenClaw for a month.

The single biggest mistake I see is running one agent for everything. It loses context. It underperforms. You blame the tool when the real problem is the setup.

Split by role. I have one agent for sales, one for family logistics, one for podcast prep, one for course operations. Same logic as Slack channels. One channel for everything is unusable.

The three things that actually make an agent work: soul, heartbeat, jobs.

Soul is a Markdown file that defines who the agent is and how it operates. Heartbeat checks in every 30 minutes to see what needs doing. Jobs are scheduled tasks that run on a fixed cadence. Get all three right and the agent stops feeling like a chatbot you have to babysit.
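
A rough sketch of the shape (file names and headers here are illustrative, not OpenClaw's exact schema):

```markdown
# SOUL.md - who the agent is and how it operates
You are my sales agent. You handle CRM hygiene and outreach drafts.
Tone: direct, no filler. Never send anything without my approval.

# jobs/signup-sweep.md - a scheduled task, declared with a schedule header
---
name: signup-sweep
description: Sweep the CRM for new signups and draft outreach
schedule: "daily 9am"
---
Pull new signups, identify decision-makers, draft (don't send) outreach.
```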

Install OpenClaw on a separate machine, not your main computer. An old laptop works. A Mac Mini runs $500 to $600. If you'd rather skip the hardware entirely, run it on Hetzner or a cloud service that puts OpenClaw on a dedicated cloud instance, isolated from your personal machine. Connect Telegram, WhatsApp, or Discord from a single dashboard. No Docker, no server provisioning, no configuration files.

My sales agent runs every day. It sweeps the CRM for new signups, finds the decision-makers, sends personalized outreach, and flags international deals. That's 10 hours a week I was paying a contractor for.

Stop trying to write perfect prompts. Use Telegram voice notes instead. Just talk about what you need. The agent figures it out and asks questions when it needs more. It's faster and the output is usually better than anything I'd type.

Browser use breaks. A lot. Always check if there's an API available before you go anywhere near browser automation. If there's no API, browser use might work, but expect failures. Sometimes the right call is solving the underlying problem differently.

Screen sharing fixed the hardware problem for me. Enable it in Mac Mini settings and you control the machine from your laptop on the same network. Enable remote login and you can get into the terminal over SSH. No extra monitor, no extra keyboard.

Start agents with a narrow scope. I let them listen on Telegram only, not email, until I know how they behave. OpenClaw has prompt injection hardening, but progressive trust is still the right approach. Same as onboarding anyone new.

u/Veronildo — 1 day ago
Top 6 Claude Code Plugins Worth Installing

Here's a plugin ecosystem covering architecture, parallel execution, code review, security monitoring, and frontend design. I tested dozens and landed on 6 that actually change how the work gets done. Each installs in under a minute.

Part 1: Planning and architecture

1. feature-dev (89,000+ installs)

Instead of jumping straight to code, feature-dev runs Claude through a 7-phase workflow:

  • Phase 1: Discovery - what do you actually need?
  • Phase 2: Codebase exploration - what's already in the project?
  • Phase 3: Clarifying questions - what's unclear?
  • Phase 4: Architecture design - how to build this correctly?
  • Phase 5: Implementation - now we write code
  • Phase 6: Quality review - did we break anything?
  • Phase 7: Summary - what changed and why

Under the hood it runs three specialized agents: code-explorer traces execution paths and maps your architecture, code-architect proposes multiple approaches with trade-offs, code-reviewer catches bugs and convention violations with confidence scoring.

/plugin install feature-dev@claude-plugins-official
/feature-dev Add user authentication with OAuth

Without this plugin, Claude guesses your architecture from one sentence. With it, Claude asks the right questions first.

Part 2: Code review and quality

2. code-review (5 parallel agents)

Run /code-review on a pull request branch and Claude spins up 5 independent agents simultaneously. Each reviews your changes from its own angle:

  • Agent 1: CLAUDE.md compliance - are you following your own project rules?
  • Agent 2: Bug hunting - logic errors, edge cases, race conditions
  • Agent 3: Git history context - does this conflict with recent commits?
  • Agent 4: Previous pull request comments - were past review notes addressed?
  • Agent 5: Comment verification - do code comments match what the code actually does?

Each finding gets a confidence score from 0 to 100. Only issues above 80 are shown, which cuts false positives. On large pull requests (1,000+ lines), 84% get findings, averaging 7.5 real issues. Less than 1% of findings are marked incorrect by engineers. Anthropic uses this internally on almost every pull request.

/plugin install code-review@claude-plugins-official
/code-review
/code-review --comment   # posts findings directly to your GitHub PR

3. agent-peer-review (Claude vs Codex)

When Claude is about to show you an implementation plan, architectural decision, or code review, this plugin automatically sends it to OpenAI Codex for verification. Two models compare findings and classify them:

  • Agreement - both found the same thing. High confidence.
  • Disagreement - one found what the other missed. Worth investigating.
  • Complement - found different things. Both correct.

If they can't agree, there's a 2-round discussion protocol. If still no agreement, the plugin escalates to Perplexity or web search for external evidence.

/plugin marketplace add jcputney/agent-peer-review
/plugin install codex-peer-review
/codex-peer-review
/codex-peer-review "Should we use microservices or monolith for this project?"

Part 3: Parallel execution

4. /batch (5-30 parallel agents)

You describe a change across the entire codebase in one sentence. Claude breaks it into 5-30 independent units, spins up one agent per unit, each in its own isolated git worktree, and they all work in parallel.

/batch migrate from React to Vue
/batch replace all uses of lodash with native equivalents
/batch add type annotations to all untyped functions

What happens under the hood:

  • Phase 1, Discovery: Claude scans the entire codebase, finds every file and pattern the change touches.
  • Phase 2, Execution: One agent per unit, all working simultaneously. Each gets its own branch and working directory, zero merge conflicts. After implementation, each agent runs tests, runs /simplify to clean its code, commits, pushes, and opens a pull request.
  • Phase 3, Tracking: Status table updating in real time. Final result: "22/24 units landed as PRs."

22 pull requests from one command.

5. ralph-loop (autonomous iteration)

An autonomous loop. You give Claude a task with clear success criteria and it works on it repeatedly, fixing its own mistakes, running tests, and iterating until it either succeeds or hits the iteration limit.

/plugin install ralph-loop@claude-plugins-official

/ralph-loop "Build a REST API for todos. When complete:
- All CRUD endpoints working
- Input validation in place
- Tests passing (coverage > 80%)
- README with API docs
Output: <promise>COMPLETE</promise>" --max-iterations 20

How the loop works: Claude works on the task, tries to end the session, a stop hook blocks the exit, the same prompt is fed back with Claude's previous work visible, and the cycle repeats until COMPLETE or max iterations.
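
Mechanically, the exit condition boils down to a string check on the session output, something like this (illustrative, not the plugin's actual code):

```shell
transcript="session-output.txt"          # illustrative path
printf 'tests pass\n<promise>COMPLETE</promise>\n' > "$transcript"

# The stop hook lets the session end only once the promise string appears
if grep -q '<promise>COMPLETE</promise>' "$transcript"; then
  status=complete
else
  status=iterate                         # feed the same prompt back in
fi
echo "$status"
```

This is why the success criteria in your prompt matter so much: the loop only stops on that literal promise string or the iteration cap.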

You can run it before bed:

#!/bin/bash
cd /path/to/project1
claude -p "/ralph-loop 'Task 1...' --max-iterations 50"

cd /path/to/project2
claude -p "/ralph-loop 'Task 2...' --max-iterations 50"

Real results: Y Combinator hackathon teams shipped 6+ repos overnight. Geoffrey Huntley ran a 3-month loop that built a full programming language.

Part 4: Security and quality

6. security-guidance

A hook that monitors 9 security patterns every time Claude touches a file: command injection, cross-site scripting vulnerabilities, use of eval, unsafe HTML, pickle deserialization, os.system calls, and three more.

This is a PreToolUse hook: it fires deterministically, every time, before Claude can edit a file. Instructions from CLAUDE.md are ignored roughly 20% of the time. Hooks work 100% of the time. Install it and it's always on.

/plugin install security-guidance@claude-plugins-official
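
For comparison, this is roughly what a hand-rolled PreToolUse hook looks like in Claude Code's settings.json (the matcher and command here are illustrative; security-guidance ships its own):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "./scripts/scan-for-eval.sh" }
        ]
      }
    ]
  }
}
```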

Where to start

Don't install everything at once. Match to your actual pain:

Spending too long planning features?          → feature-dev
Pull requests merging without proper review?  → code-review
Large migrations eating weeks?                → /batch
Want to ship while you sleep?                 → ralph-loop
Security gaps in AI-generated code?           → security-guidance
Your interface looks like every other AI app? → frontend-design

The Claude Code plugin ecosystem has 9,600+ repositories and 2,300+ skills as of April 2026.

u/Veronildo — 2 days ago
Fixed my ASO changes & went from Invisible to Getting Downloads.
▲ 4 r/vibecoding+1 crossposts

here's what i changed. my progress & downloads became visible after 2 months. it didn't change overnight after making the changes.

i put the actual keyword in the title

my original title was just the app name. clean, brandable, completely useless to the algorithm. apple weights the title higher than any other metadata field and i was using it for branding instead of ranking.

i changed it to App Name - Primary Keyword. the keyword after the dash is the exact phrase users type when searching for an app like mine. 30 characters total. once i made that change, rankings moved within two weeks.
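
apple's limits are easy to check before pasting anything into app store connect: 30 characters for title and subtitle, 100 for the keyword field. a quick shell sketch (example values, not my real metadata):

```shell
title="App Name - Primary Keyword"      # example values
subtitle="habit tracker with streaks"
keywords="goal,focus,routine,daily planner"

check() {                               # usage: check <label> <value> <limit>
  if [ ${#2} -gt "$3" ]; then
    echo "$1 over limit: ${#2}/$3 chars"
  else
    echo "$1 ok: ${#2}/$3 chars"
  fi
}
check title    "$title"    30
check subtitle "$subtitle" 30
check keywords "$keywords" 100
```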

i stopped wasting the subtitle

i had a feature description in the subtitle. something like "the fastest way to do X." no one searches for that. i rewrote it with my second and third priority keywords in natural language. the subtitle is the second most indexed field; treating it like ad copy instead of a keyword field was costing me rankings.

i audited the keyword field properly

100 characters. i'd been repeating words already in my title and subtitle, which does nothing: apple already indexes those. i stripped every duplicate and filled the field with unique terms only.

the research method that actually worked: app store autocomplete. type your core category into the search bar and read the suggestions. those are real searches from real users. i found terms i hadn't considered and added the ones not already covered in my title and subtitle.
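
the duplicate audit itself can be scripted. a bash sketch with example values, dropping any keyword term apple already indexes from the title or subtitle:

```shell
title="App Name - Habit Tracker"        # example metadata, not my real app
subtitle="daily routine and streak planner"
keywords="habit,tracker,streak,goal,planner,routine,focus"

# words apple already indexes from the title and subtitle, lowercased
indexed=$(printf '%s %s' "$title" "$subtitle" | tr 'A-Z' 'a-z' | tr -cs 'a-z' '\n' | sort -u)

# keep only keyword terms not already covered
unique=$(printf '%s' "$keywords" | tr ',' '\n' | grep -Fvxf <(printf '%s\n' "$indexed") | paste -s -d, -)
echo "$unique"    # goal,focus
```

note this matches whole words only; multi-word keyword phrases would need a smarter comparison.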

i redesigned screenshot one

i had a ui screenshot first. looked fine, showed the app, converted nobody. users see the first two screenshots in search results before they tap. it's the first impression before they've read a word.

i redesigned it to show the result state, what the user's situation looks like after using the app, with a single outcome headline overlaid. one idea, one frame, immediately obvious. conversion improved noticeably within the first week.

i moved the review prompt

my rating was sitting at 3.9. i had a prompt firing after 5 sessions. session count tells you nothing about whether the user is happy right now.

i moved it to trigger after the user completed a specific positive action — the moment they'd just gotten value. rating went from 3.9 to 4.6 over about 90 days. apple factors ratings into ranking, so that lift improved everything else downstream.

i stopped doing it manually

the reason i'd never iterated on aso before was the friction. updating screenshots across every device size, touching metadata, resubmitting builds. it was tedious enough to avoid.

i set up fastlane. it's open source, free, and handles screenshot generation across device sizes and locales, metadata updates, submission, provisioning profiles, and build pushes. once your lanes are configured, the whole update cycle is a single command.

for submission and build management i switched to asc cli, an open-source tool that drives app store connect from the terminal, no web interface. builds, testflight, metadata, all handled without leaving the command line.

The app was built with VibecodeApp, which scaffolds the expo project with localization and build config already set up. aso iteration baked in from day one.

what i'd do first if starting over

  1. move the primary keyword into the title
  2. rewrite the subtitle with keyword intent, not feature copy
  3. audit the keyword field, strip duplicates, fill with unique terms
  4. redesign screenshot one as a conversion asset
  5. fix the review prompt trigger
  6. set up fastlane so iteration isn't painful
u/Veronildo — 2 days ago
Long-term memory layer for OpenClaw and Claude Code
🔥 Hot ▲ 58 r/clawdbot+1 crossposts

Long-term memory layer for OpenClaw & MoltBook agents that learns and recalls your project context automatically.

GitHub

u/Veronildo — 3 days ago
3 OpenClaw Workflows worth Installing.

Here are three I'd set up first: a daily calendar check, a message summary for Telegram, Discord & other messaging apps, and a weekly review of my work.

Each one is a SKILL.md file with a schedule header. Copy, save.

Warning:

Install OpenClaw on a separate machine, not your main computer. An old laptop works.

here are your options:

Mac mini: Costs $500 to $600. Runs quietly, stays on, and works well as a persistent local host.

Hetzner (virtual private server): Cheap, reliable, and easy to provision. Good option if you want infrastructure you control without buying hardware.

Cloud hosted openclaw: Puts OpenClaw on a dedicated cloud instance. Connect Telegram, WhatsApp, or Discord from one dashboard, pick your model, and the agent is running in under 60 seconds. No Docker, no provisioning, no config files.

Workflow 1: Check Calendar

Runs twice daily at 8am and 6pm. Pulls the next 48 hours of your calendar, flags overlaps and back-to-back blocks, and generates one prep action per meeting based on type. Ends with an energy forecast for the day ahead.

---
name: check-calendar
description: 48-hour calendar scan with conflict detection and prep recommendations
schedule: "daily 8am, daily 6pm"
---


You are a calendar intelligence assistant. Your job is to make sure I'm never blindsided by what's on my schedule.

Tools available (Google Workspace CLI):
-> To pull my calendar events, run: gws calendar events.list --params '{"calendarId": "primary", "timeMin": "NOW_TIMESTAMP", "timeMax": "48_HOURS_LATER_TIMESTAMP", "singleEvents": true, "orderBy": "startTime"}'
  (Replace the timestamps with actual dates in format like "2026-03-23T08:00:00Z")

When triggered (8am and 6pm daily):
-> Pull my calendar for the next 48 hours
-> List every event with: time, title, attendees, location (virtual or physical), and duration
-> Mark each event by how important it is:
  - Red: high-stakes (meetings with people outside your company, presentations, leadership meetings)
  - Yellow: needs prep (one-on-ones, client calls, meetings with agendas)
  - Green: routine (recurring team check-ins, standups, optional meetings)
-> Flag issues:
  - Overlapping meetings (double-booked)
  - Back-to-back meetings with zero break time (recommend which one to join 5 min late)
  - Location changes requiring travel time between meetings
  - Meetings longer than 90 minutes (energy drain warning)
  - Days with more than 5 hours of total meeting time
-> For each meeting, suggest one prep action based on type:
  - One-on-one with someone on your team: "Review their last update or recent work"
  - Meeting with someone outside your company: "Quick look at their company and recent news"
  - Presentation: "Make sure slides are ready and test screen share"
  - Recurring team meeting: "Check if there's an agenda - if not, ask for one"
-> End with a one-line energy forecast: "Tomorrow is heavy (6 hours of meetings) - protect your evening" or "Light day tomorrow - good time for deep work"
-> When calendar is clear, report "open road" status

Rules:
-> NEVER modify, cancel, or create calendar events
-> Read-only access only
-> If a meeting has no agenda or description, flag it rather than guessing the topic
-> Always treat meetings with people outside the company as high priority - suggest research and talking points for those
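
The NOW_TIMESTAMP and 48_HOURS_LATER_TIMESTAMP placeholders in the gws call above can be generated rather than typed. A sketch using GNU date (on macOS, swap -d '+48 hours' for -v+48H):

```shell
# UTC timestamps in the RFC 3339 shape the calendar API expects
now=$(date -u +%Y-%m-%dT%H:%M:%SZ)
later=$(date -u -d '+48 hours' +%Y-%m-%dT%H:%M:%SZ)
echo "$now $later"
```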

Workflow 2: Check Messages

Runs four times daily at 9am, 12pm, 3pm, and 6pm. Aggregates unread messages across every connected platform (Slack, Discord, Telegram, WhatsApp, iMessage) and delivers one prioritized summary: urgent, important, informational, skippable. One update instead of six apps.

---
name: check-messages
description: Consolidates unread messages across all platforms into one prioritized summary
schedule: "daily 9am, daily 12pm, daily 3pm, daily 6pm"
---


You are a message consolidation assistant. Your job is to scan all my communication channels and deliver one prioritized summary so I don't have to check six different apps.

Tools available:
-> OpenClaw's built-in messaging skills (no extra setup needed). OpenClaw already connects to Slack, Discord, Telegram, WhatsApp, and other platforms you've linked. Use whichever messaging skills are available to check each platform for unread messages.

When triggered (4x daily or on demand):
-> Scan unread messages across all connected channels: Slack, Discord, Telegram, WhatsApp, iMessage, and any other active platforms
-> Only check platforms you have skills for
-> For each unread message, assess priority:
  - URGENT (red): Someone is waiting on me right now, time-sensitive ("can you join in 10 min?"), from my boss or key clients or family, or contains words like "urgent"/"ASAP"/"help"
  - IMPORTANT (yellow): Active conversations I'm part of, someone assigned me something, I was tagged/mentioned, or it's from someone I work with closely
  - FYI (green): Announcements, group chat chatter, news and updates I should know about but don't need to reply to
  - SKIP (gray): Automated notifications, channels I've muted, old messages in fast-moving chats
-> Deliver a single summary organized by priority level
-> For each message include: sender, platform, one-line preview, and suggested action ("reply", "snooze", "mark read", "open in app")
-> End with a count: "X urgent, X important, X FYI, X skipped"
-> If 200+ unreads, summarize rather than list everything

Rules:
-> NEVER reply to any message on my behalf
-> NEVER mark messages as read
-> Respect do-not-disturb settings - if I've muted a channel, skip it unless something is flagged urgent
-> If you're unsure about priority, round up (flag as important rather than FYI)
-> Keep previews to one line - I'll read the full message myself if needed
-> Learn which channels matter vs. noise over time

Workflow 3: Review Week

Runs every Friday at 5pm. Pulls the week's calendar events, completed tasks, and any goals you're tracking. Outputs a markdown summary covering highlights, decisions made, blockers, next week's priorities, and one pattern observation. Saves as a dated file so quarterly reviews stop being a reconstruction exercise.

---
name: review-week
description: Friday weekly review pulling calendar and tasks into a searchable markdown summary
schedule: "friday 5pm"
---


You are a weekly review assistant. Your job is to create a useful summary of my week that I can reference later for planning, reviews, and reflection.

Tools available (Google Workspace CLI):
-> To pull my calendar events, run: gws calendar events.list --params '{"calendarId": "primary", "timeMin": "MONDAY_START", "timeMax": "FRIDAY_END", "singleEvents": true, "orderBy": "startTime"}'
  (Replace MONDAY_START and FRIDAY_END with actual dates in format like "2026-03-16T00:00:00Z")

When triggered (every Friday at 5pm):
-> Pull this week's data:
  - Calendar: pull all meetings from Monday through Friday (who attended, what the topic was), broken down by type (1:1s, group syncs, external)
  - Tasks: anything marked complete this week
  - Goals: progress against any goals I'm tracking
-> Generate a weekly summary with these sections:
  - HIGHLIGHTS: 3-5 most meaningful things accomplished or progressed
  - DECISIONS MADE: any big decisions from this week (pull these from meeting notes and completed tasks)
  - BLOCKERS: anything that slowed me down or is still unresolved
  - NEXT WEEK PREVIEW: top 3 priorities and any key meetings
  - PATTERNS: one observation about how I spent my time ("You had 22 hours of meetings this week - 6 more than last week")
-> Save as a dated markdown file (week-of-YYYY-MM-DD.md) to ~/Documents/reviews/ or user-specified location

Output format: clean, simple layout with headers. Focus on what actually happened and what decisions were made - not just "I had 12 meetings." Should take under 2 minutes to read and fit on one page.

Rules:
-> Read-only access to calendar and tasks
-> NEVER modify any tasks, events, or goals
-> If data is sparse (light week, few tasks completed), note it honestly without judgment
-> Keep the summary factual - observations, not opinions about productivity
-> Each weekly review should be standalone - readable without context from previous weeks
-> Look at past weekly reviews when available to spot patterns (like "meetings have gone up 3 weeks in a row")
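
The dated filename in the save step can be derived from the current date. A sketch (GNU date assumed; run on the Friday itself, 'last monday' resolves to the start of the current week):

```shell
reviews="$HOME/Documents/reviews"
mkdir -p "$reviews"
monday=$(date -d 'last monday' +%F)     # start of the current week when run on Friday
file="$reviews/week-of-$monday.md"
echo "$file"
```
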
u/Veronildo — 3 days ago

An Intermediate's Guide to Claude

If you already know how Claude works at a basic level, here's where the real leverage is. These are the features and techniques I actually use for production work.

Section I: Engineering Elite Claude Outputs

The standard prompting formula (set context, define task, specify rules) covers 90% of daily use. For the other 10%, where you need Claude to do something precise, here are five techniques worth building into your workflow.

1. Structured Prompting with XML Tags

Claude was trained on XML tags, so it reads tagged inputs more precisely than plain text. Instead of writing a wall of prose, you organize your prompt like this:

<role>You are an expert marketing strategist</role>
<context>I run a B2B SaaS company targeting HR managers</context>

Use tags for any prompt with more than two distinct components.
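
A fuller version covering the whole context/task/rules formula (the tag names are just conventions, not a fixed schema):

```xml
<role>You are an expert marketing strategist</role>
<context>I run a B2B SaaS company targeting HR managers</context>
<task>Draft a 90-day content plan for LinkedIn</task>
<rules>
  - No jargon
  - Each post maps to one pain point from the context
</rules>
```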

2. Reverse Prompting

Instead of figuring out the right prompt yourself, ask Claude to question you until it has what it needs.

Example: "I want to build a personalized content strategy for my business. Ask me 10 to 20 questions to gather all the context you need before you start."

Use this when you're stuck or when the output domain is complex enough that you can't fully spec it upfront.

3. Deep Thinking Triggers

Claude has extended reasoning that most people never activate. By default it responds fast, which is fine for simple tasks. For strategy, analysis, or multi-step reasoning, trigger it explicitly:

  • "Think deeply before responding."
  • "Take your time and reason through this step by step."

You can also enable Extended Thinking directly in the chat interface.

4. Chain Prompting

Break one large task into a sequence of smaller connected prompts instead of dumping everything at once. Overloading the context window is the main cause of hallucinations and low-quality output.

Example:

  • Prompt 1: "Analyze the biggest challenges facing [industry]."
  • Prompt 2: "Based on those challenges, identify the top three opportunities."
  • Prompt 3: "Build a 90-day action plan to capitalize on opportunity one."

5. Feedback Looping

Treat every response as a first draft. Iterate at least three times with specific direction:

  • "This is good, but the tone is too formal. Rewrite it to sound more conversational."
  • "The third point is weak. Expand it with a concrete example."
  • "Cut this by 30% without losing the core argument."

Vague feedback gets vague revisions. Be specific about what needs to change.

Section II: Project Management

Writing Project Instructions

Most Claude users leave Project Instructions blank or write two sentences. That's the biggest missed opportunity in the product.

Project Instructions are a permanent system prompt that loads before every conversation in that Project. A strong one includes:

  • Your role and what the Project is for
  • The audience or context Claude should keep in mind
  • Formatting preferences
  • Hard rules (things Claude should never do in this Project)

Spending 20 to 30 minutes writing solid instructions upfront saves 5 to 10 minutes every time you start a new conversation.
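
A skeleton that covers those four pieces (illustrative; adapt the headings to taste):

```markdown
## Role
You are my content editor. This Project is for newsletter drafts.

## Audience
Mid-level engineers; assume technical fluency, skip the basics.

## Formatting
Short paragraphs, no bullet-point walls, under 800 words.

## Hard rules
Never invent statistics. Never change my stated opinions.
```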

Uploading Files

Every file in a Project loads into the context window for every conversation, whether it's relevant or not. Upload only what's useful across multiple conversations: brand voice docs, style guides, reference articles, standard operating procedures.

If you need a file for a single conversation, upload it there instead of to the Project.

Projects and Skills Together

A Project holds the overarching environment (context, files, instructions). A Skill holds the workflow for specific tasks within that environment. Build Skills tied to specific Projects.

Section III: Claude Skills

Claude Skills are pre-loaded instruction sets saved as Markdown files. Call a Skill in any chat or Project and Claude follows its instructions immediately.

How to build one

Go to Customize → Skills and enable the Skill-Creator. Then tell Claude: "I want to build a Skill for [workflow], help me build it."

Claude walks you through it and outputs a finished Skill file in under 10 minutes.

Tips for building useful Skills

  • Include at least one real example of great output
  • Use reverse prompting to build them (see Section I)
  • Use your existing chats as raw material. If you've been working with Claude on a task for months, that conversation has everything the Skill needs

What dropped in Skills 2.0

  • Built-in evals let you test a Skill against real prompts before deploying it
  • A/B testing lets you compare two versions of a Skill to see which performs better
  • Trigger optimization automatically rewrites your Skill description until it loads reliably

Section IV: Claude Cowork

Cowork takes Claude out of the chat and gives it access to your files, the ability to execute tasks autonomously, and the ability to run in the background.

It's only available in the Claude desktop app.

1. Scheduled and recurring tasks

You can schedule tasks to run on a set cadence: a daily research brief every morning, a weekly performance summary every Friday, a morning brief that scans Slack, Notion, and your calendar.

2. Dispatch

Dispatch lets you text Cowork from your phone to complete tasks on your behalf. It runs like a lightweight agent.

3. Projects

Cowork has its own Projects, same concept as in chat. Organize folders and ongoing tasks by creating one.

4. Customization

Inside Cowork you can configure:

  • Plug-ins (full "roles" in one place, a step above Skills)
  • Connectors (linking external apps to Claude)
  • Skills

5. File access

Cowork can read from your desktop folders directly. I keep a dedicated "Cowork" folder where I put anything I want it to access across chats and Projects.

Cowork works best when the underlying setup is solid: clear Project instructions, disciplined prompting, and well-built Skills.

Section V: Other Tools

Artifacts

Artifacts are standalone outputs generated inside a conversation: code files, HTML pages, React components, documents, diagrams, spreadsheets. They're enabled by default. Prompt Claude to "create this as an Artifact" or "build this as a downloadable file" and it produces a rendered, editable output instead of a block of text.

Claude in Chrome

A browser extension that lets Claude read the page you're on and take action without switching tabs or copying and pasting.

Memory Management

Claude builds memories about you across conversations. Most people never look at what's actually stored.

Go to Settings → Memory to see everything Claude currently remembers, delete anything inaccurate, and add context you want it to carry permanently. You can also tell Claude directly mid-conversation: "Remember that I always want responses under 500 words."

One useful move: if you're switching from another model like ChatGPT, ask it to generate a memory export document and upload that to Claude. Same method works for transferring Project context.

u/Veronildo — 4 days ago
Notes on OpenClaw Security. Don't miss this
▲ 20 r/clawdbot+1 crossposts

the biggest hesitation i hear from people thinking about OpenClaw is security. "will it steal my credit cards, delete my files, and run off with my spouse?"

probably not. but there are real things you should understand before handing it the keys to your digital life.

security isn't a setup step, it's an ongoing habit

i keep a scheduled reminder in my agents to run two commands regularly:

openclaw update
openclaw security audit

the first keeps you on the latest hardened version. the second surfaces gaps between what your setup is doing and what the docs actually recommend. takes five minutes. worth doing every few weeks.

your OpenClaw is a personal agent, not a group chat bot

i've put mine in a shared channel with a trusted business partner. that works because i made that call deliberately. but if you drop it into a random group chat, anyone in that chat can instruct it. that's not a bug, it's just how it works. treat it like a private tool by default.

the outside world can talk to it too

if your OpenClaw reads email, browses websites, or pulls in public content, it's exposed to prompt injection. a sketchy website it visits during a search could contain instructions telling it to share your API keys. that's a real threat vector. the framework does a lot to harden against this, but reinforcing those rules in its SOUL file is still a good idea.

it has real access to your computer

it can run commands, edit files, install software, and reach the internet. it shouldn't do anything harmful. but "shouldn't" and "can't" are different things. be explicit in your SOUL and TOOLS files about exactly how it's allowed to communicate with the outside world, especially if you've given it an email account or a public API like Gmail or Twilio.

if you'd rather not self-host at all, StartClaw is a managed hosting option for OpenClaw that handles the infrastructure side and keeps you on the latest version, which shrinks the attack surface you'd otherwise have to manage yourself. worth looking at if the setup overhead is what's been holding you back.

store secrets carefully

to use tools, you'll be storing API keys. the simplest approach is putting them in .openclaw/.env. that's the intended pattern.
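as a sketch of what reading those keys looks like, here's a minimal .env parser in Python. the path and key names are placeholders, and OpenClaw loads this file itself; this is only to make the pattern concrete:

```python
import os

def load_env(path=".openclaw/.env"):
    """Parse KEY=VALUE lines from a .env file into os.environ,
    without overwriting variables that are already set."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines, comments, and anything malformed.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip('"'))
```

whatever loads it, keep the file out of version control and restrict its permissions (`chmod 600`) so the keys stay local.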

be selective about skills

i only install skills from the official OpenClaw bundle or from developers i know personally. community skills at clawhub.com exist and some are worth exploring, but read the SKILL.md before running anything you found online. unknown code with agent-level permissions is a real risk.

think through worst-case scenarios before you connect things

your calendar has your physical location. your email has your finances. if you've connected family calendars, your OpenClaw might know your kids' school schedule. in a worst-case scenario, that's all information a bad actor could exploit. i'm not saying don't connect thingsi've connected a lot. i'm saying make that choice deliberately, not by default.

in my experience, OpenClaw isn't inherently less secure than other systems. people are just more willing to give it access without thinking through what they're actually handing over. start small, build trust incrementally, and treat security as something you revisit, not something you set once and forget.

u/Veronildo — 5 days ago
🔥 Hot ▲ 66 r/clawdbot

12 OpenClaw Power User Tips That Actually Work

These tips turn OpenClaw into a system that runs workflows 24/7 while staying efficient with tokens.

1. Split Your Conversations Into Threads

This fixes most memory problems. One long conversation means OpenClaw is pulling in mixed context every time it responds, your CRM question sitting next to your coding request sitting next to something from Tuesday.

Create separate topic threads instead. In Telegram, set up a group with just you and your bot, then create topic channels: general, CRM, knowledge base, coding, updates, and so on.

Each thread gets its own focused context. OpenClaw remembers better because it's only thinking about one thing at a time.

2. Use Voice Memos Instead of Typing

Telegram, WhatsApp, and Discord all have a built-in microphone button. Hold it down, talk, and your message goes straight to OpenClaw.

Useful when driving, walking, or just not wanting to type a long prompt. No extra setup required. It's already built in.

3. Match the Right Model to the Right Task

Running one model for everything wastes money and quality.

A general routing approach:

  • Main chat agent: use your strongest model. It plans and delegates, so quality matters most here.
  • Coding: use a model known for code generation.
  • Quick questions and answers: use a faster, cheaper model. No need to burn premium tokens on simple answers.
  • Search tasks: use a model with built-in web access.
  • Video or long-context work: use a model optimized for large inputs.

You can tell OpenClaw which model handles which task, and it remembers. Assign different models to different threads so each topic automatically gets the right one.
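the routing approach above is essentially a lookup table. a rough sketch with placeholder model names (OpenClaw's own config is what actually stores this; the mapping below is just to make the logic concrete):

```python
# Hypothetical model names; substitute whatever your setup actually uses.
MODEL_ROUTES = {
    "main":   "strongest-model",     # plans and delegates
    "coding": "code-model",
    "qa":     "fast-cheap-model",    # don't burn premium tokens here
    "search": "web-enabled-model",
    "video":  "long-context-model",
}

def pick_model(task_type: str) -> str:
    # Fall back to the main model for anything unrecognized.
    return MODEL_ROUTES.get(task_type, MODEL_ROUTES["main"])
```

assigning a route per thread then just means calling this with each thread's topic.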

4. Delegate Tasks to Sub-Agents

When the main agent is processing a big task, everything else gets blocked. The fix is telling it to hand work off to sub-agents that run in the background.

Good candidates for delegation:

  • Coding work
  • API calls and web searches
  • File processing and data tasks
  • Calendar and email operations
  • Anything that isn't a quick conversational reply

The main agent's job is to plan, delegate, and report back, not to execute everything itself.
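the delegation pattern is roughly a worker pool: the main agent submits slow work to the background and stays free to respond. a hedged Python sketch (OpenClaw's sub-agents are its own mechanism; this only illustrates the shape):

```python
from concurrent.futures import ThreadPoolExecutor

def run_delegated(tasks):
    """Submit each named task to a background worker, keep the main
    loop unblocked, and collect the results when workers finish."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = {name: pool.submit(fn) for name, fn in tasks.items()}
        # The main agent would keep chatting here; we just gather results.
        return {name: f.result() for name, f in futures.items()}
```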

5. Create Separate Prompts for Each Model

Every model responds differently to the same instructions. Some prefer positive framing. Others work better with explicit constraints. Formatting preferences vary too.

Maintain separate prompt files optimized per model. The major labs publish prompting guides for their models. Download those and have OpenClaw rewrite your instructions to match each model's preferences.

Set up a nightly job that keeps all versions in sync: same content, different formatting per model.

6. Run Scheduled Jobs Overnight

Log reviews, documentation updates, backups, inbox sorting, CRM syncs, security scans: anything you do regularly should be a scheduled job.

Run them during off-hours when you're not actively using OpenClaw. This prevents scheduled work from competing with live usage for token quota. Space jobs out so they don't all fire at once.

You wake up to finished work instead of a to-do list.
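spacing the jobs out is simple arithmetic. a small sketch that assigns staggered off-hours start times, assuming a 2 a.m. start and 20-minute gaps (both arbitrary):

```python
from datetime import datetime, time, timedelta

def stagger_jobs(jobs, start=time(2, 0), gap_minutes=20):
    """Spread nightly jobs out so they don't all fire at once,
    beginning at an off-hours start time."""
    base = datetime.combine(datetime.today(), start)
    return {job: (base + i * timedelta(minutes=gap_minutes)).time()
            for i, job in enumerate(jobs)}
```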

7. Log Everything Your Agent Does

Tell OpenClaw to keep a record of every action, error, and decision. Simple log files work fine and take almost no disk space.

Every morning, ask: "Check last night's logs, find any errors, and suggest fixes." OpenClaw reads its own history, diagnoses the problem, and tells you what to address. You don't need to understand the underlying code.

When something goes wrong, logs turn a mystery into a 30-second fix.
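the morning review works best when the agent gets a focused input rather than the whole log. a minimal sketch of that filtering step, assuming the logs use conventional ERROR/CRITICAL level markers:

```python
import re

# Conventional log-level markers; adjust to match your log format.
ERROR_RE = re.compile(r"\b(ERROR|CRITICAL)\b")

def summarize_errors(lines):
    """Keep only the error-level lines so the 'find any errors'
    prompt sees problems, not noise."""
    return [line.rstrip() for line in lines if ERROR_RE.search(line)]
```

pass it `open(log_path)` to run it over a real file.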

8. Harden Security With Multiple Layers

OpenClaw connects to your email, files, and apps. That access needs protection.

  • Inbound text filtering: Scan incoming content for prompt injection phrases before they reach your agent.
  • Model-powered review: Use a strong model as a second layer to catch anything the filter missed and quarantine suspicious content.
  • Outbound redaction: Before anything gets sent out via Slack, email, or anywhere else, automatically strip personal information, phone numbers, and secrets.
  • Minimum permissions: Give OpenClaw only the exact access it needs. Read email but not send. Read files but not delete.
  • Approval gates: Any destructive action requires your sign-off first.
  • Spending limits: Rate caps and budget limits prevent runaway loops from burning through your quota.

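the inbound-filtering and outbound-redaction layers can be sketched in a few lines. the patterns below are illustrative only; a real filter needs a much broader list:

```python
import re

# A few common prompt-injection tells; a real filter would be broader.
INJECTION_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in [r"ignore (all |your )?previous instructions",
              r"reveal your (system prompt|api key)"]
]

# Matches API-key-shaped tokens (e.g. "sk-...") and long digit runs.
SECRET_RE = re.compile(r"\b(sk-[A-Za-z0-9]{8,}|\+?\d{10,})\b")

def looks_injected(text: str) -> bool:
    """Inbound layer: flag content matching known injection phrasings."""
    return any(p.search(text) for p in INJECTION_PATTERNS)

def redact(text: str) -> str:
    """Outbound layer: strip key-shaped tokens and phone numbers."""
    return SECRET_RE.sub("[REDACTED]", text)
```

anything that trips `looks_injected` would go to the model-powered review layer for quarantine rather than straight to the agent.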
Run OpenClaw on a VPS like Hetzner, or use a managed host like StartClaw that puts it on a dedicated cloud instance, isolated from your personal machine. The managed route lets you connect Telegram, WhatsApp, or Discord from a single dashboard, pick your model, and have your agent running in under 60 seconds. No Docker, no server provisioning, no configuration files, and the instance stays current without you managing it.

One more: only use verified skills.

9. Document How Your System Works

The more context OpenClaw has about your setup, the less it guesses. Build and maintain:

  • A product doc explaining what features you've built and how they work.
  • Workflow docs describing your regular processes step by step.
  • A file map showing how everything is organized.
  • A learnings file where mistakes get logged so they don't repeat.
  • Prompting guides for each model you use.

Set up a daily job that reviews your docs against your actual system and fills in gaps automatically.

10. Use Your Subscription Instead of the API

Paying per API call adds up fast. A flat Claude or ChatGPT subscription usually costs far less at the same volume.

For Claude models: connect through the Agent SDK, within Anthropic's terms of service. For OpenAI models: connect through Codex OAuth.

If the setup isn't obvious, just ask OpenClaw to configure it.

11. Batch Your Notifications

Scheduled jobs running throughout the day will bury you in pings if you're not careful. A tiered system helps:

  • Low priority: collect and summarize in a digest every few hours.
  • Medium priority: summarize hourly.
  • Critical alerts (system down, security issues): bypass batching and notify immediately.

Stay informed without getting interrupted every five minutes.
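the tier routing is a small piece of logic. a sketch, with the priority names taken from the list above:

```python
from collections import defaultdict

# Priority tiers from the list above; "immediate" bypasses batching.
TIER_POLICY = {"low": "digest", "medium": "hourly", "critical": "immediate"}

def route_notifications(events):
    """Split (priority, message) events into immediate alerts and
    per-tier batches to be summarized later."""
    immediate, batched = [], defaultdict(list)
    for priority, message in events:
        if TIER_POLICY.get(priority) == "immediate":
            immediate.append(message)
        else:
            batched[priority].append(message)
    return immediate, dict(batched)
```

a scheduled job would then flush the "low" batch every few hours and the "medium" batch hourly.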

12. Use a Coding Tool to Build, a Chat App to Use

Telegram, WhatsApp, and Discord work well for day-to-day interaction. But when modifying code or building new features, switch to a proper development environment like Cursor or Claude Code.

Development tools are built for reading and editing code. Chat apps aren't. Build in the right tool, use in the right tool.

reddit.com
u/Veronildo — 7 days ago