Looking for testers for Cagebox — managed hosting for self-hosted AI agents (Hermes / OpenClaw), free during beta
Hey folks,
I'm building Cagebox - a managed service that runs your own Hermes or OpenClaw agent 24/7, without you renting a VPS or doing the Linux/webhook/TLS dance yourself.
What it is, in one paragraph: you go through a short web onboarding, pick your agent (Hermes or OpenClaw), plug in your own LLM provider key, and a minute later you have a persistent agent that keeps its memory, reconnects to your messengers, and stays reachable. Each agent runs in its own Firecracker microVM with a private kernel - the same isolation primitive AWS Lambda uses - so your agent is isolated from everyone else's at the hardware-virtualization level.
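For the curious, a Firecracker microVM is defined by a small JSON config (kernel, root drive, CPU/memory). Here's an illustrative minimal config - paths and sizing are placeholders, not Cagebox's actual setup:

```json
{
  "boot-source": {
    "kernel_image_path": "vmlinux.bin",
    "boot_args": "console=ttyS0 reboot=k panic=1"
  },
  "drives": [
    {
      "drive_id": "rootfs",
      "path_on_host": "agent-rootfs.ext4",
      "is_root_device": true,
      "is_read_only": false
    }
  ],
  "machine-config": {
    "vcpu_count": 1,
    "mem_size_mib": 512
  }
}
```

You can boot something like this yourself with `firecracker --no-api --config-file vmconfig.json` on any KVM-capable host - that's roughly the primitive each agent sits inside.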
What's in the dashboard today:
- Web terminal into the VM (xterm.js / ttyd)
- File explorer for the agent's /data (configs, memory, artifacts)
- Per-agent settings + encrypted secrets for messenger tokens / API keys
- Live status, restart, and snapshots
What I'm looking for: first real users who'd actually run an agent on this and tell me where it hurts. Completely free during beta - you only pay your own LLM provider (OpenAI / Anthropic / OpenRouter / Opencode).
Especially curious to hear from you if:
- you've already tried to self-host Hermes or OpenClaw and bounced off the setup,
- you want a personal Telegram/Discord agent that doesn't go down when your laptop sleeps,
- you're running multi-agent experiments and want a few isolated instances without spinning up VMs by hand.
Drop a comment or DM me with what you'd want to run on it - I'll send onboarding invites next week. Honest critique is very welcome; "this is a bad idea because X" is more useful to me than polite upvotes.