u/somratpro

▲ 41 r/nousresearch+1 crossposts

Hi all,

If you’ve been looking for a way to run the Hermes Agent by Nous Research without paying for hosting or keeping your local machine on 24/7, check this out.

I created a repository that lets you deploy the full Hermes core (with its self-improving learning loop) directly to a Hugging Face Space using the Docker SDK. It’s a great way to have a "digital employee" running in the background for free.

Repo: https://github.com/somratpro/huggingmes
Setup Guide: https://www.youtube.com/watch?v=kagB1ID-NtE
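For context, a Docker-SDK Space is configured through YAML front matter at the top of the Space's README.md. A minimal sketch (the title and port here are illustrative; check the repo for the actual values):

```yaml
---
title: Hermes Agent   # illustrative name
sdk: docker           # tells Hugging Face to build from the repo's Dockerfile
app_port: 7860        # the port your app listens on inside the container
---
```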

u/somratpro — 10 days ago
▲ 3 r/u_somratpro+2 crossposts

Hey everyone,

I’ve been a developer for 8 years and noticed how frustrating it is to host personal automation agents. Even with a cheap VPS, you often still need to deal with domain setups and technical hurdles that aren't beginner-friendly.

I found that Hugging Face offers a generous free tier (2 vCPUs and 16 GB RAM), so I built a series of open-source tools to help you make the most of it.

These projects are completely free and open-source, so there's no need for a monthly VPS subscription or a custom domain.

u/somratpro — 14 days ago
▲ 33 r/huggingface+1 crossposts

Hi n8n community!

I’ve seen a lot of people asking about low-cost or free hosting options lately. I just finished a repository that makes it super simple to run n8n on Hugging Face Spaces via Docker. I'm calling it Hugging8n.

The Setup:

  1. Duplicate the Space.
  2. Set your HF_TOKEN and CLOUDFLARE_WORKERS_TOKEN.
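The two steps above can also be scripted. A hedged sketch using the `huggingface_hub` Python client (`duplicate_space` and `add_space_secret` are real client methods; the Space ID you pass in should be the actual template):

```python
def required_secrets(hf_token: str, cf_token: str) -> dict:
    """The two secrets the template expects (names from the setup steps)."""
    return {"HF_TOKEN": hf_token, "CLOUDFLARE_WORKERS_TOKEN": cf_token}


def deploy(source_space: str, secrets: dict) -> str:
    """Duplicate the template Space and attach the secrets; returns the new repo ID."""
    from huggingface_hub import HfApi  # third-party: pip install huggingface_hub

    api = HfApi()
    # Step 1: duplicate the template Space into your own account.
    repo = api.duplicate_space(from_id=source_space, exist_ok=True)
    # Step 2: add each secret to the new Space.
    for key, value in secrets.items():
        api.add_space_secret(repo_id=repo.repo_id, key=key, value=value)
    return repo.repo_id
```

Recent versions of `duplicate_space` also accept a `secrets=` argument directly; the two-step version above just mirrors the manual flow.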

Code/Repo: https://github.com/somratpro/Hugging8N

Video Walkthrough: https://youtu.be/cfMruo5dlF8

I’ve been running my personal workflows this way for a while, and it's surprisingly stable. If you’re looking for a sandbox or a way to host small-scale automations without a VPS, give it a shot!

u/somratpro — 18 days ago
▲ 28 r/huggingface+1 crossposts

Hey folks,

I just released HuggingClaw, a Docker template that lets you run a persistent AI assistant on Hugging Face Spaces with zero infrastructure costs.

The Problem

* Cloud-first and expensive: Most AI assistants lock you into someone else's API, with latency and rate limits.

* Management Overhead: Running your own usually requires managing servers, crons, and backups.

What HuggingClaw Does

* Model Agnostic: Works with any LLM (Claude, GPT-4, Gemini, etc.).

* Always-On Access: Connects via Telegram.

* Built-in Backups: Auto-syncs your workspace to HF Datasets.

* Reliability: Built-in keep-alive so HF Spaces doesn't sleep on you.

* Self-Contained: Zero external dependencies—runs entirely on HF infrastructure.
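The keep-alive bullet is easy to picture in code. A minimal stdlib sketch (not HuggingClaw's actual implementation) that pings the Space's own public URL on a timer so the free tier doesn't idle out:

```python
import threading
import urllib.request


def start_keep_alive(url: str, interval_s: float = 600.0) -> threading.Event:
    """Ping `url` every `interval_s` seconds from a daemon thread.

    Returns an Event; call .set() on it to stop the pinger.
    """
    stop = threading.Event()

    def loop() -> None:
        # Event.wait doubles as the sleep and the stop check.
        while not stop.wait(interval_s):
            try:
                urllib.request.urlopen(url, timeout=10).close()
            except OSError:
                pass  # transient failure; retry on the next tick

    threading.Thread(target=loop, daemon=True).start()
    return stop
```

You'd call this once at startup with the Space's public URL and forget about it.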

Quick Setup

* Duplicate the Space on HF.

* Add 2 secrets (API key + Telegram token).

* Done. It runs forever.

Features

* Automatic workspace backup + restore

* Telegram integration with multi-user support

* Health monitoring (/health endpoint)

* Graceful shutdown (saves workspace before exit)

* Version pinning for stability
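For a sense of what the health-monitoring bullet involves, here's a stdlib sketch of a `/health` endpoint (the project's real handler may differ; 7860 is the default Spaces app port):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class HealthHandler(BaseHTTPRequestHandler):
    """Answers GET /health with a JSON status; everything else is a 404."""

    def do_GET(self) -> None:
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args) -> None:
        pass  # keep the Space logs quiet


def make_server(port: int = 7860) -> HTTPServer:
    """Bind the health server; run .serve_forever() in a background thread."""
    return HTTPServer(("0.0.0.0", port), HealthHandler)
```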

GitHub: https://github.com/somratpro/HuggingClaw

Docs are in the README with a full config reference. This is built on OpenClaw, so you get all the agent/MCP tooling out of the box.

Feedback welcome—this is still v1, so let me know what breaks or what features would help!

u/somratpro — 18 days ago