u/Ecstatic-Use-1353

OpenClaw inside Ollama Docker: simpler networking, brutal RAM usage

I put OpenClaw inside the Ollama container to avoid host access/networking issues. It works, but RAM usage is brutal.

I tried this setup for one specific reason:

I did not want OpenClaw running in a separate container and needing access back to the host machine just to reach Ollama.

Most Docker setups put OpenClaw and Ollama in separate places:

- Ollama on the host and OpenClaw in Docker

- Ollama in one container and OpenClaw in another container

- OpenClaw reaching Ollama through `host.docker.internal`

- OpenClaw reaching Ollama through a Docker network hostname

- OpenClaw needing extra host/network configuration

That works, but it adds friction and can expand what OpenClaw needs to reach.
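For contrast, the usual split setup looks roughly like this (a sketch only; the network name and layout are illustrative, not from my setup):

```bash
# separate-container approach: OpenClaw reaches Ollama by Docker network hostname
docker network create llm-net
docker run -d --name ollama --network llm-net \
  -v ollama_docker:/root/.ollama ollama/ollama
# ...and a separate OpenClaw container on llm-net would then be pointed at
# http://ollama:11434 (or at host.docker.internal:11434 if Ollama runs on the host)
```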

In this setup, I do the opposite:

- start from the official `ollama/ollama` Docker image

- install OpenClaw inside that same container

- let OpenClaw talk to Ollama through `127.0.0.1:11434`

- expose only the ports I need from the container

The main benefit is simple:

OpenClaw does not need to call back into the host machine to talk to Ollama. The model endpoint is local inside the same container.

This is not a full security-hardening guide, but it keeps the setup more contained and avoids a lot of the usual Docker networking confusion around `host.docker.internal`, container hostnames, and Ollama bind addresses.

The tradeoff:

RAM usage can get heavy very quickly. OpenClaw prompts can be large, and small local models may struggle with context/tool use. So this setup is cleaner from a networking/container isolation perspective, but it is not magically lightweight.
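If you want to watch this happen, plain Docker tooling is enough once the container from step 1 below is running (you can also cap it with `--memory`, but a cap that is too small just makes the model load fail):

```bash
# live CPU and RAM usage of the combined Ollama + OpenClaw container
docker stats ollamaopenclaw
```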

## What this setup gives you

- Ollama running in Docker

- OpenClaw installed inside the same Ollama container

- GPU support enabled through Docker

- persistent Ollama model storage

- local Qwen models pulled through Ollama

- OpenClaw gateway running on port `18789`

- OpenClaw dashboard available through the gateway

- no `host.docker.internal` needed for OpenClaw to reach Ollama

Local services:

- Ollama API: `http://localhost:11434`

- OpenClaw gateway/dashboard: `http://localhost:18789`

## 1. Start the Ollama container from the host

Run this in PowerShell or your host terminal.

This creates the container, mounts persistent Ollama storage, enables GPU support, and opens ports `11434` and `18789`.

```bash
docker run -d \
  --name ollamaopenclaw \
  --gpus=all \
  -v ollama_docker:/root/.ollama \
  -p 11434:11434 \
  -p 18789:18789 \
  ollama/ollama
```

If you do not want the ports exposed on all host interfaces, bind them to localhost instead:

```bash
docker run -d \
  --name ollamaopenclaw \
  --gpus=all \
  -v ollama_docker:/root/.ollama \
  -p 127.0.0.1:11434:11434 \
  -p 127.0.0.1:18789:18789 \
  ollama/ollama
```
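To confirm GPU passthrough actually worked (this assumes the NVIDIA Container Toolkit is installed on the host; without it, `--gpus=all` will fail), you can check from the host:

```bash
# nvidia-smi is made available inside the container by the NVIDIA runtime when --gpus is used
docker exec -it ollamaopenclaw nvidia-smi
```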

## 2. Open a shell inside the container

```bash
docker exec -it ollamaopenclaw sh
```

## 3. Install OpenClaw inside the Ollama container

Run this inside the container.

```bash
apt-get update && apt-get install -y curl git bash ca-certificates

curl -fsSL --proto '=https' --tlsv1.2 https://openclaw.ai/install-cli.sh | bash

export PATH="$HOME/.openclaw/bin:$PATH"
```

Check that OpenClaw is available:

```bash
openclaw --version
```
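A small convenience: if you exec in with `bash` instead of `sh`, you can persist the PATH change so later shells do not need the export again (this assumes the image's root user reads `~/.bashrc`, which an interactive bash does):

```bash
# persist the OpenClaw PATH for future bash shells in this container
echo 'export PATH="$HOME/.openclaw/bin:$PATH"' >> /root/.bashrc
```

Later, `docker exec -it ollamaopenclaw bash` will pick it up automatically.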

## 4. Pull Ollama models

Run this inside the container.

Use whichever model fits your hardware. I tested with small Qwen models first because the goal was to verify the setup.

```bash
ollama pull qwen3.5:0.8b
ollama pull qwen3.5:2b
ollama pull qwen3.5:4b
```

Check that Ollama sees the models:

```bash
ollama list
```
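`ollama list` only proves the models are on disk. To confirm one actually loads and generates (and to get a first feel for the RAM hit), a plain API call from inside the container works:

```bash
# non-streaming test generation against the smallest model
curl http://127.0.0.1:11434/api/generate -d '{
  "model": "qwen3.5:0.8b",
  "prompt": "Say hello in one short sentence.",
  "stream": false
}'
```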

## 5. Configure OpenClaw to use the local gateway

Run this inside the container.

```bash
export OLLAMA_API_KEY="ollama-local"

openclaw config set gateway.bind lan
openclaw config set gateway.port 18789
openclaw config set gateway.controlUi.allowedOrigins '["http://localhost:18789","http://127.0.0.1:18789"]' --strict-json
```

## 6. Start the OpenClaw gateway

Run this inside the same container shell.

Important: this terminal stays open. Do not close it while using the gateway.

```bash
openclaw gateway run --bind lan --port 18789 --allow-unconfigured
```
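If you would rather not keep a terminal open for this, the same command can be launched detached from the host with a standard `docker exec -d` (a variation on the flow above, not how I ran it originally):

```bash
# run the gateway in the background inside the container, with the same env the guide sets up
docker exec -d ollamaopenclaw \
  sh -c 'export PATH="$HOME/.openclaw/bin:$PATH"; export OLLAMA_API_KEY="ollama-local"; openclaw gateway run --bind lan --port 18789 --allow-unconfigured'
```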

## 7. Open a second shell inside the same container

Open a second terminal/PowerShell window on the host and run:

```bash
docker exec -it ollamaopenclaw sh
```

Then set the OpenClaw path and the API key variable again:

```bash
export PATH="$HOME/.openclaw/bin:$PATH"
export OLLAMA_API_KEY="ollama-local"
```

## 8. Run OpenClaw onboarding

Because OpenClaw and Ollama are inside the same container, the Ollama base URL is `http://127.0.0.1:11434`.

Do not use `http://host.docker.internal:11434`, and do not use the OpenAI-compatible `/v1` endpoint (`http://127.0.0.1:11434/v1`) unless you specifically know you need it.

Use the model you want.

Small model:

```bash
openclaw onboard --non-interactive \
  --auth-choice ollama \
  --custom-base-url "http://127.0.0.1:11434" \
  --custom-model-id "qwen3.5:0.8b" \
  --accept-risk
```

Medium model:

```bash
openclaw onboard --non-interactive \
  --auth-choice ollama \
  --custom-base-url "http://127.0.0.1:11434" \
  --custom-model-id "qwen3.5:2b" \
  --accept-risk
```

Larger model:

```bash
openclaw onboard --non-interactive \
  --auth-choice ollama \
  --custom-base-url "http://127.0.0.1:11434" \
  --custom-model-id "qwen3.5:4b" \
  --accept-risk
```

## 9. Open the dashboard

Run:

```bash
openclaw dashboard
```

Open the URL it prints.

Expected local access: `http://localhost:18789`

## Useful checks

Check running containers:

```bash
docker ps
```

Check container logs:

```bash
docker logs ollamaopenclaw
```

Enter the container again:

```bash
docker exec -it ollamaopenclaw sh
```

Check Ollama models:

```bash
ollama list
```

Check OpenClaw version:

```bash
openclaw --version
```

Check that Ollama responds from inside the container:

```bash
curl http://127.0.0.1:11434/api/tags
```
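Check which models are currently loaded and how much memory they are taking (this is where the RAM pressure shows up first):

```bash
ollama ps
```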

## Restart

If the container is stopped:

```bash
docker start ollamaopenclaw
```

Then enter it again:

```bash
docker exec -it ollamaopenclaw sh
```

Re-export the path:

```bash
export PATH="$HOME/.openclaw/bin:$PATH"
```

Restart the gateway:

```bash
openclaw gateway run --bind lan --port 18789 --allow-unconfigured
```
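If you restart often, the whole sequence can be collapsed into a small host-side script (a convenience sketch that only reuses the commands above, with the gateway launched detached as in the earlier `docker exec -d` variant):

```bash
#!/usr/bin/env sh
# restart the container and relaunch the OpenClaw gateway in one step
docker start ollamaopenclaw
docker exec -d ollamaopenclaw \
  sh -c 'export PATH="$HOME/.openclaw/bin:$PATH"; export OLLAMA_API_KEY="ollama-local"; openclaw gateway run --bind lan --port 18789 --allow-unconfigured'
```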

## Stop and remove

Stop the container:

```bash
docker stop ollamaopenclaw
```

Remove the container:

```bash
docker rm ollamaopenclaw
```

The Ollama models remain in the Docker volume `ollama_docker`.

If you also want to remove the model volume:

```bash
docker volume rm ollama_docker
```
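If you are unsure whether the volume is worth keeping, you can check how much space the pulled models occupy before deleting it:

```bash
# lists volumes with their on-disk size; look for ollama_docker
docker system df -v
```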

## Notes and tradeoffs

This setup is mainly about containment and simpler networking.

It avoids the common situation where OpenClaw has to reach back into the host or across containers just to talk to Ollama.

Instead:

OpenClaw → 127.0.0.1:11434 → Ollama

all inside the same container.

But there are tradeoffs:

- RAM usage can be high.

- OpenClaw prompts can be large.

- Small local models may struggle with tool use.

- Larger models need serious RAM/VRAM.

- The gateway terminal must stay running.

This is not a production hardening guide.

Do not expose 18789 publicly without authentication, firewalling, or a secure tunnel/VPN.
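If you do need the dashboard from another machine, an SSH tunnel to the Docker host keeps port 18789 off the network entirely (this assumes the localhost-only port mapping from step 1; the user and hostname below are placeholders):

```bash
# run on the remote machine, then browse to http://localhost:18789 there
ssh -N -L 18789:127.0.0.1:18789 youruser@docker-host
```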

If you want a cleaner long-term deployment, a proper Docker Compose setup with separate services may still be better. But for local testing, this one-container approach avoids a lot of host/networking confusion.

reddit.com
u/Ecstatic-Use-1353 — 3 days ago


Hermes Agent Docker setup with local Dashboard + Web UI

This is a practical Docker guide for running Hermes Agent locally with both the management dashboard and the Web UI/chat.

The goal is to get a complete local Hermes setup running without installing everything directly on your machine.

With this setup you get:

- Hermes Agent running in Docker

- Hermes Gateway API

- local management dashboard

- local Web UI/chat

- persistent Docker volumes

- localhost-only access for safer local testing

After the setup, you will use:

- Web UI/chat: http://localhost:8787

- Dashboard: http://localhost:9119

- Gateway: http://localhost:8642

This is useful if you want to test Hermes workflows, agents, tools, or integrations from a browser while keeping the runtime isolated inside Docker.

## 1. Create Docker network and volumes

```bash
docker network create hermes-net

docker volume create hermes_data
docker volume create hermes-agent-src
docker volume create hermes_workspace
```

## 2. Pull images

```bash
docker pull nousresearch/hermes-agent:latest
docker pull ghcr.io/nesquena/hermes-webui:latest
```

## 3. Run the initial Hermes setup

```bash
docker run -it --rm \
  --network hermes-net \
  -v hermes_data:/opt/data \
  nousresearch/hermes-agent:latest \
  setup
```

## 4. Start Hermes Gateway

```bash
docker run -d \
  --name hermes \
  --restart unless-stopped \
  --network hermes-net \
  -v hermes_data:/opt/data \
  -v hermes-agent-src:/opt/hermes \
  -p 127.0.0.1:8642:8642 \
  nousresearch/hermes-agent:latest \
  gateway run
```

## 5. Start the management dashboard

```bash
docker run -d \
  --name hermes-dashboard \
  --restart unless-stopped \
  --network hermes-net \
  -v hermes_data:/opt/data \
  -p 127.0.0.1:9119:9119 \
  -e GATEWAY_HEALTH_URL=http://hermes:8642 \
  nousresearch/hermes-agent:latest \
  dashboard --host 0.0.0.0 --insecure
```

## 6. Start the real Web UI/chat

```bash
docker run -d \
  --name hermes-webui \
  --restart unless-stopped \
  --network hermes-net \
  -v hermes_data:/home/hermes-webui/.hermes \
  -v hermes-agent-src:/home/hermes-webui/.hermes/hermes-agent \
  -v hermes_workspace:/workspace \
  -e HERMES_WEBUI_STATE_DIR=/home/hermes-webui/.hermes/webui \
  -e WANTED_UID=10000 \
  -e WANTED_GID=10000 \
  -p 127.0.0.1:8787:8787 \
  ghcr.io/nesquena/hermes-webui:latest
```

## 7. Open Hermes

Web UI/chat:

http://localhost:8787

Management dashboard:

http://localhost:9119

Gateway:

http://localhost:8642
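Before opening these URLs, it is worth confirming all three containers came up on the shared network:

```bash
# show containers attached to hermes-net with their status and port mappings
docker ps --filter network=hermes-net --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'
```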

## Useful checks

Check running containers:

```bash
docker ps
```

Check logs:

```bash
docker logs hermes
docker logs hermes-dashboard
docker logs hermes-webui
```
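You can also check that the gateway answers over HTTP from the host (the exact response depends on Hermes, but the GATEWAY_HEALTH_URL setting above suggests a plain HTTP endpoint on 8642):

```bash
curl -i http://localhost:8642
```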

Restart everything:

```bash
docker restart hermes hermes-dashboard hermes-webui
```

Stop everything:

```bash
docker stop hermes hermes-dashboard hermes-webui
```

Remove containers if you want to recreate them:

```bash
docker rm hermes hermes-dashboard hermes-webui
```

## Notes

Ports are bound to `127.0.0.1`, so this is intended for local access only.

The dashboard uses `--insecure`, so do not expose it publicly.

Data is persisted in Docker volumes:

- `hermes_data`

- `hermes-agent-src`

- `hermes_workspace`

If you want remote access, put it behind a VPN, tunnel, or reverse proxy with proper authentication instead of exposing these ports directly.

Hope this helps anyone trying to get Hermes running locally with both dashboard and Web UI.

reddit.com
u/Ecstatic-Use-1353 — 5 days ago

I made an unofficial Agency Agents connector for Hermes Agent

I made a small unofficial connector to test Agency Agents inside Hermes Agent.

It is intentionally simple:

- one hermes-adapter.zip

- two plain-text prompts

- designed for normal Hermes chat/GUI usage

- no CLI workflow required for the end user

Repo:

https://github.com/reventadirecta/hermes-agent-connector-agency-agents

This is not an official Hermes project and not a full fork of Agency Agents. It is just an integration package built around the MIT-licensed Agency Agents project.

The goal is to reduce setup friction and let Hermes use Agency Agents as a specialist library for multi-agent workflows.

I’m looking for technical feedback from Hermes users:

- Does this integration approach make sense?

- Is ZIP + two prompts acceptable, or should this become a proper Hermes extension/plugin?

- What would need to change for it to fit Hermes better?

github.com
u/Ecstatic-Use-1353 — 5 days ago