
r/PaperClip_AI
I kept seeing people here run into the same things: Claude/API rate limits, token costs, and wanting to use OpenRouter, Qwen, GLM, local proxies, etc. without fighting the whole setup.
So I made a small adapter:
https://github.com/codeg-dev/paperclip-adapter-custom-llm-local
Install in Paperclip:
Settings → Adapters → Install Adapter
paperclip-adapter-custom-llm-local
Then use:
{
  "adapterType": "custom_llm_local",
  "adapterConfig": {
    "model": "your-model-id",
    "baseUrl": "http://127.0.0.1:8317/v1",
    "transport": "openai_chat_completions",
    "apiKeyEnv": "LOCAL_LLM_API_KEY"
  }
}
It supports both OpenAI Chat Completions-compatible and Anthropic Messages-compatible endpoints.
The main point is simple: if you already have a local/proxy LLM endpoint, Paperclip agents can call it directly without going through a full CLI adapter.
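If you want to sanity-check that your local endpoint really speaks the OpenAI Chat Completions shape before pointing Paperclip at it, here's a rough sketch: a throwaway mock server plus the kind of request any OpenAI-style client would send. The port is picked automatically and the model id is a placeholder; none of this is Paperclip-specific code, just the wire format.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# Minimal stand-in for an OpenAI Chat Completions-compatible endpoint.
# It echoes the last user message back — just enough to exercise a client.
class MockChatHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        reply = {
            "object": "chat.completion",
            "model": body.get("model", "unknown"),
            "choices": [{
                "index": 0,
                "message": {
                    "role": "assistant",
                    "content": "echo: " + body["messages"][-1]["content"],
                },
                "finish_reason": "stop",
            }],
        }
        data = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):  # keep output quiet
        pass

# Bind to port 0 so the OS assigns a free port.
server = HTTPServer(("127.0.0.1", 0), MockChatHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the same request shape sent to baseUrl + /chat/completions.
payload = {
    "model": "your-model-id",
    "messages": [{"role": "user", "content": "ping"}],
}
req = Request(
    f"http://127.0.0.1:{port}/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
resp = json.loads(urlopen(req).read())
print(resp["choices"][0]["message"]["content"])  # echo: ping
server.shutdown()
```

If that round trip works against your real endpoint (swap the mock URL for your baseUrl), the adapter config above should work unchanged.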
Would love to hear what local/proxy setups people are running with Paperclip.
u/EffectiveWay8897 — 19 days ago