r/n8n_ai_agents

Which mistakes can really kill first time n8n builders?

I'm kinda new to n8n, and before getting into this journey I was spending most of my time on YouTube trying to figure out what's the best thing to do and what to learn first. But when it comes to mistakes, it was always a mix of things, so I just wanna get things clear.

I would appreciate your help.

reddit.com
u/AlternativePea9456 — 13 hours ago
▲ 6 r/n8n_ai_agents+4 crossposts

Been digging through the Hermes Agent skills hub this week trying to understand where the ecosystem is and where it's going.

Hermes already has a solid execution layer — 70+ skills across a bunch of categories. What feels missing is a strong knowledge layer. Memory is handled natively, but structured external knowledge (research bases, domain-specific info, niche datasets) is still pretty thin across community skills. One standout I found is llm-wiki-compiler by AtomicMem. It’s a clean example of what a proper knowledge skill could look like:

  • citations per paragraph
  • built-in quality checks
  • semantic search
  • Obsidian integration
  • multi-provider support
  • agents can read/write via MCP

What’s interesting is it’s designed to be forked and adapted, not rebuilt from scratch. Feels like a lot of domains are still open here:

  • legal
  • medical
  • finance
  • niche communities
  • client-specific systems
  • dev docs

Given how the skills hub is growing, early knowledge-layer builds in these areas seem underexplored.

reddit.com
u/Final_Elevator_1128 — 8 hours ago
▲ 15 r/n8n_ai_agents+1 crossposts

Just wrapped my first international client project. Here's what I built and what I learned.

My client runs a high-end car rental service with 40+ active rental customers.

His entire operation was running on manual reminders, spreadsheets, and WhatsApp messages he had to send himself. Every. Single. Week.

The problems he was dealing with:

❌ Customers not paying weekly rent on time

❌ Manually sending reminders to 40+ people every week

❌ Checking payments, then manually updating spreadsheets

❌ No payment history stored anywhere

❌ All data management done by hand

It was eating hours of his time. Every week. And mistakes were inevitable.

So I built him an AI-powered WhatsApp automation system that acts like a full-time employee.

Here's what the system does:

  1. Automated Weekly Reminders

The bot sends payment reminders to every customer automatically. No manual work required.

When a customer pays and sends a screenshot, the bot:

Detects the payment

Updates the payment sheet

Notifies my client instantly

My client just replies "Confirmed" and the system logs it as verified.

  2. Smart Early Payment Logic

If a customer pays twice in the same week (early payment for next week), the system recognizes it and skips sending them a reminder the following week.

No duplicate messages. No confusion.
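That skip logic is roughly the following — a toy sketch, where the date handling and names are invented for illustration, not pulled from the actual workflow:

```python
from datetime import date, timedelta

def week_key(d: date) -> str:
    """ISO year-week identifier, e.g. '2024-W07'."""
    iso = d.isocalendar()
    return f"{iso[0]}-W{iso[1]:02d}"

def should_remind(payments: list[date], today: date) -> bool:
    """Skip the reminder if the customer already paid twice last week
    (the second payment covers the current week)."""
    last_week = week_key(today - timedelta(days=7))
    paid_last_week = sum(1 for p in payments if week_key(p) == last_week)
    return paid_last_week < 2
```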

  3. Natural Language Database Control

My client can now talk to the bot in plain English:

"Add a new customer."

"Update John's payment status."

"Show me this week's pending payments."

The bot handles it all: adds, updates, deletes, and retrieves data from the database on command.

  4. Two-Way Customer Communication

He can receive and reply to customer messages directly through the bot, with no third-party WhatsApp tools needed.

Everything runs through one system. Clean. Simple. Effective.

The result?

✅ 40+ weekly reminders sent automatically

✅ Payment tracking happens in real-time

✅ Full payment history stored and accessible

✅ Hours of manual work eliminated every single week

The whole system runs 24/7 without any manual intervention.

What I learned building this:

This wasn't just about connecting a few tools and calling it done.

I ran into bugs I didn't expect. Edge cases that broke the logic. Moments where I had to dig deep into how WhatsApp Business API actually works.

But I figured it out. I shipped it. And it works.

This project taught me more in one week than months of tutorials ever could.

Now I'm ready for the next one.

If you're running a business with repetitive manual processes eating your time, there's probably a way to automate it.

u/Character-Ad-8784 — 12 hours ago

Looking for a sales partner - I build n8n automations, you bring clients, we split revenue

Hey,

I'm an automation engineer. I've built production n8n systems for real clients including AI-powered recruitment pipelines, sales intelligence systems, cold call coaching automation, invoice reconciliation that matches bank statements to invoices automatically, YouTube content factories, appointment scheduling systems, AI calling agents, lead generation pipelines, email campaign systems with automated follow-ups, social media posting automation, and UGC content generation systems.

The problem is I'm a builder, not a salesperson. I can automate almost any business process but I genuinely don't enjoy cold outreach and closing deals.

Here's what I'm proposing. A simple partnership. You handle sales, I handle delivery. We split revenue on every project you close. I'll also give you target markets, lead sources, and help you understand exactly what problems we're solving so you can pitch confidently without needing to know how any of it works technically.

If you're someone who's good at sales but doesn't have a strong technical product behind you, this could work well for both of us.

DM me if you want to have a straightforward conversation about whether this makes sense.

reddit.com
u/StatisticianLimp510 — 17 hours ago

Built an AI Workflow that Turns Long Videos into Viral Shorts Automatically (n8n + Whisper + Gemini)

I’ve been testing a pipeline to repurpose long-form videos into short-form content, and finally got something working end-to-end inside n8n.

The idea:
Upload 1 long video → get multiple ready-to-post shorts.

How the workflow works

1. Upload video

  • Simple form trigger
  • Accepts any long video

2. Audio extraction + transcription

  • FFmpeg extracts audio
  • Whisper generates full transcript with word-level timestamps

3. AI selects viral moments

  • Gemini analyzes:
    • full transcript
    • word timestamps
  • Picks 3–15 high-retention clips (15–60s each)
  • Returns exact start/end timestamps (very important for accuracy)

4. Clip generation

  • FFmpeg auto-cuts clips using timestamps
  • Crops to vertical (9:16)
  • Adds proper encoding for social platforms

5. Auto scheduling

  • Shorts are automatically scheduled to:
    • TikTok
    • Instagram
    • YouTube Shorts
  • Posted daily (one per day)
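The cut-and-crop in step 4 could look something like this command builder — a sketch, not the author's actual node config; the flags assume H.264/AAC output for social platforms:

```python
def ffmpeg_clip_cmd(src: str, start: float, end: float, out: str) -> list[str]:
    """Build an FFmpeg command that cuts [start, end) out of src and
    center-crops it to a vertical 9:16 frame."""
    return [
        "ffmpeg", "-y",
        "-ss", f"{start:.3f}",       # seek before the input (fast)
        "-t", f"{end - start:.3f}",  # clip duration
        "-i", src,
        # crop width to 9/16 of the height, centered horizontally
        "-vf", "crop=ih*9/16:ih:(iw-ih*9/16)/2:0",
        "-c:v", "libx264", "-preset", "fast",
        "-c:a", "aac", "-movflags", "+faststart",
        out,
    ]
```

Run it per clip with `subprocess.run(ffmpeg_clip_cmd(...), check=True)`.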

What was harder than expected

  • Getting accurate timestamps (word-level matters a LOT)
  • Handling async jobs (FFmpeg processing loops)
  • Making sure clips don’t cut mid-sentence
  • Forcing AI to return clean JSON (this took time)
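For the mid-sentence problem, one approach is snapping the AI's timestamps outward to Whisper's word boundaries — a hypothetical sketch, assuming words arrive as (text, start, end) tuples:

```python
def snap_to_words(start: float, end: float, words) -> tuple[float, float]:
    """words: list of (text, start_s, end_s) from Whisper.
    Move the clip start back to a word start, and the clip end forward
    to a nearby sentence-final word ('.', '!', '?') when one exists."""
    starts = [s for _, s, _ in words if s <= start]
    new_start = max(starts) if starts else start
    # prefer ending on a sentence boundary within 3 seconds of the cut
    for text, _, w_end in words:
        if end <= w_end <= end + 3.0 and text.rstrip().endswith((".", "!", "?")):
            return new_start, w_end
    # otherwise just end at the next word boundary
    ends = [e for _, _, e in words if e >= end]
    return new_start, (min(ends) if ends else end)
```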

What I like about this setup

  • Fully automated once triggered
  • No manual editing
  • Reuses long content efficiently
  • Scales content output easily

Workflow (for anyone curious)

reddit.com
u/Ordinary_1111 — 22 hours ago

Stopped n8n from holding real API credentials entirely — proxy boundary pattern (Lark + OpenAI, finance workflow)

Been running invoice OCR + Lark approval automation on n8n for a few months. The workflow itself works fine, but credential hygiene was a mess. Finally solved it structurally — sharing the pattern.

The core problem: Export workflow JSON → real sk-proj-... and Lark tenant_access_token travel with it. Hand it to a teammate, put it in version control, or give it to an AI coding tool — credentials leak. Even without hardcoding, the "fetch token first" node pattern means re-running any branch in isolation breaks the chain.

How the pattern works

1. Register each API as a named service in a proxy layer

  • Real credentials loaded from local env at registration time
  • Proxy holds them — n8n never sees them

2. Replace every direct API call in n8n with a proxy URL

  • https://<proxy>/s/api-lark-bot/open-apis/... instead of calling Lark directly
  • https://<proxy>/s/openai/v1/chat/completions instead of OpenAI directly

3. Store the proxy token as an n8n Header Auth Credential

  • Workflow JSON only carries credential id, never the real token
  • One rotating token for all downstream APIs

4. Delete the get-lark-token prefetch node

  • Token refresh happens inside the proxy layer
  • n8n doesn't know or care about Lark's 2h TTL

5. Grep before every commit

  • Two-layer check: redaction script + grep scan for known prefixes (sk-, cli_, Bearer )
  • Catches the case where someone pastes a token directly into a node header
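The path rewrite in steps 1–2 boils down to a small resolver inside the proxy — a minimal sketch, where the service registry and env var names are invented for illustration:

```python
import os

# hypothetical registry: service name -> (upstream base URL, env var with the real key)
SERVICES = {
    "openai": ("https://api.openai.com", "OPENAI_API_KEY"),
    "api-lark-bot": ("https://open.larksuite.com", "LARK_TENANT_TOKEN"),
}

def resolve(proxy_path: str) -> tuple[str, dict]:
    """Map /s/<service>/<rest> to the real upstream URL and auth header.
    Real credentials live in the proxy's local env — n8n never sees them."""
    _, _, service, rest = proxy_path.split("/", 3)
    base, env_var = SERVICES[service]
    token = os.environ[env_var]
    return f"{base}/{rest}", {"Authorization": f"Bearer {token}"}
```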

What was harder than expected

  • The proxy token itself can still leak into JSON if you forget to use n8n's Credential store and paste it directly into a header — same problem, one layer up
  • Lark's tenant_access_token refresh timing: if the proxy caches the token and it expires mid-workflow, you get a 99991663 error. An IF node that retries once is enough
  • PDF → GPT-4o Vision requires image conversion first — pdf-to-png isn't in n8n's default Code node sandbox, need NODE_FUNCTION_ALLOW_EXTERNAL=pdf-to-png
  • Getting the proxy URL format right for path-auth APIs (Lark's endpoint structure is not RESTful in the obvious way)
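The retry-once fix for the 99991663 expiry (what the IF node does inside n8n) is essentially this, sketched outside n8n with hypothetical helper names:

```python
def call_with_refresh(call, refresh_token):
    """Retry a Lark call once after forcing a refresh when the cached
    tenant_access_token expired mid-workflow (Lark error code 99991663)."""
    resp = call()
    if resp.get("code") == 99991663:  # token expired upstream
        refresh_token()               # proxy re-fetches the token
        resp = call()
    return resp
```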

What I like about this setup

  • AI tools can read and edit workflow JSON safely — no credentials to leak
  • Month-end audit is one place: proxy call log, not Lark + OpenAI + n8n stitched by hand
  • Token rotation happens once at the proxy, not across every workflow that uses the API
  • ~3 hours/week recovered on the finance side (teammate no longer forwards me invoices to type in manually)

Workflow (for anyone curious)

Four key nodes: Form Trigger (invoice upload) → GPT-4o OCR via proxy → human review step → Create Lark Approval via proxy.

Happy to share specifics on any node if useful.

reddit.com
u/AzureSu — 19 hours ago
▲ 9 r/n8n_ai_agents+1 crossposts

Need honest advice on the best career path in AI automation — looking for guidance and work opportunities

I’m being very direct here because I’m genuinely looking for honest advice and practical guidance.

I’m currently in a situation where I need to find work as soon as possible, but I also don’t want to take a random path that takes me away from the career I actually want.

My background is in AI automation and workflow building. I’ve worked on automation projects using n8n, including a blog automation system for a freelance client and a lead generation automation pipeline in my current company. I understand how to build workflows, connect tools, automate processes, and solve business problems through automation.

At the same time, I want to be honest about where I stand: I do not come from a strong coding background. I have basic Python knowledge, but I don’t know coding in depth, and I’ve never really enjoyed coding as a main career path. My interest is more in AI automation, workflow systems, business process automation, and using tools to create real business value.

That creates a dilemma for me.

I’m trying to figure out the smartest path from here:

  • Should I continue focusing on AI automation roles and keep building my stack?
  • Should I join an agency or small team to get more real-world exposure?
  • Should I spend serious time learning coding deeply even though I’m not naturally drawn to it?
  • Or should I choose a role that keeps me close to automation, business systems, and revenue impact, even if it’s not the exact “AI automation engineer” title?

What I’m really looking for is:

  • Honest mentorship from people who understand this field.
  • Advice on what kind of job would actually be best for my long-term growth.
  • Guidance on whether I should prioritize learning more coding, or focus more on business-facing automation roles.
  • Any work opportunities, freelance projects, or referrals where my current automation skills could be useful.
reddit.com
u/No-Lion-9705 — 1 day ago

My experience self-hosting n8n: more debugging than building

I got tired of setting up n8n on a VPS every time…

Not sure if it’s just me, but it always felt way more painful than it should be.

Same cycle every single time:

• Spin up a server

• Install Docker

• Configure SSL

• Set up domain

• Fix random issues

• Pray it works
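FWIW, most of that checklist can collapse into one compose file — a minimal sketch, assuming Docker is already on the box; image tags and the domain are placeholders, and Caddy obtains the SSL cert automatically:

```yaml
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    restart: unless-stopped
    environment:
      - N8N_HOST=n8n.example.com
      - WEBHOOK_URL=https://n8n.example.com/
    volumes:
      - n8n_data:/home/node/.n8n
  caddy:                     # terminates TLS via Let's Encrypt
    image: caddy:2
    restart: unless-stopped
    ports: ["80:80", "443:443"]
    command: caddy reverse-proxy --from n8n.example.com --to n8n:5678
volumes:
  n8n_data:
```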

And when something breaks…

You’re back to debugging infra instead of building workflows.

That honestly killed the whole experience for me.

What made it worse was hosting.

I used Hostinger and got locked into a yearly plan. Paid upfront… and then realized I wasn’t even happy with it.

Couldn’t switch easily. Just stuck.

So not only was setup painful, I was also committed to it.

So I tried fixing this for myself.

Built a simple way to deploy n8n in one click: server, SSL, domain, everything ready.

No DevOps. No setup loop.

And honestly… it felt different.

For the first time, I was just building workflows, not maintaining servers.

Letting a few people try it out for free.

Would really value honest feedback; I'm trying to make this actually useful.

How are you guys hosting n8n?

Are you managing your own VPS, using something managed, or doing it another way?

u/cuebicai — 1 day ago

Made a workflow that auto chops long videos into TikTok/Reels/Square clips

Been manually cutting up long videos into shorts for way too long so I built something in n8n to do it for me.

You throw a video into a Google Drive folder and the workflow handles everything. Pulls the audio, transcribes it with Whisper, asks GPT to find the best moments, renders each clip in three aspect ratios through RenderIO (cloud FFmpeg), and drops them all back in Drive organized in folders. Logs everything to a Sheet too.

Runs completely hands off once you set it up. I just drop videos in and come back to find clips ready to post.

You need Google Drive, Sheets, OpenAI, and a free RenderIO account. Works on both n8n Cloud and self hosted.

Github links

u/bluebeel — 2 days ago
▲ 6 r/n8n_ai_agents+1 crossposts

Will AI coding agents eventually replace tools like n8n?

I've been thinking about this a lot recently and wanted to hear what the community thinks.

With the rise of AI coding tools and autonomous agents, it feels like we're moving toward a world where workflows can be defined directly in code (or even natural language), instead of using visual tools like n8n.

From my perspective:

  • AI coding tools seem to offer much higher flexibility and extensibility
  • They can potentially handle edge cases and error handling in a more dynamic way
  • You’re not limited by predefined nodes or integrations

On the other hand, n8n’s biggest advantage seems to be:

  • Visualization (you can clearly see and debug the flow)
  • Lower barrier for non-developers
  • Faster iteration for certain use cases

But here’s the part I’m really curious about:

If we combine AI coding with something like codebase visualization tools (e.g. “deep wiki”-style tools that map and explain code flows), wouldn’t that reduce or even eliminate n8n’s core advantage?

In that scenario, you’d have:

  • AI generating and maintaining the workflow
  • A visual layer explaining the logic
  • Full control via code when needed

Curious to hear how others are thinking about this.

reddit.com
u/Orlando_Wong — 4 days ago
▲ 2 r/n8n_ai_agents+1 crossposts

Need a workflow: check all Instagram DMs (including requests) for 15 accounts + auto‑share 3× daily posts to Stories

I manage around 15 Instagram accounts (business and personal) and it’s getting really hard to keep up with messages. The main problem is Instagram’s “Message Requests” – all messages from people who don’t follow you go there and never show up in your main inbox unless you accept them. So right now the only way I can check everything is opening each account and manually going into Message Requests, which is time‑consuming and easy to forget. I know there’s a paid app that lets you add unlimited Instagram accounts for about 50 dollars a month, but it only shows normal DMs you’ve already accepted, not the Message Requests. That’s why I’m stuck checking each account separately every day.

What I’m looking for is some kind of workflow or automation where:

  1. All messages (both inbox and Message Requests) from all 15 accounts are checked automatically, and I get notified in one place (like Telegram or a dashboard) whenever there's a new message.

  2. For each account, every day, 5 existing posts from that account's feed are randomly picked and automatically shared to Story 3 times a day (morning, afternoon, evening).

I’m open to cloud‑based tools, no‑code automation (Make, Zapier, etc.), or even an AI agent‑style setup as long as it doesn’t clearly break Instagram’s rules.

Has anyone built something like this, or is there a tool or workflow that comes close to what I’m describing?

reddit.com
u/soamjena — 1 hour ago
▲ 21 r/n8n_ai_agents+1 crossposts

Best Ollama model for n8n workflows (RAG, file handling, reasoning) + hardware requirements?

Hi everyone,

I’m currently building automation workflows using n8n with local LLMs via Ollama, and I’m trying to choose the most suitable model for production/company use.

My main use cases:

  • RAG (retrieval-augmented generation) with documents (PDFs, text, etc.)
  • File handling & structured data extraction
  • Reasoning tasks (not just simple chat)
  • Reliable JSON outputs for automation

Constraints:

  • Running locally on a physical server (not cloud)
  • Looking for a good balance between performance, speed, and accuracy

Questions:

  1. Which Ollama models would you recommend for these use cases? (e.g., LLaMA 3, Mistral, Mixtral, DeepSeek, etc.)
  2. Which models handle RAG + structured outputs best?
  3. What are the minimum and recommended hardware specs (RAM, GPU/CPU) for smooth performance in production?
  4. Any tips for optimizing n8n + Ollama workflows (latency, batching, etc.)?

I’d really appreciate feedback from anyone using this setup in real-world scenarios.

Thanks!

reddit.com
u/Tricky_Literature397 — 4 days ago
▲ 40 r/n8n_ai_agents+1 crossposts

You probably don't need to build a full RAG pipeline for most n8n agent workflows

Most of the complexity — chunking, embeddings, vector search, query planning, reranking — exists to solve problems you might not have yet. If your goal is giving an n8n agent accurate context to make decisions, there's a shorter path.

There's a verified Pinecone Assistant node in n8n that handles the entire retrieval layer as a single node. I used it to build a workflow that answers questions about release notes mid-execution — no pipeline decisions required.

Here's how to try it yourself:

  1. Create an Assistant in the Pinecone console here.
  2. In n8n, open the nodes panel, search "Pinecone Assistant", and install it
  3. Import this workflow template by pasting this URL into the workflow editor: https://raw.githubusercontent.com/pinecone-io/n8n-templates/refs/heads/main/assistant-quickstart/assistant-quickstart.json
  4. Set up your Pinecone and OpenAI credentials — use Quick Connect or get a Pinecone API key here.
  5. Update the URLs in the Set file urls node to point at your own data, then execute to upload
  6. Use the Chat input node to query: "What support does Pinecone have for MCP?" or "Show me all features released in Q4 2025"

The template defaults to fetching from URLs but you can swap in your own, pull from Google Drive using this template, or connect any other n8n node as a data source.

Where this gets interesting beyond simple doc chat: wiring it into larger agent workflows where something needs to look up accurate context before deciding what to do next — routing, conditional triggers, automated summaries. Less "ask a question, get an answer" and more "agent consults its knowledge base and keeps moving."

What are you using it for? Curious whether people are keeping this simple or building it into more complex flows.

u/http418teapot — 3 days ago

Tips - How to build an AI agent to automate your emails

I have been playing around with email automation lately and wanted to share some ways to get this running: one where you can still do everything yourself and one way where you just want it running without the overhead.

Option 1: n8n

  1. Gmail trigger. Add a Gmail trigger node and watch for new incoming emails. During configuration, disable the simplified view so your agent gets the full email body instead of a shortened version.
  2. Extract the email body content. Add a "Set" node that pulls out the email body text and thread ID. This makes those two values easy to reference in later nodes instead of copy/pasting expressions.
  3. Classify the email. Add an OpenAI node (GPT-4o mini works well here) and prompt it to read the email and return a simple true/false: does this email represent a customer support request (refund, order status, etc.) or not? Tell it to respond in JSON so the next step can parse it reliably.
  4. Branch on the result. Add a Switch node. If the classification was true, route to the AI agent. If false, route straight to a notification.
  5. Agent and knowledge base. This is the center of the whole setup. The AI agent needs two things: a Pinecone vector store with your FAQs and company policies so it can look up the right answer, and a Gmail action to create a draft reply in the original email thread. It reads the email, queries your knowledge base, and writes a draft. Everything gets reviewed before sending.
  6. Notifications. Add a Telegram node at the end of the flow, firing on both outcomes: new draft created ("Draft ready for your review") and non-support email ("New email, not support-related").
  7. Test it. Send yourself an email that looks like a support request and watch the workflow: check that the draft lands in Gmail and the Telegram message comes through.
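For the classification step, the prompt-and-parse pair might look like this — a sketch with illustrative prompt wording, plus a parser that tolerates the code fences models love to add:

```python
import json

# illustrative prompt for the classification node — append the email body after it
CLASSIFY_PROMPT = (
    'Read the email below and answer with JSON only, in the form '
    '{"is_support": true|false}. An email counts as support if it is a '
    "refund request, order status question, or similar customer issue.\n\n"
    "EMAIL:\n"
)

def parse_classification(llm_reply: str) -> bool:
    """Pull the JSON object out of the reply, even if the model
    wrapped it in a code fence or added stray text around it."""
    start = llm_reply.find("{")
    end = llm_reply.rfind("}")
    return bool(json.loads(llm_reply[start:end + 1])["is_support"])
```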

Total setup time is a few hours in some cases, mostly spent on the OAuth connections and loading your knowledge base into Pinecone.

Option 2: ZooClaw

If setting up OAuth flows, Pinecone, and OpenAI keys sounds like more than you want to deal with, ZooClaw can run the same workflow through a chat interface without all the node building.

Setup time is just one-time Google OAuth authorization so ZooClaw can access your Gmail, roughly 5 minutes:

  1. Go to Google Cloud Console and create a project, then enable the Gmail API.
  2. Under APIs & Services -> Credentials, create an OAuth 2.0 Client ID (Desktop App type) and download client_secret.json.
  3. Upload that file to ZooClaw. It runs the authorization flow and gives you a URL to open in your browser and approve.

Then you just tell your ZooClaw assistant what you need, like: "Read my inbox every 30 mins. If it's a customer support email: Draft a reply from my FAQ doc, let me know on Telegram when done. Otherwise just a quick heads up is enough."

ZooClaw figures out the rest, reads your inbox, classifies the email, reads your knowledge documents, creates a draft reply, and sends the notification.

Both ways work fine. n8n gives you an overview of the whole workflow; every step is explicit and auditable, which helps if you're handing this to a team or need to debug an issue. ZooClaw is faster to set up, shows each step the assistant performs in real time, and can be adjusted via conversation instead of going back into a flow editor.

reddit.com
u/Ready_Evidence3859 — 3 days ago

How to Use n8n + Claude Opus 4.7 to Post Viral TikTok Content from Reddit

Step 1 Scrape Reddit for trending content Set up an n8n Reddit node pointed at subreddits in your niche (r/LifeProTips, r/stories, r/AITA, etc.). Filter by "Hot" or "Top/24hr" so you're always pulling what's already proven to be engaging.

Step 2 Send posts to Claude Opus 4.7 for rewriting Pipe the Reddit titles and body text into an n8n HTTP node calling the Anthropic API with claude-opus-4-7. Prompt it to rewrite the content as a punchy, scroll-stopping TikTok script with a hook, 3 key points, and a CTA.
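The HTTP node's JSON body for that call would look roughly like this — Messages API shape; the model id is the one named in the post, so substitute whatever you actually run:

```python
def build_rewrite_request(title: str, body: str) -> dict:
    """JSON body for the Anthropic Messages API call in the HTTP node.
    Headers (x-api-key, anthropic-version, content-type) go on the node itself."""
    return {
        "model": "claude-opus-4-7",
        "max_tokens": 1024,
        "messages": [{
            "role": "user",
            "content": (
                "Rewrite this Reddit post as a punchy, scroll-stopping TikTok "
                "script with a hook, 3 key points, and a CTA.\n\n"
                f"TITLE: {title}\n\nBODY: {body}"
            ),
        }],
    }
```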

Step 3 Generate slide visuals Take Opus 4.7's output and send each point to a design API like Canva, Adobe Express, or a custom HTML-to-image renderer. Each slide = one bold sentence, clean background, large text.

Step 4 Stitch slides into a video Since TikTok only accepts video uploads (not carousels via API), use an n8n node to call FFmpeg or a service like Creatomate to combine your slides into a short MP4 with transitions and background music.

Step 5 Auto-post to TikTok Use n8n's TikTok node or an HTTP request to TikTok's Content Posting API to upload the finished video on a schedule: peak times like 7am, 12pm, and 7pm in your target timezone.

How do you make amazing designs? Well, Opus is getting really good at that, so I don't wanna babble on with a full guide here, but if you really want to hear about it, comment "design" and I'll send you the doc I made.

reddit.com
u/Weak-Neck-5126 — 4 days ago
▲ 25 r/n8n_ai_agents+1 crossposts

Built an AI agent that tells you whether an npm package is worth using (n8n + Firecrawl challenge)

I recently worked on the “Build the Ultimate Web Crawler Agent with Firecrawl” (March n8n challenge) and ended up building something pretty useful for dev workflows.

💡 The problem

If you’ve ever evaluated an npm package, you know the drill:

  • Check npm downloads
  • Open GitHub → stars, issues, commits
  • Look for activity / maintenance
  • Compare alternatives

Takes like 15–30 minutes per package

🚀 What I built

I created an AI-powered package evaluator that answers:

👉 “Should I use this package or not?”

You just input a package name, and it gives you a full breakdown.

⚙️ How it works

  • 🔥 Firecrawl → finds npm + GitHub URLs dynamically
  • GitHub API → stars, issues, last commit
  • npm API → weekly downloads
  • 🤖 AI agent → converts raw data into insights + recommendation

📊 Output (this is the interesting part)

Instead of just numbers, it gives:

  • Risk score → Low / Medium / High
  • Adoption level → Very popular / Niche
  • Issue health
  • Alternatives (with trade-offs)
  • Final recommendation → Use / Consider / Avoid

Also separates:

  • Observed facts (data)
  • Inferred insights (AI reasoning)
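The risk label could come from a heuristic like this — thresholds entirely made up for illustration, not the workflow's actual scoring:

```python
from datetime import datetime, timezone

def risk_score(weekly_downloads: int, stars: int, last_commit_iso: str) -> str:
    """Toy Low/Medium/High heuristic from npm downloads, GitHub stars,
    and days since the last commit."""
    days_stale = (datetime.now(timezone.utc)
                  - datetime.fromisoformat(last_commit_iso)).days
    points = 0
    points += 0 if weekly_downloads > 100_000 else 1 if weekly_downloads > 1_000 else 2
    points += 0 if stars > 5_000 else 1 if stars > 200 else 2
    points += 0 if days_stale < 90 else 1 if days_stale < 365 else 2
    return "Low" if points <= 1 else "Medium" if points <= 3 else "High"
```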

😅 Challenges I hit

  • Scraping npm/GitHub pages didn’t work well (JS-rendered data missing)
  • AI-only approach was slow and inconsistent
  • Mapping correct GitHub repo dynamically was tricky
  • Handling invalid packages + edge cases took more effort than expected

🔑 Biggest takeaway

The best combo ended up being:

👉 Firecrawl (discovery) + APIs (reliable data) + AI (reasoning)

🤔 Curious

Would you actually use something like this before choosing a library?

Or do you prefer manual evaluation?

Happy to share more details if anyone’s interested 👍

Check out the workflow here : https://n8n.io/workflows/14911

u/divyanshu_gupta007 — 4 days ago
▲ 7 r/n8n_ai_agents+4 crossposts

Been working on a side project that needs a persistent knowledge layer on top of Hermes Agent and I'm trying to figure out the cleanest way to package it as a skill.

For context - Hermes memory handles personal context well (what you do, your preferences, session history) but my project needs something separate for external knowledge. Domain-specific sources, research docs, things the agent needs to know that aren't tied to a specific conversation.

I've been studying how some community skills handle this and the pattern that keeps coming up:

- Ingest external sources on command

- Compile them into structured queryable format

- Expose via MCP so the agent can call it natively

- Let answers compound into new pages over time

A few specific questions for anyone who's gone through this:

**1. Skill vs MCP server vs background infra**

Which integration path actually works best in practice? I'm leaning toward MCP server because it keeps things modular but curious if others found a different approach cleaner.

**2. Semantic search vs keyword retrieval**

Is embedding-based search worth the overhead for a Hermes skill or is keyword search good enough for most use cases?

**3. Quality control**

As the knowledge base grows, how are people keeping it clean? Broken links, orphaned pages, inconsistencies — does anyone have an automated approach for this?

**4. Source attribution**

Has anyone built citation tracking into their knowledge layer? The "did the AI make this up" problem feels important to solve at the infrastructure level rather than prompting around it.

**5. Multi-provider support**

Is it worth building provider-agnostic from day one or is it premature optimization at the early stage?

I know llm-wiki-compiler just shipped v0.2.0 and seems to have tackled most of these — paragraph-level source citations, automated linting, semantic search, MCP server, Obsidian integration, multi-provider support. Might just study that codebase as a reference.

But genuinely curious how others have approached this. What worked, what didn't, and what you'd do differently.

reddit.com
u/Final_Elevator_1128 — 3 days ago

FREE WORK

Hey guys, I'm looking to get some experience on building automations on n8n. I will build you any automation that you need for FREE, in exchange for a testimonial.

reddit.com
u/Temporary-Mine2908 — 4 days ago