r/n8n

I built a WhatsApp + voice AI agent in n8n that handles 90% of customer service. Sold the business, the buyer kept it running without me
▲ 106 r/n8n

https://preview.redd.it/m1bxfpxxs4tg1.jpg?width=2512&format=pjpg&auto=webp&s=d5328c59e7a695a2763b29235216a064a28fcc8a

I owned a device repair shop for 16 years. I was losing 80+ hours a month answering the same WhatsApp messages: "how much to fix my screen?", "when can I pick it up?", "do you have this part?"

So I built an AI agent that handles both WhatsApp and voice calls. It's been running in production for over a year at <€200/month.

What it does:

Customers message on WhatsApp or call the shop. An n8n router classifies what they need and routes to one of 4 specialized sub-agents:

  • One books appointments by checking real availability and confirming the slot
  • One gives accurate quotes by looking up the actual device model and repair type in the database
  • One checks stock in real-time and places internal orders when parts are missing
  • One escalates to a human with full conversation context when it can't handle something

The voice channel goes through ElevenLabs. The sub-agents don't care if they were triggered by text or voice — same logic, different entry point.
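For the curious, the router step reduces to classify-then-dispatch. A minimal sketch (the keyword classifier here is a stub and all handler names are hypothetical; the real workflow uses an LLM for classification):

```javascript
// Minimal router sketch: classify the incoming message, then hand it to
// one of four specialized handlers. In production the classify() step is
// an LLM call; here it's a keyword stub so the shape is visible.
const HANDLERS = {
  booking: (msg) => `booking agent handles: ${msg}`,
  quote: (msg) => `quote agent handles: ${msg}`,
  stock: (msg) => `stock agent handles: ${msg}`,
  escalate: (msg) => `human escalation with context: ${msg}`,
};

function classify(message) {
  const m = message.toLowerCase();
  if (/(book|appointment|pick up|when)/.test(m)) return "booking";
  if (/(how much|price|quote|cost)/.test(m)) return "quote";
  if (/(stock|part|have this)/.test(m)) return "stock";
  return "escalate"; // anything unrecognised goes to a human
}

function route(message) {
  return HANDLERS[classify(message)](message);
}
```

The same `route()` entry point can serve both channels: the WhatsApp webhook and the voice transcript just call it with different inputs.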

What surprised me:

The "dumb" decisions mattered more than the AI:

  • Using a different model per agent saved a lot of money. The booking agent doesn't need GPT-4 — a fast cheap model works fine for parsing "next Tuesday at 3pm". The quote agent needs accuracy so it gets the better model.
  • Pseudo-streaming on WhatsApp made a huge difference in perceived speed. Instead of sending one long message, I split the response into sentences and send them one by one. Users see typing indicators and feel like they're talking to someone.
  • The think tool on the router (making it reason before picking a tool chain) cut errors by roughly half.
  • Airtable as single source of truth meant the agents were always working with real data — prices, stock, bookings. No sync issues.
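The pseudo-streaming trick above can be sketched in a few lines (splitIntoSentences and the send call are simplified stand-ins for the actual WhatsApp API plumbing):

```javascript
// Pseudo-streaming: split one long reply into sentences and send them
// one by one, so the user sees several short messages arriving instead
// of a single wall of text.
function splitIntoSentences(text) {
  // naive split on sentence-ending punctuation followed by whitespace/end
  return text.match(/[^.!?]+[.!?]+(\s|$)/g)?.map((s) => s.trim()) ?? [text];
}

async function pseudoStream(text, sendMessage, delayMs = 800) {
  for (const sentence of splitIntoSentences(text)) {
    await sendMessage(sentence); // in production: the WhatsApp send call
    await new Promise((r) => setTimeout(r, delayMs)); // mimics typing pauses
  }
}
```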

Results after 12+ months:

  • ~90% of interactions handled without a human
  • ~80 hours/month freed up
  • Running cost: <€200/month total
  • Response time: <30 seconds
  • Available 24/7 instead of just store hours

The real test:

I sold the business in 2025. The buyer had zero technical knowledge. All the AI systems — the WhatsApp agent, the voice agent, the automations — kept running without me. That's when I knew the architecture was right.

What I'd do differently today:

  • Claude instead of GPT for the router. Better at structured tool calling in my experience
  • Observability from day 1 instead of relying on n8n execution logs

I open-sourced the workflows if anyone wants to look at the actual implementation. Happy to answer questions.

Workflows: https://github.com/santifer/jacobo-workflows

reddit.com
u/Beach-Independent — 14 hours ago
I automated my entire X content strategy with n8n. here's everything, open sourced
▲ 31 r/n8n


been running this for a few months now and figured I'd just put it all out there

three workflows. all connected. all running on their own.

what's in the repo:

X Posting Bot - 57 nodes, 8 scheduled slots throughout the day. each slot has a different content type (wildcards, exploits, experiments, CTAs). it pulls context from airtable, runs it through an AI agent to generate a tweet, then puts the draft through a self-critique loop. if it passes the quality check it posts. if it fails, it retries. if it hits max retries it skips the slot and pings me on telegram. the whole thing runs without me touching it.
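the draft → critique → retry loop above, stripped to a skeleton (generateDraft and critique stand in for the AI agent and quality-check calls):

```javascript
// Self-critique loop skeleton: generate a draft, score it, retry on
// failure, and give up (returning null) after maxRetries attempts so the
// slot can be skipped and an alert sent instead.
function draftWithCritique(generateDraft, critique, maxRetries = 3) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    const draft = generateDraft(attempt);
    const { pass, feedback } = critique(draft);
    if (pass) return draft;        // quality check passed -> post it
    console.log(`attempt ${attempt} failed: ${feedback}`);
  }
  return null; // caller skips the slot and pings Telegram
}
```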

https://preview.redd.it/9ts7ffibd6tg1.png?width=1375&format=png&auto=webp&s=e3b9ed9c16fb375ca4cfb170f957c1bd9cdb18af

Research Pipeline - feeds the posting bot. runs daily and weekly, scrapes X via apify for the topics I care about, normalises the data, stores it in airtable. the posting bot pulls from this when generating content so it's always working with fresh context.

https://preview.redd.it/o0mu3k8ed6tg1.png?width=1390&format=png&auto=webp&s=ca29bb860b1ef77cf79c4660b8610f0d7f1481ce

Learning Workflow - closes the feedback loop. pulls performance data from recent posts, runs it through claude to extract what's working and what isn't, writes the patterns back to airtable. the posting bot reads these learnings before generating new tweets. over time it gets better.

https://preview.redd.it/lf9p2smfd6tg1.png?width=1465&format=png&auto=webp&s=877b11a951793a7e8b1e0ded8129f849d5c51b85

how I built them:

didn't build these manually. used claude code with the synta MCP

the synta MCP gives claude actual read/write access to your n8n instance - it's not just docs lookups, it's building workflows directly, importing them, debugging when nodes break, and self-healing without you going back in manually

the self-healing thing is genuinely what made this practical. when something breaks it catches the failure, fixes the node, re-triggers to verify, and keeps going. I'm not babysitting the canvas

(before someone says "just use the n8n MCP it's free" - I used it. started with it at the agency. it's like comparing a butter knife to a fucking chainsaw - and if you're still confused, the n8n MCP is the butter knife. I genuinely don't have the time or energy to go into why right now, so if you want to argue about it just go try synta yourself and come back to me)

if you want to set this up yourself:

grab the JSONs from the repo, import into n8n, swap in your own API keys and airtable base

if you want to actually adapt and build on top of it I'd recommend doing it the same way - claude code + synta MCP. give claude your API tokens, tell it what you want to change, and it'll handle the wiring. takes 5-10 mins to get up and running vs spending a day figuring out the node connections yourself

repo here and in links: https://github.com/MrNozz/n8n-workflows-noz

happy to answer questions on any of the specific nodes or how the critique loop works

reddit.com
u/Professional_Ebb1870 — 8 hours ago
▲ 2 r/n8n

Business verified, but I still can't register a phone number or generate access tokens for WhatsApp Cloud API

u/LiinooLikePerico — 1 hour ago
I built a reusable Telegram approval bot for n8n — stops AI-generated content from going live without review (workflow JSON included)
▲ 1 r/n8n


I saw several requests in the thread "Human-in-the-loop for AI content" asking how to pause a workflow until a person signs off. The solution I landed on is a pair of n8n workflows that let you generate a Twitter thread and a LinkedIn post with GPT-4o-mini, then gate the publishing step behind a Telegram approval button. The full repo is open-source: https://github.com/enzoemir1/n8n-telegram-approval

How the main workflow works

1. Webhook trigger -- receives a JSON payload containing the raw blog post.

2. Parallel OpenAI nodes -- each node gets a platform-specific system prompt.
   Twitter: "Write a hook, compress the content, and split into a 5-tweet thread."
   LinkedIn: "Turn the article into a professional story, keep paragraph breaks."

3. Telegram node -- sends the two drafts to a private chat with an inline keyboard:

   {
     "reply_markup": {
       "inline_keyboard": [
         [
           { "text": "Approve", "callback_data": "approve" },
           { "text": "Reject", "callback_data": "reject" }
         ]
       ]
     }
   }

4. Wait node -- this is the tricky part. Set the mode to "Wait for Webhook" with a webhook suffix for the Telegram callback. The workflow instance pauses and resumes exactly where it left off once the user clicks a button.

5. IF node -- routes the flow based on the callback value.
   Approve: calls the publishing APIs (Twitter, LinkedIn).
   Reject: logs the decision and ends the run.

A second, simpler workflow strips out the OpenAI steps and just forwards any incoming data to the Telegram approval step -- useful for non-text use cases like invoice approval or deployment gates.

Running the workflow costs roughly $0.003 per execution with gpt-4o-mini. Adding a few few-shot examples to the prompts dropped the rejection rate from ~30% to ~10%.

How are you handling quality control for AI-generated content in your own automations? Any edge cases this pattern doesn't cover?
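For reference, the resume-and-branch part boils down to this: Telegram resumes the Wait node with a callback_query update, and the IF node branches on its data field. In plain code (publish and logRejection are placeholders for the actual API calls):

```javascript
// The Wait node resumes with Telegram's callback_query update; pull out
// the button value and route on it.
function routeApproval(update, actions) {
  const decision = update?.callback_query?.data; // "approve" or "reject"
  if (decision === "approve") {
    actions.publish();      // calls the Twitter/LinkedIn publishing APIs
    return "published";
  }
  actions.logRejection();   // log the decision and end the run
  return "rejected";
}
```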

u/SignificantLime151 — 2 hours ago
▲ 1 r/n8n

How to set up LinkedIn & Reddit credentials in n8n

I have been trying to build automations for these platforms recently, but I am facing problems getting their credentials to connect to n8n. Any helpful suggestion/guidance will be highly appreciated. Thank you!!

reddit.com
u/avish456 — 2 hours ago
Built a Telegram bot that scans food labels and tells you how unhealthy they are (n8n + OpenAI)
▲ 2 r/AIStartupAutomation+1 crossposts


I built a Telegram bot that analyzes packaged food labels just by sending a photo.

👉 GitHub: https://github.com/BigDoor-ai/n8n/tree/main/workflows/Read%20Food%20Labels%20via%20Telegram

It extracts ingredients + nutrition info and breaks the product down into:

- Sugar

- Saturated Fat

- Unhealthy Oils

- Harmful Preservatives

- Healthy Components

Then it gives:

- A health score (0–100)

- A verdict (Healthy / Moderate / Poor)

- Key concerns + positives

- A pie chart showing the risk breakdown

Everything is built using:

- n8n (workflow automation)

- OpenAI (vision + analysis)

- Google Sheets (as a simple database)

- QuickChart (for generating the pie chart)

You just send a product photo on Telegram and get the analysis instantly.
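I don't know the exact scoring logic used here, but a weighted-penalty sketch of a 0–100 score like the one described could look like this (the weights, category names, and verdict thresholds are all made up):

```javascript
// Hypothetical scoring sketch: start at 100 and subtract weighted
// penalties for each risk category the label analysis reports (0-10
// severity each), then clamp and map the score to a verdict.
const WEIGHTS = { sugar: 3, saturatedFat: 3, unhealthyOils: 2, preservatives: 2 };

function healthScore(risks) {
  let score = 100;
  for (const [category, weight] of Object.entries(WEIGHTS)) {
    score -= weight * (risks[category] ?? 0);
  }
  return Math.max(0, Math.min(100, score));
}

function verdict(score) {
  if (score >= 70) return "Healthy";
  if (score >= 40) return "Moderate";
  return "Poor";
}
```

Keeping the score deterministic like this (LLM extracts severities, code computes the number) also helps with the hallucination problem: the model never invents the final score.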

I also made the full workflow public so anyone can replicate or improve it.

Would love feedback, especially on:

- Improving the scoring logic

- Better ways to structure the food database

- Reducing hallucinations from label parsing

Also open to ideas on turning this into a real product.

u/vishesh_allahabadi — 7 hours ago
▲ 3 r/n8n

Free GPT-4.1 API access for ~12hrs — works directly with n8n's OpenAI node

Hey n8n folks,

Stress testing my OpenAI-compatible reverse proxy gateway. Since it's fully OpenAI-compatible, it just drops into n8n's OpenAI node with zero config changes — just swap the base URL.

Available models:

  • gpt-4.1 — Latest, 1M context
  • gpt-4.1-mini / gpt-4.1-nano
  • o4-mini — reasoning
  • gpt-4o-mini-tts — TTS node compatible

Comment your workflow type and I'll DM the endpoint + key.
(Non-commercial side project, no paid tier)

reddit.com
u/NefariousnessSharp61 — 7 hours ago
▲ 2 r/AIStartupAutomation+1 crossposts

I'm building a stress test workflow to benchmark document extraction – here's what I'm testing

👋 Hey everyone,

Over the past few weeks I've been sharing workflows that use document extraction for things like currency conversion, invoice classification, duplicate detection, and Slack-based approvals. One question that keeps coming up – from myself and from people trying these workflows – is: how far can you push the extraction before it breaks?

Clean PDFs are easy. Every solution handles those. But what about a scanned invoice with coffee stains? A photo taken at an angle? A completely different layout than what the pipeline was trained on? A document that looks like someone used it as a coaster, scribbled notes all over it, and then left it in the rain?

I wanted to answer that properly, so I'm building a stress test workflow.

The idea:

Upload a document through a web form, extract the data, compare every single field against the known correct values, and get a results page with a per-field pass/fail breakdown and an overall accuracy percentage. Since the test always uses the same invoice data, the ground truth is fixed – you're purely measuring how well the extraction handles degraded quality and layout changes.
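The per-field comparison at the heart of this can be quite simple (the field names here are just examples):

```javascript
// Compare extracted fields against fixed ground truth and compute a
// per-field pass/fail breakdown plus an overall accuracy percentage.
function scoreExtraction(extracted, groundTruth) {
  const results = Object.entries(groundTruth).map(([field, expected]) => ({
    field,
    expected,
    actual: extracted[field] ?? null,
    pass: String(extracted[field]).trim() === String(expected).trim(),
  }));
  const passed = results.filter((r) => r.pass).length;
  return { results, accuracy: Math.round((100 * passed) / results.length) };
}
```

Exact string equality is deliberately strict; a real benchmark might normalise dates and currency formats first so a cosmetic difference doesn't count as a miss.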

The test documents I'm preparing:

I'm going to run four versions of the same invoice through the workflow:

  1. Original – clean PDF, the baseline. Should be 100%.
  2. Layout Variant A – same data, completely different visual layout
  3. Layout Variant B – another layout, different structure again
  4. Version 7 ("The Survivor") – this one has coffee stains, pen annotations ("WRONG ADDRESS? check billing!"), scribbled-out sections, burn marks, and a circled-over amount due field. If anything can extract data from this, I'll be impressed.

I spent some time thinking about what makes a good stress test. Different layouts test whether the extraction actually reads the document or just memorises positions. The destroyed version tests OCR resilience when half the text is obstructed. Together they should give a pretty honest picture of where a solution actually stands.

What's coming next week:

I'm going to build out the full workflow, run all four documents through it, and share the results here – accuracy percentages across every version, including the destroyed one. I'll also share the workflow JSON, so anyone can import it and run their own benchmarks.

The workflow will be solution-agnostic too – you'll be able to swap out the extraction node for an HTTP Request node pointing at any other API, and the entire validation chain works identically. Good way to benchmark different tools side by side.

Curious to see where it breaks. Would love to hear if anyone else has been stress testing their extraction setups, or if you have ideas for even nastier test documents.

Best,
Felix

reddit.com
u/easybits_ai — 6 hours ago
▲ 1 r/n8n

Download pictures from a Website

I am a complete newbie in n8n.
n8n runs locally on my Proxmox PC.
My first project is an automated download from a website with a lot of car pictures:

https://unsplash.com/de/s/fotos/cooles-auto

https://unsplash.com/de/fotos/ein-orange-weisses-auto-das-vor-einem-gewasser-geparkt-ist-Ynycw1OzZdI

This is the first pic in full resolution.
Now I'd like to download the pic, go to the next pic in the gallery, download that, and so on.

How can I automate this?
Do I need an AI model for this?

My thoughts :
I need to scrape the static site and filter for the direct link.
For the first pic, this is
https://plus.unsplash.com/premium_photo-1664303847960-586318f59035

"Save as" downloads the pic in full resolution. Nice.

I have no idea how to do this in n8n.
And how do I jump to the next page?
In the browser I click the arrow and repeat the process.

Thank you in advance

reddit.com
u/Guilty_Elk8070 — 4 hours ago
▲ 2 r/n8n

Disk Space

I’m running n8n locally now. I said here before that I had a problem making a WhatsApp and Telegram chatbot: every time I run the trigger it says “Invalid Parameter”. I saw people saying to use ngrok and Docker, so I tried to download them. Ngrok was fine, but not Docker: it requires a lot of disk space and I don’t have enough for it. And I don’t want to pay for any subscriptions at the moment because I’m just testing things and building my first workflow. So I’m wondering if any of you have a good solution for that.

Thanks.

reddit.com
u/Forsaken_Clock_5488 — 7 hours ago
Stop wasting n8n executions on Google Drive monitoring
▲ 2 r/n8n


The built-in Google Drive trigger polls every minute. That's 1,440 executions a day per folder. Imagine monitoring 6 or 7 folders...
Your n8n cloud subscription can't handle that :-D

Here's how to replace it with a real-time webhook that only triggers when something actually changes. Zero polling, zero wasted executions.

Here are both workflows:
https://github.com/Peter-Aistralis/YouTube/blob/main/1_Google_drive_watch
and
https://github.com/Peter-Aistralis/YouTube/blob/main/1.Google_Drive_Receive
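For anyone wondering what the webhook side involves: Google Drive push notifications are registered with the Drive API's files.watch endpoint, which tells Google to POST to your n8n Webhook node on changes. A sketch of building that request (the channel id and webhook URL are placeholders, and note that channels expire and need periodic re-registration):

```javascript
// Sketch of registering a Drive push-notification channel so Google
// calls your n8n webhook instead of n8n polling Drive.
function buildWatchRequest(fileId, webhookUrl, accessToken) {
  return {
    method: "POST",
    url: `https://www.googleapis.com/drive/v3/files/${fileId}/watch`,
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      id: `n8n-watch-${fileId}`, // your channel id (must be unique)
      type: "web_hook",
      address: webhookUrl,       // the n8n Webhook node's production URL
    }),
  };
}
```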

Happy to answer questions.

Peter

youtu.be
u/Steve_Ignorant — 8 hours ago
From Google Sheets to Slack alerts: My Zapier-free n8n recipe (step-by-step)
▲ 1 r/n8n


Just migrated a client off Zapier and saved them $20/month with a 5-node n8n workflow. Took 30 min to build, runs on my self-hosted instance (totally free, but n8n.cloud free tier works too).

Here's the flow:

1. Google Sheets "Watch Rows" -- polls every 5 min for new sales rows.

2. IF node -- only lets Status === "Closed" through.

3. HTTP Request -- hits our CRM to grab the customer's full name.

4. Function node -- formats the Slack message:

   const amount = $json["Amount"];
   const name = $json["CustomerName"];
   return [{
     json: {
       text: `*New sale!*\n*Amount:* $${amount}\n*Customer:* ${name}\n*Status:* Closed`
     }
   }];

5. Slack node -- posts to #sales with the markdown.

No rate-limit headaches, no Zapier task caps, and I can version-control the whole thing in git. The client was shocked when I told them it's free forever.

All eight of my free n8n workflows are on GitHub: github.com/enzoemir1/autoflow-n8n-workflows

Anyone else ditched Zapier for n8n?

u/SignificantLime151 — 5 hours ago
▲ 3 r/n8n

I need help

Hi! I’m currently learning n8n by following tutorials and building workflows step by step. However, I’ve noticed that sometimes I’m just following along without fully understanding all the details of the nodes and their internal configurations.

For example, when I use the Telegram node, I don’t always understand what each field does or the purpose behind certain options. Because of that, when I try to rebuild the same workflow (or something similar) on my own, I often find myself going back to the video to double-check the configuration instead of confidently recreating it from memory.

I really want to move beyond just copying steps and start actually understanding how everything works so I can build workflows independently and troubleshoot issues on my own.

So I wanted to ask:
What would be the best learning path or strategy to deeply understand n8n nodes and workflows? Also, if you have any recommended resources—such as courses, documentation, practice methods, or even personal tips that helped you master n8n—I would really appreciate it.

Thanks a lot in advance 🙏

reddit.com
u/Less-Knowledge-5061 — 11 hours ago
▲ 7 r/n8n

Automated News-Driven Crypto Trade Opportunity Scanner (POC Project)

Crypto News + AI Trade Opportunity Scanner (POC)

Hey everyone 👋

I built a small end-to-end data pipeline that:

  • Detects active crypto (gainers, losers, volume)
  • Pulls news from multiple sources
  • Uses AI to explain why prices are moving
  • Outputs structured insights via an API (n8n backend)

👉 Goal: Not just what is moving — but why it’s moving

🔗 Demo

https://macaies.github.io/Crypto-Market-Intelliggence-Pipeline/crypto-dashboard.html

💻 GitHub

https://github.com/Macaies/Crypto-Market-Intelliggence-Pipeline

⚠️ Just a POC / learning project — not financial advice

Would love feedback on:

  • Pipeline design
  • Features to add
  • Making it production-ready

https://reddit.com/link/1sbx0rv/video/0qc0f0aa23tg1/player

reddit.com
u/Able_Sock2086 — 19 hours ago
▲ 3 r/n8n

Testing AI workflows? Qwen3.6 Plus is currently 100% FREE on OpenRouter. Perfect for dev/test loops! ⚙️🔥

Hey fellow automators,

We all know that building and debugging complex AI workflows in n8n—especially when dealing with heavy loops, JSON extraction, or testing multi-agent setups—can burn through API credits really fast.

If you are prototyping this weekend, here is a massive tip: Qwen3.6 Plus just dropped, and OpenRouter is currently offering it for 100% FREE. Zero cents per token.

It’s an ultra-high-performing proprietary model (holding its own against the big players), which means it's smart enough to actually follow strict system prompts and output reliable JSON for your next nodes.

Why it’s perfect for your n8n setups right now: You can just swap your usual OpenAI/Anthropic credentials with OpenRouter in your Advanced AI / LangChain nodes, select the free Qwen3.6 Plus model, and spam your test executions without worrying about the cost.

It’s the absolute best way to build, test, and torture your dev workflows this weekend before switching back to your production models.

Go burn those tokens! Let me know what kind of workflows you're testing it on.

reddit.com
u/Fresh-Daikon-9408 — 14 hours ago
▲ 1 r/n8n

How to upload files to CDN in n8n with one HTTP Request node

Quick tip if you need file uploads in your n8n workflows.

Workflow JSON: https://gist.github.com/mussemou/5596f9e37ab06fcb4712dcdd8d6ff102

I built FilePost (https://filepost.dev), a simple file hosting API. Wire it up in n8n with a single HTTP Request node:

Method: POST

URL: https://filepost.dev/v1/upload

Header: Authorization: Bearer YOUR_API_KEY

Body: Form Data with your file

Response: permanent public CDN URL (Cloudflare)

No S3 buckets, no IAM roles, no bandwidth fees. Free tier available.
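Outside n8n, the same upload is a few lines of Node (this mirrors the HTTP Request node config above; the exact shape of the JSON response is FilePost's to define, so treat the final line as a sketch):

```javascript
// Build the upload request separately so its shape is easy to inspect,
// then send it with fetch. Matches the node config described above:
// POST + Bearer auth + multipart form data.
function buildUploadRequest(apiKey, file, filename) {
  const body = new FormData();
  body.append("file", file, filename);
  return {
    url: "https://filepost.dev/v1/upload",
    options: {
      method: "POST",
      headers: { Authorization: `Bearer ${apiKey}` },
      body,
    },
  };
}

async function uploadToCdn(apiKey, file, filename) {
  const { url, options } = buildUploadRequest(apiKey, file, filename);
  const res = await fetch(url, options);
  return res.json(); // response includes the permanent public CDN URL
}
```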

I wrote a step by step walkthrough with screenshots here:

https://filepost.dev/blog/how-to-send-files-in-n8n-workflows

Happy to help if anyone has questions about the setup.

u/Zestyclose_Pack_8493 — 8 hours ago
n8n WhatsApp Business API
▲ 1 r/n8n


When I try to get my access token, Meta gives me a message about a problem registering my phone number. Does anyone know the solution, or the source of the problem?

u/AYU_UB — 8 hours ago
email_management_automation (Part 2) — now with frontend dashboard
▲ 1 r/n8n


Most emails are just… noise.

So I built a system that automatically reads, classifies, and turns emails into actual actions.

What it does:

  • Pulls emails via IMAP
  • Uses AI to classify them (support, invoice, meeting, etc.)
  • Extracts useful data (priority, summary, attachments)
  • Stores everything in PostgreSQL
  • Triggers actions → Slack, Zendesk, Google Calendar
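A sketch of the classify-then-act fan-out (category names taken from the list above; the returned objects are placeholder action descriptions, not the real Slack/Zendesk/Calendar calls):

```javascript
// Map each AI-assigned category to a downstream action, with a default
// bucket so unknown categories never get silently dropped.
const ACTIONS = {
  support: (email) => ({ target: "zendesk", ticket: email.summary }),
  invoice: (email) => ({ target: "slack", channel: "#finance", text: email.summary }),
  meeting: (email) => ({ target: "calendar", title: email.summary }),
};

function dispatch(email) {
  const action = ACTIONS[email.category];
  if (!action) {
    return { target: "review-queue", reason: `unknown category: ${email.category}` };
  }
  return action(email);
}
```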

What’s new (Part 2):
I added a frontend dashboard so you can actually see what's happening behind the scenes.

  • Real-time email tracking
  • Workflow status monitoring
  • Clean UI to visualise the pipeline

👉 https://email-management-automation.vercel.app/

Still a work in progress, but aiming to make it production-ready.

Curious how others are handling email automation — any suggestions or improvements?

https://preview.redd.it/mgf9jqhtw5tg1.png?width=1207&format=png&auto=webp&s=5b7fe8358ce5bba3e057baed9576223b7d7fbf40

https://preview.redd.it/tfqx0q6k36tg1.png?width=2566&format=png&auto=webp&s=2120617d258131b29835170791e2eb2d5976c396

reddit.com
u/Able_Sock2086 — 9 hours ago
▲ 1 r/n8n

Using n8n for scraping + Claude Code for the app: is it worth it?

Hey folks,

I'm building a SaaS and started collecting data via scraping.

Instead of building a complete backend from scratch, I thought about using n8n as the automation layer:

  • n8n does the scraping (HTTP + parsing)
  • saves the data to a database (e.g. Supabase)
  • my app (built with Claude Code) just consumes that data

The idea is to reduce development time and validate faster.

reddit.com
u/Level-Shape-4344 — 10 hours ago
▲ 1 r/n8n

Looking for a Sales Partner (Commission-Based) – Automation / Data Scraping

Hello!

I’m a backend automation engineer with 5+ years of experience building high-reliability scraping & data systems.

Previously active on Upwork (100% Job score, multiple 5⭐ projects):
https://www.upwork.com/freelancers/~018fcec52dc5298a2e

Over time I’ve worked on:

• Automotive marketplaces (IAAI, Carvana, dealer networks)
• Government & legal data (UCC filings, lien records)
• Compliance platforms (backflow systems, utility data tools)
• Anti-bot / protected environments

💡 What I’m looking for:

Someone who can bring in clients (agencies, startups, businesses needing data pipelines or automation)

You handle:
• Outreach / leads / closing

I handle:
• Tech / delivery / scaling

💰 Structure:

• Commission per deal (we can agree % based on deal size)
• Potential to turn into long-term partnership

📈 Ideal clients:

• Businesses needing continuous data (pricing, inventory, leads)
• People struggling with blocked scrapers / unreliable pipelines
• Agencies outsourcing data infrastructure

If you’re good at sales and want to build something long-term instead of chasing one-off gigs, DM me.

Let’s build something serious.

reddit.com
u/Evening-Development3 — 10 hours ago
Week