
I built a reusable Telegram approval bot for n8n — stops AI-generated content from going live without review (workflow JSON included)
I saw several requests in the thread "Human-in-the-loop for AI content" asking
how to pause a workflow until a person signs off. The solution I landed on is
a pair of n8n workflows that let you generate a Twitter thread and a LinkedIn
post with GPT-4o-mini, then gate the publishing step behind a Telegram
approval button. The full repo is open-source:
https://github.com/enzoemir1/n8n-telegram-approval
How the main workflow works
1. Webhook trigger -- receives a JSON payload containing the raw blog post.
2. Parallel OpenAI nodes -- each node gets a platform-specific system prompt.
   Twitter: "Write a hook, compress the content, and split into a 5-tweet thread."
   LinkedIn: "Turn the article into a professional story, keep paragraph breaks."
3. Telegram node -- sends the two drafts to a private chat with an inline
keyboard:
```json
{
  "reply_markup": {
    "inline_keyboard": [
      [
        { "text": "Approve", "callback_data": "approve" },
        { "text": "Reject", "callback_data": "reject" }
      ]
    ]
  }
}
```
4. Wait node -- this is the tricky part. Set the mode to "Wait for Webhook"
with a webhook suffix for the Telegram callback. The workflow instance pauses
and resumes exactly where it left off once the user clicks a button.
5. IF node -- routes the flow based on the callback value.
   Approve: calls the publishing APIs (Twitter, LinkedIn).
   Reject: logs the decision and ends the run.
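To make step 5 concrete, here's a minimal sketch of the routing logic in plain JavaScript, outside n8n. The `callback_query` shape follows Telegram's Bot API; `publishAll` and `logRejection` are hypothetical stand-ins for the actual publishing and logging nodes.

```javascript
// Sketch of the IF-node routing applied to the payload Telegram POSTs
// to the Wait node's resume webhook. Helper names are illustrative.
function routeApproval(update, { publishAll, logRejection }) {
  const decision = update.callback_query && update.callback_query.data;
  if (decision === "approve") {
    publishAll(); // would call the Twitter/LinkedIn APIs
    return "published";
  }
  logRejection(decision); // log the decision and end the run
  return "rejected";
}

// Example: the user tapped the "Approve" inline button
const update = { callback_query: { data: "approve" } };
console.log(routeApproval(update, {
  publishAll: () => {},
  logRejection: () => {},
})); // → "published"
```

Anything other than an explicit "approve" falls through to the reject branch, so an unexpected callback value can never publish by accident.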
A second, simpler workflow strips out the OpenAI steps and just forwards any
incoming data to the Telegram approval step -- useful for non-text use cases
like invoice approval or deployment gates.
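For the generic version, the only real work is turning an arbitrary JSON payload into a readable Telegram message before the approval buttons go out. A sketch of that step (the function name and message layout are my own, not from the repo):

```javascript
// Sketch: flatten an arbitrary payload into the body of the Telegram
// approval message. Field names and layout are illustrative only.
function formatApprovalMessage(title, payload) {
  const lines = Object.entries(payload).map(
    ([key, value]) => `${key}: ${JSON.stringify(value)}`
  );
  return `${title}\n\n${lines.join("\n")}`;
}

console.log(formatApprovalMessage("Invoice approval", {
  vendor: "Acme",
  amount: 1249.5,
}));
// → Invoice approval
//
//   vendor: "Acme"
//   amount: 1249.5
```

In n8n this would live in a Code node right before the Telegram node, so the approval chat shows the key fields instead of a raw JSON dump.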
Running the workflow costs roughly $0.003 per execution with gpt-4o-mini.
Adding a handful of few-shot examples to each prompt dropped my rejection rate
from ~30% to ~10%.
How are you handling quality control for AI-generated content in your own
automations? Any edge cases this pattern doesn't cover?

