r/AIStartupAutomation

A local LAN radio station that gives you ambient audio awareness of your AI coding agents

Music plays continuously. When agents finish tasks, hit errors, or need attention, a voice announces it over the stream. You hear what's happening from the kitchen, the couch, or wherever you are.

How it works

  1. Ambient music plays via Liquidsoap + Icecast
  2. External systems POST events to a webhook
  3. The Brain translates events to speech via Kokoro TTS
  4. Liquidsoap ducks the music and plays the announcement
  5. Any HTTP audio client connects to the stream
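
The event side of this flow (step 2) is easy to wire up from any script or CI hook. Here's a minimal sketch — the endpoint path (`/event`) and the payload fields are my assumptions for illustration, not taken from the repo, so check its README for the real webhook contract:

```python
import json
import urllib.request

# Hypothetical event sender: the "/event" path and the payload fields
# ("agent", "status", "detail") are assumptions, not the repo's actual API.
def build_event(agent: str, status: str, detail: str) -> bytes:
    """Serialize an agent event as JSON for the radio's webhook."""
    return json.dumps({"agent": agent, "status": status, "detail": detail}).encode()

def announce(event: bytes, url: str = "http://localhost:8000/event") -> None:
    """POST the event; the Brain speaks it and Liquidsoap ducks the music."""
    req = urllib.request.Request(
        url, data=event, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)

# Usage: announce(build_event("claude", "done", "all tests passing"))
```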

Repo: https://github.com/nmelo/agent-radio

u/nmelo — 2 hours ago

AI Prompt That Shows You Where to Find Clients

Most people fail because they don’t know where clients are.

This prompt solves that.

Prompt:

Act as a client acquisition strategist.

Your task:

- Suggest platforms
- Show how to find clients
- Create a daily routine
- Suggest outreach numbers
- Improve response rate

Service: [Insert]

Example: Content writing

Where do you usually look for clients?

u/Pt_VishalDubey — 2 hours ago

I'm building a stress test workflow to benchmark document extraction – here's what I'm testing

👋 Hey everyone,

Over the past few weeks I've been sharing workflows that use document extraction for things like currency conversion, invoice classification, duplicate detection, and Slack-based approvals. One question that keeps coming up – from myself and from people trying these workflows – is: how far can you push the extraction before it breaks?

Clean PDFs are easy. Every solution handles those. But what about a scanned invoice with coffee stains? A photo taken at an angle? A completely different layout than what the pipeline was trained on? A document that looks like someone used it as a coaster, scribbled notes all over it, and then left it in the rain?

I wanted to answer that properly, so I'm building a stress test workflow.

The idea:

Upload a document through a web form, extract the data, compare every single field against the known correct values, and get a results page with a per-field pass/fail breakdown and an overall accuracy percentage. Since the test always uses the same invoice data, the ground truth is fixed – you're purely measuring how well the extraction handles degraded quality and layout changes.
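
The per-field comparison itself is simple once the ground truth is fixed. A sketch of what the validation step could look like — field names and values here are illustrative, not the actual test invoice:

```python
# Illustrative ground truth -- the real workflow fixes this per test invoice.
GROUND_TRUTH = {
    "invoice_number": "INV-2024-0042",
    "vendor": "Acme GmbH",
    "currency": "EUR",
    "amount_due": "1250.00",
}

def score(extracted: dict) -> tuple[dict, float]:
    """Return a per-field pass/fail map plus an overall accuracy percentage."""
    results = {field: extracted.get(field) == value
               for field, value in GROUND_TRUTH.items()}
    accuracy = 100.0 * sum(results.values()) / len(results)
    return results, accuracy
```

Run against the clean baseline PDF this should come back 100%; each degraded version then shows exactly which fields break.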

The test documents I'm preparing:

I'm going to run four versions of the same invoice through the workflow:

  1. Original – clean PDF, the baseline. Should be 100%.
  2. Layout Variant A – same data, completely different visual layout
  3. Layout Variant B – another layout, different structure again
  4. Version 7 ("The Survivor") – this one has coffee stains, pen annotations ("WRONG ADDRESS? check billing!"), scribbled-out sections, burn marks, and a circled-over amount due field. If anything can extract data from this, I'll be impressed.

I spent some time thinking about what makes a good stress test. Different layouts test whether the extraction actually reads the document or just memorises positions. The destroyed version tests OCR resilience when half the text is obstructed. Together they should give a pretty honest picture of where a solution actually stands.

What's coming next week:

I'm going to build out the full workflow, run all four documents through it, and share the results here – accuracy percentages across every version, including the destroyed one. I'll also share the workflow JSON, so anyone can import it and run their own benchmarks.

The workflow will be solution-agnostic too – you'll be able to swap out the extraction node for an HTTP Request node pointing at any other API, and the entire validation chain works identically. Good way to benchmark different tools side by side.

Curious to see where it breaks. Would love to hear if anyone else has been stress testing their extraction setups, or if you have ideas for even nastier test documents.

Best,
Felix

reddit.com
u/easybits_ai — 12 hours ago
Built a Telegram bot that scans food labels and tells you how unhealthy they are (n8n + OpenAI)

I built a Telegram bot that analyzes packaged food labels just by sending a photo.

👉 GitHub: https://github.com/BigDoor-ai/n8n/tree/main/workflows/Read%20Food%20Labels%20via%20Telegram

It extracts ingredients + nutrition info and breaks the product down into:

- Sugar

- Saturated Fat

- Unhealthy Oils

- Harmful Preservatives

- Healthy Components

Then it gives:

- A health score (0–100)

- A verdict (Healthy / Moderate / Poor)

- Key concerns + positives

- A pie chart showing the risk breakdown

Everything is built using:

- n8n (workflow automation)

- OpenAI (vision + analysis)

- Google Sheets (as a simple database)

- QuickChart (for generating the pie chart)
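
For anyone curious how the pie-chart step works: QuickChart renders a Chart.js config passed in the URL, so the whole thing is one string. The category scores below are made up for illustration:

```python
import json
import urllib.parse

def pie_chart_url(breakdown: dict[str, float]) -> str:
    """Encode a Chart.js pie config into a QuickChart image URL."""
    config = {
        "type": "pie",
        "data": {
            "labels": list(breakdown),
            "datasets": [{"data": list(breakdown.values())}],
        },
    }
    return "https://quickchart.io/chart?c=" + urllib.parse.quote(json.dumps(config))

# e.g. pie_chart_url({"Sugar": 45, "Saturated Fat": 20, "Unhealthy Oils": 15,
#                     "Harmful Preservatives": 10, "Healthy Components": 10})
# gives an image URL Telegram can send straight back via sendPhoto.
```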

You just send a product photo on Telegram and get the analysis instantly.

I also made the full workflow public so anyone can replicate or improve it.

Would love feedback, especially on:

- Improving the scoring logic

- Better ways to structure the food database

- Reducing hallucinations from label parsing

Also open to ideas on turning this into a real product.

u/vishesh_allahabadi — 12 hours ago