u/ArifAlizadeh

I found a prompt skill system that makes AI outputs way better

I’ve been testing a lot of AI tools lately, and one thing keeps showing up:

most bad outputs are not really model problems - they’re prompt problems.

If you give an AI a vague request, you usually get a vague answer back.

If you give it context, goal, audience, format, and constraints, the output gets much better.
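As a toy example (my own sketch, not any tool's code), here's what that structure looks like when you make every slot explicit; the field names are made up:

```python
from dataclasses import dataclass

# Hypothetical schema: the point is making each slot explicit,
# not these particular field names.
@dataclass
class PromptSpec:
    context: str      # background the model needs
    goal: str         # what a successful output does
    audience: str     # who will read the output
    format: str       # shape of the answer
    constraints: str  # hard limits: length, tone, banned content

    def render(self) -> str:
        return (
            f"Context: {self.context}\n"
            f"Goal: {self.goal}\n"
            f"Audience: {self.audience}\n"
            f"Format: {self.format}\n"
            f"Constraints: {self.constraints}"
        )

spec = PromptSpec(
    context="B2B SaaS onboarding emails, open rate around 20%",
    goal="Draft 3 subject lines likely to lift open rate",
    audience="Ops managers at 50-200 person companies",
    format="Numbered list, under 8 words each",
    constraints="No clickbait, no emojis",
)
print(spec.render())
```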

I found a prompt helper that seems built around that idea.

What it does:

- works across tools like ChatGPT, Cursor, Gemini, Claude, Midjourney, ElevenLabs, and others.

- asks 3 clarifying questions before generating the final prompt.

- extracts the goal, context, audience, format, and other important details from your rough idea.

- removes unnecessary fluff so the final prompt is tighter and more token-efficient.

The useful part is that it’s not just rewriting your text.

It’s trying to turn a messy thought into something structured enough for an AI agent or model to actually work with.
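I haven't dug through the repo's internals, but a clarify-then-build flow like the one it describes could be as simple as this (purely illustrative):

```python
# Purely illustrative, not prompt-master's actual code: ask a few
# clarifying questions first, then assemble the final prompt.
QUESTIONS = [
    "Who is the audience?",
    "What does a successful output look like?",
    "Any hard constraints (length, tone, format)?",
]

def build_prompt(rough_idea: str, answers: list[str]) -> str:
    details = "\n".join(f"- {q} {a}" for q, a in zip(QUESTIONS, answers))
    return (
        f"Task: {rough_idea}\n"
        f"Details:\n{details}\n"
        "Keep the output tight; cut anything that doesn't serve the goal."
    )

answers = [input(q + " ") for q in QUESTIONS]
print(build_prompt("write a launch announcement", answers))
```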

That matters a lot if you’re building with AI agents, because the quality of the input usually decides how useful the output is.

A lot of people focus on tools and models first, but in practice the real leverage often comes from:

- better task framing,

- better prompt structure,

- and less ambiguity upfront.

That’s what stood out to me here.

The repo is called prompt-master and it’s the kind of thing that can be useful whether you’re prototyping agents, writing workflows, or just trying to get more consistent results from multiple models.

Repo: https://github.com/nidhinjs/prompt-master

u/ArifAlizadeh — 3 days ago

Cold email gets a lot easier when the idea is actually clear

A lot of cold email fails before the email is even written.

People usually blame the subject line, deliverability, copy, or sending setup.

But in most cases, the real problem is much simpler:

the idea was never clear enough.

If the offer is vague, the email will be vague.

If the angle is weak, the message will feel generic.

If the business outcome is not obvious, people will ignore it.

That’s why I’ve started paying more attention to prompt systems that help turn a rough idea into something cleaner before you even start writing.

The useful part is not just “write the email.”

It’s forcing the thinking to happen first.

A good system should help you figure out (see the sketch after this list):

- what you’re actually trying to do,

- who you’re talking to,

- what problem matters,

- what outcome the business cares about,

- and what needs to be removed so the prompt stays tight.
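To make that concrete, here's my own toy template (not from any specific product) that forces those answers before any writing happens:

```python
# Hypothetical template: the point is forcing the thinking to happen
# before the writing, not this exact wording.
def cold_email_prompt(offer: str, audience: str, problem: str, outcome: str) -> str:
    return (
        "Write a 4-sentence cold email.\n"
        f"Offer: {offer}\n"
        f"Audience: {audience}\n"
        f"Problem it solves: {problem}\n"
        f"Business outcome: {outcome}\n"
        "Cut anything that does not serve the outcome."
    )

print(cold_email_prompt(
    offer="Done-for-you outbound setup",
    audience="Agency founders doing $20-50k/mo",
    problem="Reply rates stuck under 1%",
    outcome="More booked calls without more sending volume",
))
```

If you can't fill one of those four slots, that's the signal the idea isn't clear enough yet.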

That matters because cold email only works when the message feels relevant fast.

Nobody wants to sit there and decode what you meant.

They want to know quickly:

- what you do,

- who it’s for,

- why it matters,

- and whether it helps them make or save money.

That’s why better prompts usually lead to better emails.

A vague prompt gives you a vague draft.

A structured prompt gives you a sharper angle.

A sharper angle makes the message easier to understand.

And if the message is easier to understand, the odds of a reply usually go up.

That part gets underestimated a lot.

Most outreach fails because it’s too broad, too generic, or too hard to care about.

A clearer prompt helps fix that before the first line is even written.

I also like this kind of thing because it’s not locked to one tool.

It helps whether you’re working in ChatGPT, Claude, Gemini, Cursor, Midjourney, ElevenLabs, or whatever else you use.

So it’s not really about AI hype.

It’s about making the thinking cleaner before you ask AI to do the work.

That matters a lot in cold email because the quality of the prompt usually shows up directly in the quality of the outreach.

If you’re trying to write cold email for founders, agencies, operators, or any business audience, clarity is the difference between:

- an email that gets ignored,

- and an email that feels specific enough to deserve a reply.

That’s why I think this is actually useful.

Not because it’s flashy.

Because it fixes a real problem.

And I’m sharing it for free because I’m a big believer in Alex Hormozi’s idea of giving away the secret and charging for implementation.

The information itself should be easy to get.

What people usually pay for is:

- setup,

- customization,

- execution,

- and making it work inside a real business.

That’s the hard part.

That’s the part that creates results.

That’s the part worth paying for.

So I’m happy to give away the framework for free.

If someone wants help adapting it to their own workflow, that’s where the real work starts.

If you want the exact framework, comment “cold email” and I’ll share it.

u/ArifAlizadeh — 4 days ago

Seeing “Failed to fetch” in your AI agent? Here’s what usually fixes it.

If your agent can’t open a website, read an article, or pull data, that’s often not a model problem - it’s a tooling problem.

Most LLMs can’t browse the web out of the box. They need external tools for that.
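A “tool” here is just a function the agent is allowed to call. The bare-bones version below (plain HTTP via requests) is exactly what JS-heavy or bot-protected sites defeat, which is why the services in this list exist:

```python
import requests

def fetch_url(url: str, timeout: int = 30) -> str:
    """Fetch raw page content for an agent. Plain HTTP only: no JS
    rendering, and many sites block unknown user agents, which is
    where the dedicated tools below come in."""
    resp = requests.get(url, headers={"User-Agent": "my-agent/0.1"}, timeout=timeout)
    resp.raise_for_status()
    return resp.text
```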

Useful tools for different jobs:

Firecrawl - scraping and crawling entire sites.
https://www.firecrawl.dev/use-cases/ai-mcps
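A minimal single-page scrape with the Python SDK looks roughly like this; the SDK's shape has changed across versions, so treat it as a sketch and check the current docs:

```python
from firecrawl import FirecrawlApp  # pip install firecrawl-py

app = FirecrawlApp(api_key="fc-YOUR_KEY")

# Scrape one page to markdown. Older SDK versions take
# params={"formats": ["markdown"]} and return a dict instead.
doc = app.scrape_url("https://example.com/pricing", formats=["markdown"])
print(doc.markdown[:500])
```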

Jina Reader - turns any URL into clean markdown, great for PDFs and academic papers.
https://jina.ai/reader/
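Jina Reader doesn't even need an SDK: prefix the target URL with https://r.jina.ai/ and fetch it (a key is optional at low volume, last I checked):

```python
import requests

# Prefixing the target URL with r.jina.ai returns a markdown rendering.
resp = requests.get("https://r.jina.ai/https://example.com/paper.pdf", timeout=60)
resp.raise_for_status()
print(resp.text[:500])  # clean markdown of the page or PDF
```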

Tavily - search + extraction for fresh web content.
https://www.tavily.com/
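With the Python SDK, a search plus a short synthesized answer is a few lines (key and query are placeholders):

```python
from tavily import TavilyClient  # pip install tavily-python

client = TavilyClient(api_key="tvly-YOUR_KEY")

# include_answer asks Tavily for a short synthesized answer
# alongside the raw results.
result = client.search("what changed in the latest Playwright release",
                       include_answer=True)
print(result["answer"])
for hit in result["results"]:
    print(hit["title"], "-", hit["url"])
```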

Exa - semantic search for companies, code, and research.
https://exa.ai/mcp
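A sketch with the exa_py client, combining semantic search and page text in one call (key and query are placeholders):

```python
from exa_py import Exa  # pip install exa_py

exa = Exa(api_key="YOUR_KEY")

# Semantic search with page text included in the same call.
res = exa.search_and_contents(
    "companies building MCP servers for web research",
    num_results=5,
    text=True,
)
for r in res.results:
    print(r.title, "-", r.url)
```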

Playwright - for JS-heavy sites where normal fetching fails.
https://github.com/microsoft/playwright-mcp
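The MCP server drives a real browser; if you'd rather script one directly, the Python library gives you the rendered page after JS has run:

```python
from playwright.sync_api import sync_playwright  # pip install playwright && playwright install

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com/spa-dashboard", wait_until="networkidle")
    text = page.inner_text("body")  # visible text after JS has run
    browser.close()

print(text[:500])
```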

Apify - ready-made actors for LinkedIn, Amazon, Instagram, and more.
https://mcp.apify.com/
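With the Python client it looks roughly like this; the actor ID and input schema depend on which actor you pick (website-content-crawler is one public example):

```python
from apify_client import ApifyClient  # pip install apify-client

client = ApifyClient("apify_api_YOUR_TOKEN")

# Run a ready-made actor and read its results; input schema varies per actor.
run = client.actor("apify/website-content-crawler").call(
    run_input={"startUrls": [{"url": "https://example.com"}]}
)
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item.get("url"), str(item.get("text", ""))[:200])
```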

X API - access to X/Twitter through the official API, not HTML scraping.
https://github.com/xdevplatform/xmcp
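If you'd rather hit the official API directly instead of the MCP wrapper, v2 recent search with a bearer token looks like this:

```python
import requests

headers = {"Authorization": "Bearer YOUR_BEARER_TOKEN"}
params = {"query": "from:OpenAI -is:retweet", "max_results": 10}

# Recent search, X API v2; needs a developer account and bearer token.
resp = requests.get(
    "https://api.twitter.com/2/tweets/search/recent",
    headers=headers,
    params=params,
    timeout=30,
)
resp.raise_for_status()
for tweet in resp.json().get("data", []):
    print(tweet["id"], tweet["text"])
```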

All these services have free credits.

I’ve personally been using Tavily’s free tier, and I’ve been pretty happy with it.

What tools are you using for web research in your agents?
