Built an AI agent that tells you whether an npm package is worth using (n8n + Firecrawl challenge)
I recently took part in the March n8n challenge (“Build the Ultimate Web Crawler Agent with Firecrawl”) and ended up building something genuinely useful for dev workflows.
💡 The problem
If you’ve ever evaluated an npm package, you know the drill:
- Check npm downloads
- Open GitHub → stars, issues, commits
- Look for activity / maintenance
- Compare alternatives
That's easily 15–30 minutes per package.
🚀 What I built
I created an AI-powered package evaluator that answers:
👉 “Should I use this package or not?”
You just input a package name, and it gives you a full breakdown.
⚙️ How it works
- 🔥 Firecrawl → finds npm + GitHub URLs dynamically
- GitHub API → stars, issues, last commit
- npm API → weekly downloads
- 🤖 AI agent → converts raw data into insights + recommendation
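The API side of the pipeline boils down to three HTTP calls. A minimal sketch of the endpoints involved (the workflow does this via n8n HTTP Request nodes; the `dataSources` helper here is just illustrative):

```javascript
// Build the three URLs queried for a given package.
// Endpoints are the public npm registry/downloads APIs and the GitHub REST API.
function dataSources(pkg, repoSlug) {
  return {
    // weekly download count
    npmDownloads: `https://api.npmjs.org/downloads/point/last-week/${encodeURIComponent(pkg)}`,
    // full registry metadata (also contains the repository URL)
    npmMeta: `https://registry.npmjs.org/${encodeURIComponent(pkg)}`,
    // stars, open issue count, last push date
    github: repoSlug ? `https://api.github.com/repos/${repoSlug}` : null,
  };
}

// Example:
const urls = dataSources("express", "expressjs/express");
```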
📊 Output (this is the interesting part)
Instead of just numbers, it gives:
- Risk score → Low / Medium / High
- Adoption level → Very popular / Niche
- Issue health
- Alternatives (with trade-offs)
- Final recommendation → Use / Consider / Avoid
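To give a feel for how raw numbers turn into those labels, here's a rough heuristic sketch. The thresholds are illustrative assumptions on my part, not the exact rules the AI agent applies:

```javascript
// Map raw metrics to adoption level, risk score, and recommendation.
// All cutoffs below are illustrative, not the workflow's actual logic.
function evaluate({ weeklyDownloads, openIssues, daysSinceLastCommit }) {
  const adoption = weeklyDownloads > 1_000_000 ? "Very popular"
                 : weeklyDownloads > 10_000    ? "Moderate"
                 : "Niche";

  let risk = 0;
  if (daysSinceLastCommit > 365) risk += 2; // looks unmaintained
  if (openIssues > 500) risk += 1;          // big open-issue backlog
  if (weeklyDownloads < 1_000) risk += 1;   // tiny user base
  const riskScore = risk >= 3 ? "High" : risk >= 1 ? "Medium" : "Low";

  const recommendation =
    riskScore === "Low"    ? "Use" :
    riskScore === "Medium" ? "Consider" : "Avoid";

  return { adoption, riskScore, recommendation };
}

// Example: a huge, actively maintained package
evaluate({ weeklyDownloads: 30_000_000, openIssues: 150, daysSinceLastCommit: 5 });
// → { adoption: "Very popular", riskScore: "Low", recommendation: "Use" }
```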
Also separates:
- Observed facts (data)
- Inferred insights (AI reasoning)
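In practice that separation just means the report keeps API data and AI reasoning in distinct sections, roughly like this (field names and values are illustrative, not the workflow's actual schema):

```javascript
// Illustrative report shape: observed data vs. AI-inferred conclusions.
const report = {
  observed: {                  // straight from the npm/GitHub APIs
    weeklyDownloads: 1_200_000,
    stars: 8_400,
    openIssues: 120,
    lastCommit: "2024-02-20",
  },
  inferred: {                  // produced by the AI agent
    riskScore: "Low",
    adoption: "Very popular",
    recommendation: "Use",
  },
};
```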
😅 Challenges I hit
- Scraping npm/GitHub pages didn’t work well (JS-rendered data missing)
- AI-only approach was slow and inconsistent
- Mapping correct GitHub repo dynamically was tricky
- Handling invalid packages + edge cases took more effort than expected
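On the repo-mapping problem: one reliable trick is that the npm registry metadata usually carries a `repository.url` field, which can be normalized into an `owner/repo` slug instead of guessing from the package name. A sketch (real-world data has more URL variants than this handles):

```javascript
// Extract "owner/repo" from an npm repository URL.
// Handles common forms like "git+https://github.com/owner/repo.git"
// and "git://github.com/owner/repo.git"; returns null otherwise.
function githubSlug(repositoryUrl) {
  if (!repositoryUrl) return null;
  const m = repositoryUrl.match(/github\.com[/:]([^/]+)\/([^/#?]+?)(?:\.git)?$/);
  return m ? `${m[1]}/${m[2]}` : null;
}

// Examples:
githubSlug("git+https://github.com/expressjs/express.git"); // → "expressjs/express"
githubSlug("git://github.com/lodash/lodash.git");           // → "lodash/lodash"
```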
🔑 Biggest takeaway
The best combo ended up being:
👉 Firecrawl (discovery) + APIs (reliable data) + AI (reasoning)
🤔 Curious
Would you actually use something like this before choosing a library?
Or do you prefer manual evaluation?
Happy to share more details if anyone’s interested 👍
Check out the workflow here: https://n8n.io/workflows/14911