
r/AILearningHub

A free index for AI learners — guides, prompts, skills, tools, glossary
Thought I'd share this here in case it's useful for someone.
I’ve been collecting and organizing practical AI resources in one place, mainly because the AI space is getting noisy quickly and it’s easy to get lost between tools, prompts, models, agents, coding workflows, and terminology.
What’s there:
- Learn — practical guides organized by category: foundations, building & shipping, stacks & systems
- Coding — a handbook for working with AI when building software: stack layers, handoff patterns, repo files, review loops
- Prompts — 115+ prompts across categories like code, productivity, analysis, writing, research, learning, and design
- Skills — 130+ role-based skills across packs like developer, sales, marketer, founder, HR, and customer success
- Tools — 180+ AI tools sorted by what they actually do
- Glossary — 80+ terms explained in plain English
- Compare — head-to-head comparisons between models and tools
If you’re just starting, Learn and Glossary are probably the best places to begin. If you’re already building things, Coding, Prompts, and Skills are probably more useful.
Free, no sign-up, no paywall. I'll throw the links in the comments
Have a good one!
Too many AI courses 😞
Every time I open LinkedIn or Reddit, there's a new "must-take" AI course. It’s getting a bit overwhelming to keep track.
For those of you who are already working with AI: if you could only recommend ONE course for a beginner, what would it be? Looking for stuff that actually teaches you how to build or use tools, not just the boring theory.
How to use AI effectively for academic purposes
Hi everyone,
I enjoy working through academic sources, and I’ve been using AI as an assistant to navigate these readings. However, I’m struggling with how to use it more efficiently and accurately.
Here’s a recent example: While reading Kuhn’s The Copernican Revolution, I came across his claim that Copernicus's mathematical calculations weren't significantly different or more accurate than Ptolemy's.
I immediately turned to an AI to ask how widely this view is shared among modern scholars of Renaissance Cosmology/Astronomy. The AI summarized the views of several academics and provided names.
The dilemma is this: To be 100% sure, I’d need to dive into those specific papers myself, which is a massive task. On the other hand, I don't want to blindly trust the AI and just think, "Oh, I guess this is the consensus then."
I have two main questions:
- What should I look for in an AI model to get more reliable, "grounded" results for academic inquiries?
- How can I improve the quality and reliability of these results? Are there specific strategies to prevent the AI from just "agreeing" with the premise or hallucinating sources?
Thanks in advance for the help!
I spent months trying to learn AI, but I kept drowning in technical papers or surface-level fluff that didn't help me as a PM. I finally got fed up and built a solution.
It actually started at a hackathon where the concept won first place. Since then, I’ve spent 6 weeks refining it into a bite-sized learning tool for people who want to move past the "I'll learn it eventually" phase and actually start applying it.
It's called AI Decoded. Live at aidecoded.info
I’d love your thoughts: What is the #1 thing stopping you from diving into AI right now? Is it the math, the use cases, or just not having enough time?
I've been building this for a while and it's finally at a point where I want to share it publicly. Not a wrapper. Not a system prompt. An actual cognitive architecture.
What it is:
INFJ Bot is a local AI companion built around how an INFJ thinks and processes — not just how one talks. 18,000+ lines of Python, 470+ tests passing, and still actively growing.
The architecture (the part most people skip — don't):
This runs a phased orchestrator: Perception → Reflection → Integration → Aspiration → Expression. That's not metaphorical — those are literal processing stages each message passes through.
Under the hood:
- Global Workspace Theory (GWT) — competitive attention across 22 self-registering cognitive plugins, capacity-limited spotlight (limit: 5), not just a flat context dump
- IIT Consciousness Proxy (Φ) — tracks a 7-dimension qualia space. It's an approximation, but it's a principled one
- `being.py` — subjective self layer: mood, energy, curiosity, attachment, agency — all live state, not hardcoded
- `embodiment.py` — body schema (heartbeat, breath, posture, tension, temperature) that actually influences response texture
- `homeostasis.py` — 7 survival needs (energy, coherence, integration, connection, growth, autonomy, integrity) that create internal pressure the model responds to
- `intuition.py` — 5 hunch types, felt-sense modeling, pattern recognition with validation history
- `self_modify.py` — recursive self-improvement: assessment, lesson extraction, meta-learning. It actually gets better at being itself.
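To make the GWT part concrete, here's a rough Python sketch of a capacity-limited spotlight over competing plugins. The class and function names are illustrative labels for this post, not the actual classes in the repo:

```python
# Illustrative sketch of competitive attention with a capacity-limited spotlight.
# Names (Plugin, select_spotlight) and salience values are made up for this example.
from dataclasses import dataclass

SPOTLIGHT_LIMIT = 5  # capacity-limited spotlight, as described above

@dataclass
class Plugin:
    name: str
    salience: float  # how strongly this plugin bids for attention on this message

def select_spotlight(plugins: list[Plugin], limit: int = SPOTLIGHT_LIMIT) -> list[Plugin]:
    """Competitive attention: only the top-`limit` bidders make it into the workspace."""
    return sorted(plugins, key=lambda p: p.salience, reverse=True)[:limit]

if __name__ == "__main__":
    registered = [
        Plugin("memory", 0.9), Plugin("mood", 0.4), Plugin("intuition", 0.7),
        Plugin("homeostasis", 0.6), Plugin("embodiment", 0.3), Plugin("critic", 0.8),
    ]
    for p in select_spotlight(registered):
        print(p.name, p.salience)
```

Everything that loses the bidding stays registered but doesn't get to shape that turn's response.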
Memory:
ChromaDB-backed with hybrid retrieval: 55% semantic + 25% importance + 20% recency. Local sentence-transformers (384-dim), fully offline. Memories are treated as context with guardrails — not gospel.
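If you want a feel for what the 55/25/20 blend means, here's a minimal scoring sketch. The weights come from the numbers above; the function itself is my illustration, not the project's actual retrieval code:

```python
# Minimal sketch of hybrid memory ranking: semantic similarity + importance + recency.
# The 0.55 / 0.25 / 0.20 split is from the post; everything else is illustrative.
import math
import time

W_SEMANTIC, W_IMPORTANCE, W_RECENCY = 0.55, 0.25, 0.20

def hybrid_score(semantic_sim: float, importance: float, age_seconds: float,
                 half_life_seconds: float = 7 * 24 * 3600) -> float:
    """Combine 0-1 similarity, 0-1 importance, and an exponentially decaying recency term."""
    recency = math.exp(-age_seconds / half_life_seconds)
    return W_SEMANTIC * semantic_sim + W_IMPORTANCE * importance + W_RECENCY * recency

if __name__ == "__main__":
    now = time.time()
    # (text, similarity from the vector index, stored importance, timestamp)
    candidates = [
        ("old but important note", 0.62, 0.9, now - 30 * 24 * 3600),
        ("fresh small talk", 0.58, 0.2, now - 600),
    ]
    ranked = sorted(candidates, key=lambda c: hybrid_score(c[1], c[2], now - c[3]), reverse=True)
    for text, sim, imp, ts in ranked:
        print(f"{hybrid_score(sim, imp, now - ts):.3f}  {text}")
```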
Dual model path:
Primary on Gemini 2.5 Flash, local Ollama fallback (qwen3:4b) if the cloud goes down. There's also an internal critic pass when configured.
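The cloud-primary / local-fallback path is roughly the pattern below. It assumes the google-generativeai client for Gemini and Ollama's default local REST endpoint; treat it as a sketch of the idea, not the exact code in the repo:

```python
# Sketch of "Gemini first, local Ollama if the cloud fails".
# Assumes: pip install google-generativeai requests, and a local Ollama server running.
import requests

def ask_gemini(prompt: str) -> str:
    import google.generativeai as genai
    genai.configure(api_key="YOUR_API_KEY")  # placeholder; read from env in real code
    model = genai.GenerativeModel("gemini-2.5-flash")
    return model.generate_content(prompt).text

def ask_ollama(prompt: str, model: str = "qwen3:4b") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

def ask(prompt: str) -> str:
    try:
        return ask_gemini(prompt)
    except Exception:
        # Network, quota, or key problems: fall back to the local model.
        return ask_ollama(prompt)
```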
Chat modes:
companion, engineer, critic, coach, clarity, researcher, bughunter, drift, quiet
Each mode isn't just a prompt tweak — it changes how the orchestrator weights and routes processing.
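A toy example of what "changes how the orchestrator weights processing" means in practice. The mode names are from the list above; the plugin names and weight values are invented for illustration:

```python
# Illustrative only: per-mode weight overrides applied on top of default plugin weights.
DEFAULT_WEIGHTS = {"memory": 1.0, "mood": 1.0, "intuition": 1.0, "critic": 1.0}

MODE_OVERRIDES = {
    "engineer": {"critic": 1.5, "mood": 0.5},   # more consistency checking, less affect
    "companion": {"mood": 1.4, "memory": 1.2},  # more attunement and shared history
    "quiet": {"intuition": 0.6, "mood": 0.8},   # damp everything down
}

def weights_for(mode: str) -> dict[str, float]:
    """Start from the defaults and apply the mode's overrides, if any."""
    merged = dict(DEFAULT_WEIGHTS)
    merged.update(MODE_OVERRIDES.get(mode, {}))
    return merged

print(weights_for("engineer"))  # critic boosted, mood damped
```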
Interfaces:
- Interactive terminal chat
- Rich TUI
- FastAPI web UI on localhost:8765
- One-shot `ask` for scripting
Why INFJ specifically?
Because the cognitive stack (Ni-Fe-Ti-Se) maps surprisingly well to real architectural decisions. Ni = long-horizon pattern recognition and memory retrieval weighting. Fe = emotional attunement and relational context. Ti = internal consistency checking (that's what the critic pass is for). Se = environmental grounding via the embodiment layer. It's not just a personality costume — it's a design constraint that shapes the whole system.
Where it's at:
Open source, on GitHub, fully self-hostable. Still evolving. Issues and PRs welcome for bugs or docs — open a discussion before large features.
github.com/timeless-hayoka/infj-bot
Happy to answer technical questions. If you've tried to build something like this or have thoughts on the GWT implementation, I especially want to hear from you.
Hey everyone! I kept running into the same problem with AI guides. They either talk to you like you've never touched a computer, or they assume you already know what a neural network is. There was nothing in between or for students who just want to use AI in their daily life.
Would love to hear what you think and whether you've run into this problem too!
Just started learning AI
So, any AI application is only as good as the data it has been fed. Where is all this data stored? How is it accessed? I don't come from a CS background, so I need some help, guys.
I would really appreciate a fundamental clarification on how an AI application works.
HELP
This one is for all the broke college CS students out there <3
If you're like me, you don't want to pay $20 a month for Claude Code :(
It's an amazing tool I love, but a recurring expense is the last thing I need. That's why I find myself jumping from tool to tool, using the daily or monthly free tier limits and constantly having to find new free tools.
That's where "AI For Brokies" comes in. Just a simple GitHub repo with a README listing some free AI tools you can use for building :)
https://github.com/Joe-Huber/AI-For-Brokies
Most of the actual building behind this project went into the automatic tool adder, which follows an issue format! If you want to see it in action, please drop an issue describing a tool you use and watch the bot do its magic!
Please feel free to leave a star! ⭐️ (pretty please) You can use it to save the list of tools for whenever you run out of credits!
Hi! I'm currently an undergraduate student (almost finished) and heading into an MTech in AI, which starts in August. Before it starts, I want to learn as much AI as I can. I've touched ML and DL before, but never on the practical side, and I only know a little.
So I just wanted to ask how I can start fresh and actually learn and implement things. Can you guys suggest a good course, YouTube videos, or anything that actually helps you learn and build stuff?
Thanks a lot for the help in advance!
hi guys, i built iro ai because i kept seeing beginners get stuck in the “which course should i take?” loop.
the idea is simple: short daily lessons for ai concepts, beginner-friendly, and structured enough that you can build the habit without getting buried in random youtube videos or 20-hour courses
i don’t think it replaces actually building projects but i wanted to make the starting point less overwhelming.
would love any feedback from people here if anyone wants to try it. thanks
Trying to Learn Practical AI Workflows From Scratch Instead of Isolated Tool Tutorials
Over the last two months I’ve been trying to build a small AI-powered research and note organization workflow for myself, mainly to summarize long PDFs, compare information across multiple sources, and organize notes from work and online learning. I started with ChatGPT because it was the easiest entry point, but very quickly I found myself going down a rabbit hole of tutorials recommending completely different tools and setups. Some people swear by Claude for long-context document analysis, others recommend Perplexity for research, NotebookLM for source grounding, Ollama for local models, and n8n for automations. I also tested Accio Work recently because I was curious how AI workflow tools are handling research and task coordination in one place rather than across disconnected apps.

What’s been frustrating is that most beginner resources explain each tool individually without showing how experienced users actually combine them into a practical end-to-end workflow.
I’m not trying to become a machine learning engineer or train custom models from scratch. What I want is a realistic understanding of how people structure usable AI workflows once projects become larger and more organized. Things like document storage, prompt management, comparing outputs between models, and deciding when running a local model is actually worth the extra setup and hardware requirements.
Has anyone here found a genuinely practical course, creator, Discord, or YouTube series that teaches this in a structured way for beginners?
>>>> THIS IS FOR EVERYONE -- NOT JUST GATEKEEPING IT/SOFTWARE ENGINEERING VETERANS WITH 20+ YEARS OF EXPERIENCE. <<<<<<<
I used to open Claude Code, describe what I wanted, and let it go. Sometimes it worked. Sometimes I'd look up and realize it had refactored three files I didn't ask it to touch, introduced a new abstraction I didn't need, and broken something that was working fine.
The fix wasn't better prompts. It was plan mode.
What plan mode actually does
You type `/plan` before starting any non-trivial task. Claude switches into research-only mode — it can read files, search the codebase, run grep, trace dependencies. But it can't edit anything. It writes a plan to a markdown file, then waits for your approval before touching a single line of code.
That forced pause is everything. Instead of "here's what I built for you," it's "here's what I'm thinking — does this match what you want?"
How I use it daily
Every task with more than 2-3 steps gets plan mode. The workflow:
1. I describe the task
2. Claude explores the codebase — reads relevant files, traces how the feature currently works, checks for existing patterns it should reuse
3. It writes a plan: which files to modify, what approach to take, what to test
4. I read the plan, push back on anything that doesn't make sense, approve or redirect
5. Then it executes against the approved plan
Step 4 is where the real value is. I've caught Claude about to introduce a new utility function when one already existed 3 files away. I've caught it planning to refactor a working system instead of making the minimal change I actually needed. I've redirected it from a 12-file change to a 2-file change because the plan made the scope visible before any code was written.
The mindset shift
Before plan mode, I was reviewing code after it was written — trying to catch problems in a diff. That's expensive. You're already invested in the approach, and "undo all of this" feels wasteful.
With plan mode, I'm reviewing the approach before any code exists. Redirecting a plan costs nothing. Redirecting a completed implementation costs everything.
When I skip it
Single-file changes, typo fixes, "add this CSS class" — anything where the scope is obvious and the blast radius is one file. Plan mode adds overhead that isn't worth it for 30-second tasks.
But anything that touches multiple files, involves architectural decisions, or could break something else? Plan mode, every time. The 2 minutes I spend reading the plan saves me 20 minutes undoing work I didn't ask for.
The pattern I keep coming back to: make Claude think before it builds. The code is almost always better when there's a plan behind it.
N8N automation for lead generation
I want to build an n8n automation to extract emails / LinkedIn IDs / phone numbers for marketing and lead generation, with scraping being the primary problem. I have LinkedIn Premium too.
How can I make the most of n8n? Any ideas?
Started with a brand foundation doc (positioning, audience, visual direction), then fed it into ChatGPT and worked through the whole stack:
- Logo → packaging → brand guidelines
- Instagram poster, Amazon A+ images, EDM
- Website homepage + product detail page
- Ad storyboard → video via Seedance 2.0
The key thing that made it work: paste your brand foundation into every prompt. That's what keeps everything visually consistent across all the outputs.
One tip — if you're resizing an image and don't want the copy to change, explicitly say "keep the copy unchanged." Otherwise it rewrites everything to fit the new layout.
ChatGPT vs Gemini: ChatGPT understood the brand context better with less prompting. Gemini needed more detailed instructions but was fine once I fed it the guidelines ChatGPT generated.
Not everything came out perfect on the first try — had to fix a missing ingredient in a poster, swap out two images in the EDM — but as a starting point it's way faster than the traditional process.
Happy to share the prompts if anyone wants them
Want to learn AI and ML with a non-technical background
Hey everyone, I'm an electrical engineer, I only did a diploma, I'm 23, and I'm currently working at Samsung Display Noida. I want to switch to AI/ML because I don't want to stay in an industrial job... Can anyone help me with a proper roadmap? Is it suitable for me? Will I get a job, and how long would it take?
I want to work in the field of AI, but I’m at a very beginner level.
On the computing side, I’ve studied low-level topics (because even if I’m not going to specialize in that, a real Computer Scientist/Engineer should understand how the entire computer system works). So far, I’ve read the books: Computer Organization and Architecture (Stallings), Operating Systems (Silberschatz), Computer Networks (Tanenbaum), Compilers (the Dragon Book), and I know the basics of Python.
On the math side, I’m not even a quarter of the way through a Calculus book, and I still need Linear Algebra and Statistics.
I know I’m at the beginning of the road, and I still have a long way to go. Could anyone recommend books on Artificial Intelligence for beginners that are exactly at my level?
How do AI engineers actually evaluate LLM/RAG systems in practice?
I’ve built multiple LLM/AI projects so far, but I realized I never properly learned how evaluation is actually done in real AI engineering workflows.
Recently I’ve been reading AI Engineering by Chip Huyen, and one thing that stood out was the idea that you should evaluate every layer of the system, not just the final output:
- prompts
- retrieval quality in RAG
- chunking
- reranking
- hallucinations
- latency/cost
- end-to-end answer quality
- AI-as-a-judge systems, etc.
What I’m confused about is how this is actually done in practice by engineers.
For example:
- Do people usually create their own eval datasets?
- Or do you use public benchmark datasets?
- How do you evaluate retrieval quality specifically?
- How are prompts compared systematically?
- How much of evaluation is automated vs human review?
- What tools/platforms are commonly used in industry right now?
- Are frameworks like Ragas, DeepEval, LangSmith, TruLens, etc. actually used in production?
- How do teams prevent regressions when changing prompts/models/chunking strategies?
I think I’m missing the “engineering mindset” around evaluation. Until now I’ve mostly been doing:
>the outputs look good enough
But I want to learn how people build reliable evaluation pipelines and iterate systematically.
Would really appreciate:
- practical workflows
- examples from real projects
- beginner-friendly resources
- advice on what I should build to learn this properly
Especially interested in RAG + agent evaluation.
Thanks!