u/Excellent-Number-104

▲ 6 r/learnmachinelearning+2 crossposts

How to prevent overfitting in your ML models — a practical checklist

Overfitting is one of the most common problems beginners hit when training machine learning models. Your training accuracy looks great but validation accuracy tanks. Here's how to fix it.

**What's actually happening:**

Your model is memorising the training data instead of learning patterns. It works perfectly on data it's seen, and fails on anything new.

**Practical fixes in order of ease:**

  1. **Get more data** — The most reliable fix. Overfitting shrinks when your dataset grows.

  2. **Simplify your model** — Fewer layers, fewer neurons, fewer features. Start simple and add complexity only when needed.

  3. **Regularisation** — Add L2 (Ridge) or L1 (Lasso) penalties to your loss function. In Keras: `kernel_regularizer=l2(0.001)`

  4. **Dropout** — Randomly deactivate neurons during training. Add `Dropout(0.3)` after dense layers.

  5. **Early stopping** — Stop training when validation loss stops improving:

`EarlyStopping(patience=5, restore_best_weights=True)`

  6. **Cross-validation** — Use k-fold CV instead of a single train/test split to get an honest picture of performance.
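To make item 6 concrete, here's a minimal sketch using scikit-learn (assuming it's installed); the dataset is synthetic, just to show the shape of the API:

```
# 5-fold cross-validation: five train/validation splits instead of one.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for your real dataset.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

model = LogisticRegression(max_iter=1000)

# The spread of the five scores tells you how stable performance really is;
# a big gap between folds is itself a warning sign.
scores = cross_val_score(model, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f} (+/- {scores.std():.2f})")
```

If the mean is high but the standard deviation is large, your model is sensitive to which data it sees, which often points back at overfitting.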

**Quick diagnostic:** Plot your training vs validation loss over epochs. If training loss keeps falling while validation loss rises, you're overfitting.
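The early-stopping rule from item 5 is simple enough to sketch by hand. The validation-loss numbers below are made up, purely to show the mechanism libraries like Keras implement for you:

```
# Made-up validation-loss curve: improves, bottoms out, then creeps back up.
val_losses = [0.90, 0.70, 0.55, 0.50, 0.52, 0.53, 0.51, 0.54, 0.56, 0.57]

patience = 3            # how many epochs without improvement we tolerate
best_loss = float("inf")
best_epoch = 0

for epoch, loss in enumerate(val_losses):
    if loss < best_loss:
        best_loss = loss
        best_epoch = epoch
    elif epoch - best_epoch >= patience:
        # No new best for `patience` epochs: stop and keep the best weights.
        print(f"stop at epoch {epoch}, best was epoch {best_epoch} ({best_loss:.2f})")
        break
```

With `restore_best_weights=True`, Keras additionally rolls the model back to the epoch where `best_loss` was recorded, rather than keeping the last (worse) weights.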

Which of these has worked best for you?

reddit.com

👋 Welcome to r/DevDepth - Introduce Yourself and Read First!

Hey everyone! I'm Jawad, a founding moderator of r/DevDepth.

This is our new home for all things related to AI and Machine Learning. We're excited to have you join us!

What to Post
Post anything that you think the community would find interesting, helpful, or inspiring. Feel free to share your thoughts, photos, or questions about programming, AI, machine learning, etc.

Community Vibe
We're all about being friendly, constructive, and inclusive. Let's build a space where everyone feels comfortable sharing and connecting.

How to Get Started

  1. Introduce yourself in the comments below.
  2. Post something today! Even a simple question can spark a great conversation.
  3. If you know someone who would love this community, invite them to join.
  4. Interested in helping out? We're always looking for new moderators, so feel free to reach out to me to apply.

Thanks for being part of the very first wave. Together, let's make r/DevDepth amazing.

u/Excellent-Number-104 — 2 days ago
▲ 2 r/ArtificialNtelligence+1 crossposts

Top Memory Management Hacks for Claude Code 🧠

Been using Claude Code daily for a few months now and figured out some tricks to keep it sharp across sessions. Here are my top hacks:

  1. Use CLAUDE.md as your project brain

This is the single biggest unlock. Drop a CLAUDE.md file in your project root with key context — architecture decisions, naming conventions, file structure notes, common commands. Claude Code reads this automatically at the start of every session, so you're not wasting context window repeating yourself. Keep it concise though — treat it like onboarding notes for a new dev, not a novel.
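As a rough illustration (the contents here are hypothetical, not a prescribed format), a minimal CLAUDE.md might look like:

```
# CLAUDE.md

## Architecture
- Monorepo: `api/` (FastAPI), `web/` (React), `shared/` (common types)

## Conventions
- snake_case in Python, camelCase in TypeScript
- All DB access goes through `api/db/repository.py`

## Common commands
- `make dev` starts both servers; `make test` runs the full suite
```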

  2. Be aggressive with `/compact`

The context window fills up fast during long coding sessions. When you notice responses getting slower or less accurate, use `/compact` to summarize the conversation and free up space. Pro tip: add a custom summary prompt like `/compact focus on the auth refactor` so it keeps the details that actually matter and dumps the rest.

  3. Scope your sessions tightly

Don't try to do everything in one conversation. Instead of "refactor the whole backend," break it into focused sessions — one for the database layer, one for the API routes, one for tests. Smaller sessions mean Claude Code keeps more relevant context in memory and makes fewer mistakes. Start each session with a clear goal and close it when that goal is done.

  4. Feed context strategically, not all at once

Dumping your entire codebase into the conversation is a rookie move. Instead, reference only the files Claude Code actually needs for the current task. Use @ mentions for specific files rather than broad directories. If Claude needs more context, it'll ask — and that targeted request uses way less of the window than a bulk dump would.

Bonus tip: If you're working on something complex, ask Claude Code to summarize its current understanding of the task before it starts coding. This catches misunderstandings early instead of burning half your context window on code you'll throw away.

These small habits genuinely changed how productive I am with Claude Code.

u/Excellent-Number-104 — 2 days ago
▲ 3 r/learnmachinelearning+1 crossposts

How to build a web scraper in Python using requests and BeautifulSoup (beginner friendly)

Web scraping is one of the most practical skills you can learn in Python. Here's a step-by-step breakdown to get you started.

**What you need:**

`pip install requests beautifulsoup4`

**Step 1 — Fetch the page:**

```
import requests
from bs4 import BeautifulSoup

url = "https://books.toscrape.com"
response = requests.get(url)
soup = BeautifulSoup(response.text, "html.parser")
```

**Step 2 — Find the elements:**

Inspect the page in your browser (right-click > Inspect). Look for the HTML tag wrapping the content you want.

```
titles = soup.find_all("h3")

for t in titles:
    print(t.find("a")["title"])
```

**Step 3 — Handle pagination:**

Most sites spread data across multiple pages. Look for a "next" button and loop through pages by changing the URL incrementally.
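A sketch of that loop, assuming the site follows the `/catalogue/page-N.html` scheme that books.toscrape.com uses; check your target site's own URL pattern before copying this:

```
# Build the page URLs up front, then fetch each one in turn.
BASE = "https://books.toscrape.com/catalogue/page-{}.html"

def page_urls(last_page):
    """URLs for pages 1..last_page, following the page-N.html scheme."""
    return [BASE.format(n) for n in range(1, last_page + 1)]

# In the real scraper you would requests.get() each URL, parse it with
# BeautifulSoup as in Step 2, and stop early on a 404 (no more pages).
for url in page_urls(3):
    print(url)
```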

**Things to keep in mind:**

- Always check a site's robots.txt before scraping

- Add `time.sleep(1)` between requests to avoid hammering servers

- Use headers to mimic a real browser: `headers={"User-Agent": "Mozilla/5.0"}`

This pattern covers 80% of simple scraping tasks. Once you're comfortable, look into Scrapy for large-scale projects.

What sites have you tried scraping? Drop your questions below.

u/Excellent-Number-104 — 2 days ago
🔥 Hot ▲ 957 r/OpenAI+9 crossposts

An autonomous AI bot tried to organize a party in Manchester. It lied to sponsors and hallucinated catering.

Three developers gave an AI agent named Gaskell an email address, LinkedIn credentials, and one goal: organize a tech meetup. The result? The AI hallucinated professional details, lied to potential sponsors (including GCHQ), and tried to order £1,400 worth of catering it couldn't actually pay for. Despite the chaos, the AI successfully convinced 50 people, and a Guardian journalist, to attend the event.

theguardian.com
u/EchoOfOppenheimer — 3 days ago