u/Worldly_Manner_5273

Have the best engineers basically stopped applying, forcing companies into passive hiring?

My theory: a lot of the strongest engineers aren’t really “in the market” anymore in the traditional sense.

They’re employed, busy, selective, and only move when something unusually compelling shows up.

If that’s true, then the old model of "post a job and wait for applicants" is fundamentally broken for many senior tech roles.

On the other hand, passive outreach can also be noisy, invasive, and low-quality when it’s done badly.

So I’m curious from the experienced dev side:

Do you think the best candidates are mostly passive now?
Or is “passive talent” just recruiting mythology that sounds smarter than fixing comp, scope, and hiring speed?

u/Worldly_Manner_5273 — 3 hours ago
▲ 6 r/grok

I’m officially calling it: We’re the last generation that will actually know how to "think."

We’ve all joked about "AI brain fry" but the new research from CMU and MIT just turned it into a math problem.

The data shows that AI doesn't just assist. It reconditions your brain to expect an immediate answer. When that reward is delayed (i.e., when you have to actually think), your brain simply shuts down.

The stats are brutal:

  • Sample Size: 1,222 participants across math and reading tasks.
  • The "Cliff": After only 10 mins of use, unassisted performance crashed below the baseline of the control group.
  • Persistence: The first thing to go wasn't accuracy. It was the willingness to struggle. People literally stopped trying.

The researchers are calling it the "Boiling Frog" effect. We think we’re saving time, but we’re actually atrophying the cognitive muscles required for independent thought. We are building a world where "desirable difficulties" (the very thing that builds human skill) are being optimized out of existence.

The water isn't just warm. We’re already cooked.

Full breakdown of the study: https://synvoya.com/blog/2026-04-20-ai-boiling-frog-cognition-study/

Be honest: have you noticed yourself giving up faster on problems since you started using AI daily?

u/Worldly_Manner_5273 — 22 hours ago
▲ 3 r/AiChatGPT+1 crossposts

Claude Mythos 5 is 10 trillion parameters. We have officially hit the "Energy Wall" and nobody is talking about it

Anthropic’s Claude Mythos 5 (rumored/leaked April 2026) is a beast at 10T parameters. Connect this to xAI’s "Colossus 2" supercomputer expansion to 1.5 gigawatts, and we are burning a small country's worth of energy for "better poetry."
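For scale, a quick back-of-envelope sketch of what a 1.5 GW draw actually means, assuming the rumored Colossus 2 figure and continuous full-load operation (both assumptions, not confirmed numbers):

```python
# Back-of-envelope: annual energy at a sustained 1.5 GW draw.
# Assumes the rumored Colossus 2 figure and 100% utilization year-round.
HOURS_PER_YEAR = 24 * 365  # 8760, ignoring leap years

power_gw = 1.5                          # rumored Colossus 2 capacity
annual_gwh = power_gw * HOURS_PER_YEAR  # GW * hours = GWh
annual_twh = annual_gwh / 1000          # 1 TWh = 1000 GWh

print(f"{annual_twh:.1f} TWh/year")  # → 13.1 TWh/year
```

Roughly 13 TWh a year at full load, which really is in the ballpark of a small European country's annual electricity consumption, so the "small country" framing holds up under those assumptions.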

u/Worldly_Manner_5273 — 23 hours ago
▲ 1 r/grok

Why do ChatGPT & Gemini always pick 73, while Claude & Grok pick 42?

I’ve been testing something random but weirdly consistent.

Every time I ask different AI models:

“Pick a number from 0 to 100”

ChatGPT → 73

Gemini → 73

Claude → 42

Grok → 42

And the reasoning is even more interesting.

ChatGPT & Gemini usually justify 73 like this:

“Though, if you happen to be a fan of The Big Bang Theory, it’s considered the ‘best number’ because 73 is the 21st prime, its mirror 37 is the 12th, and 21 is 7 × 3. But for me, it’s just the luck of the algorithm.”

Claude & Grok go with 42, saying:

“It’s famously known as ‘the Answer to the Ultimate Question of Life, the Universe, and Everything’ from The Hitchhiker's Guide to the Galaxy.”

So now I’m curious:

Is this actually randomness, or are these models subtly biased toward culturally “famous” numbers?

Has anyone else noticed this pattern?
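If anyone wants to go beyond eyeballing it, here's a minimal sketch for quantifying the bias: sample the same prompt many times, tally the replies, and compare the top pick's frequency against the 1/101 chance a uniform picker would give it. The `sample` list below is hypothetical hard-coded data standing in for whatever API you actually query:

```python
from collections import Counter

def favorite_number_bias(replies, n_options=101):
    """Tally replies to 'pick a number from 0 to 100' and report how far
    the most common pick deviates from a uniform 1/n_options chance."""
    counts = Counter(replies)
    top, freq = counts.most_common(1)[0]
    observed = freq / len(replies)   # fraction of runs choosing `top`
    expected = 1 / n_options         # uniform baseline
    return top, observed, observed / expected  # lift over uniform

# Hypothetical sample run, shaped like the pattern described in the post;
# replace with real replies collected from the model you're testing.
sample = [73, 73, 42, 73, 73, 37, 73, 73, 42, 73]
top, observed, lift = favorite_number_bias(sample)
print(top, observed)  # e.g. 73 chosen in 70% of runs, ~70x uniform
```

A pattern like the one in the post (73 in most runs) would show a lift of dozens over uniform, which is far too large to be sampling noise at any reasonable sample size.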

u/Worldly_Manner_5273 — 3 days ago