u/GradientCastTeam

The algo round is dead at FAANG for ML Engineers. What replaced it (from someone running the loops)

Two years ago I helped a friend prep for a Meta ML engineer loop. We did 200 LeetCode problems together. He was sharp and fast, solving mediums in under twelve minutes. He didn't get the offer.

When the recruiter walked him through the debrief, the feedback was strange. The coding round had been "fine". Not a red flag, not a strong signal. What sank him was the system design round. The interviewer had given him an ambiguous problem about a recommender producing biased outputs and asked what he'd do in the next thirty minutes. He defaulted to architecture. The interviewer kept pulling him back: "what would you do first?"

I've been on the other side of that table for the past two years, running ML system design and behavioral rounds at Meta. What happened to my friend is happening to a lot of strong engineers, and most of them don't know why. They're prepping for a loop that doesn't exist anymore.

What changed: the algo round isn't dead, but its weight in hire/no-hire decisions collapsed. The reason isn't that algorithms stopped mattering — it's that AI got too good at them. By 2025, GPT-class models could solve ~80% of LeetCode mediums on the first try. The skill "produce a clean implementation of two-sum in 12 minutes" stopped predicting anything useful. So companies shifted weight to what AI can't fake yet: judgment, system reasoning, AI-collaboration literacy, communication.

This shift wasn't announced. But anyone on a hiring committee in the past 18 months can tell you it happened. The debriefs sound different. The candidates who get strong-hire recs look different from the ones who got them in 2023.

What's actually being tested in 2026:

  1. Reasoning under uncertainty. Deliberately ambiguous problems with no clean answer. The interviewer watches how you decompose, prioritize, name tradeoffs. Pattern-matching to a template = filtered out. Slowing down, asking questions, reasoning out loud = offer.
  2. System design at production scale. Not "design Twitter" — "design Twitter with a 100ms latency budget, a degrading existing model, a downstream team you can't change, and a vendor you can't swap." Real constraints, real production decisions.
  3. AI-collaboration literacy. New in the last 12 months. Can you tell when a model's output is plausible-but-wrong? Do you verify or trust? Becoming an explicit scoring dimension at Anthropic, OpenAI, increasingly Google/Meta.
  4. Communication / thinking out loud. The candidates who get strong-hire reports almost always reason out loud constantly. Not narrating — *reasoning*. Making assumptions visible. Flagging uncertainty.

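To make point 3 concrete, here's a hypothetical example of the kind of "plausible-but-wrong" output you're expected to catch. An assistant might hand you a textbook-looking softmax that passes a casual sanity check but overflows on realistic logit magnitudes, because it skips the standard max-subtraction trick. The verification habit — actually stress-testing the output instead of eyeballing it — is what the round is probing.

```python
import math

# Plausible-looking softmax an assistant might generate: mathematically
# "correct", but math.exp(x) overflows once logits get large.
def softmax_naive(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Numerically stable version: shift by the max before exponentiating.
# This changes nothing mathematically but keeps exp() in range.
def softmax_stable(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# A casual check passes, which is exactly why the bug is easy to trust:
small = softmax_naive([1.0, 2.0, 3.0])
assert abs(sum(small) - 1.0) < 1e-9

# A stress test on realistic magnitudes exposes it:
try:
    softmax_naive([1000.0, 1001.0])
    print("naive version survived")
except OverflowError:
    print("naive version overflowed")

print(softmax_stable([1000.0, 1001.0]))  # stable version handles it
```

The interview signal isn't knowing this particular bug — it's the reflex of asking "under what inputs would this break?" before shipping anything a model hands you.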
What an E5 Meta ML loop looks like now vs 2022:

2022: two LeetCode rounds, one broad system design, one ML theory round, one behavioral.

2026: one shorter coding round (pseudocode often fine for harder parts), one heavily-constrained system design round, one ML deep-dive on a system you've actually built, one meaningfully harder behavioral.

If I were prepping for a FAANG ML loop today, time allocation:

- 30% system design (real-system reasoning, not whiteboarding)
- 25% behavioral (real story practice, not STAR memorization)
- 20% ML depth on systems you've actually shipped
- 15% coding (~60–80 well-chosen mediums is enough; diminishing returns past that)
- 10% AI-collaboration practice (new — practice reasoning out loud while using AI tools)

The mindset shift: the interview is no longer testing what you know. It's testing how you reason.

LeetCode was about pattern recognition. Modern ML loops aren't. They're about reasoning through novel problems where the pattern doesn't exist. The candidates who get offers slow down when they don't know, ask clarifying questions, name assumptions, propose multiple approaches, pick one with explicit tradeoffs — out loud.

If you've been shipping production ML systems, you already have the skills. You just need to practice expressing them out loud, under pressure, in 45 minutes.

Grind less. Reason more. Talk out loud.

---

Full version with the worked example (Meta E5 round breakdown phase-by-phase) at gradientcast.com/insights/why-faang-killed-the-algo-round
