r/EngineeringManagers

▲ 2 r/EngineeringManagers+2 crossposts

How are startups adapting technical assessments now that candidates use AI anyway? i will not promote

Curious how other startups are dealing with this.

It feels like technical assessments got a lot messier once AI became part of how people actually work.

A lot of coding tests and take-homes were designed for a world where the candidate was basically working on their own. That’s just not really true anymore. Many candidates are using AI in some form, whether companies explicitly allow it or not.

And I’m not even sure the old approaches make sense now.

If you ban AI completely, the assessment can feel kind of artificial, especially if the actual job involves using AI tools all the time.

If you allow it without changing anything, then the signal can get pretty noisy. A polished submission doesn’t necessarily mean the candidate really understood the problem. It could also mean they were good at getting plausible output quickly.

For a startup, that matters a lot because weak screening costs real time. You either pass on good people too early, or spend founder / engineer time interviewing people who looked stronger on paper than they really are.

So I’m curious what people are actually doing in practice.

  • Are you allowing AI in coding assessments or take-homes?
  • Have you changed the format because of it?
  • Are you still mostly judging the final output?
  • Or are you trying to look at judgment/process somehow too?

Would love to hear from founders or hiring managers who are actually hiring engineers right now. Mostly interested in what’s working in real life, not ideal theory.

reddit.com
u/snoopdoge111 — 1 hour ago

The metric that finally made our sprint planning predictable

When I was a team lead, sprint planning was the single most frustrating part of the job.

Every planning meeting looked the same. Engineers gave estimates in hours, we filled the sprint to the top, everyone nodded, and two weeks later we missed the release date. Then we would do the retro, promise to "be more realistic", and repeat the exact same thing next sprint.

My department head finally sat me down and taught me Focus Factor. It sounds obvious when you read it but I had been ignoring it in practice: a developer does not write product code 8 hours a day. There are DSMs, code reviews, estimation sessions, planning, demo, retro, support tickets, bug fixes, fly-in tasks, shortened days, other projects, vacations. Once you subtract all of that, real product time is usually 50-75% of the calendar.

https://preview.redd.it/pla7b7ecshwg1.png?width=1362&format=png&auto=webp&s=3f7ed6f47d55c9d402adfdc26b7cb7eaf715afcb

So we stopped planning against "working days × 8" and started planning from the Focus Factor.

The formulas are simple:

  • Total Time = working days × 8
  • Task Time = Total - (Processes + Support + Meetings + Fly-in + Wind)
  • Focus Factor = Task Time / Total Time

We also tracked:

  • Stability = 1 - (Fly-in / Total); below 0.85 means the sprint is getting broken into by unplanned work
  • Support Ratio = Support / Total; above 20-25% means the product is probably unstable
  • Adjusted Focus = Task Time / (Total - Wind); fair for people who were only partially available

We started in Excel. Formulas and colored cells: red < 50%, yellow 50-74%, green 75%+. Then we put the metric on a team dashboard and compared sprints over time.
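The spreadsheet logic fits in a few lines of code. A minimal Python sketch of the formulas above (function and field names are mine; all inputs except working days are in hours):

```python
def sprint_metrics(working_days, processes, support, meetings, fly_in, wind):
    """Focus Factor bookkeeping for one team member over one sprint.
    Every argument except working_days is hours lost to non-product work."""
    total = working_days * 8
    task_time = total - (processes + support + meetings + fly_in + wind)
    return {
        "focus_factor": task_time / total,             # share of real product time
        "stability": 1 - fly_in / total,               # < 0.85: unplanned work breaking in
        "support_ratio": support / total,              # > 0.20-0.25: product likely unstable
        "adjusted_focus": task_time / (total - wind),  # fair for partial availability
    }

def band(focus_factor):
    """The Excel colour coding: red < 50%, yellow 50-74%, green 75%+."""
    if focus_factor < 0.50:
        return "red"
    if focus_factor < 0.75:
        return "yellow"
    return "green"
```

For example, 10 working days (80 hours) with 36 hours of overhead gives a Focus Factor of 0.55, which lands in the yellow band.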

The effect was fast. Release dates became achievable because we stopped planning 100% of calendar time. Sprint goals stopped being wishes. Nobody was secretly overloaded because the overload was visible in the numbers before the sprint even started. And when support ratio spiked, we treated it as a system signal (quality, WIP control) instead of blaming someone.

The key mindset shift: Focus Factor is not a productivity metric for people. It is an indicator of organizational noise.

Maintaining the spreadsheet and rebuilding comparisons by hand got annoying, so I wrote a small web app for it.

https://preview.redd.it/80ezljxgshwg1.png?width=1329&format=png&auto=webp&s=c2132cf86ea25374f82d6488671fe1a53ccd8818

It does Focus Factor per member with live recalculation, sprint report with stability / support ratio / adjusted focus, sprint-vs-sprint compare with deltas, team-level trends, and a shareable public report link you can drop in a work chat for sprint review. Jira integration is next.

https://preview.redd.it/e040artjshwg1.png?width=1312&format=png&auto=webp&s=6d9c02ecc149c6220cd9ff2bfc9edfa1381f153e

Kotlin + Spring Boot backend, React + TypeScript frontend. Two demo teams seeded at startup so you can play with it in 2 minutes. EN and ES locales. Light / dark theme.

Curious what actually worked for you. What metrics or tools genuinely improved sprint planning for your team - not what sounded good in a process doc, but what you kept using after six months?

reddit.com
u/Tiana_Dev — 3 hours ago
▲ 2 r/EngineeringManagers+1 crossposts

Rolling out Claude Code to 15 devs — Vertex + LiteLLM instead of direct API. Good idea or overkill?

Hey, we're in the process of rolling out Claude Code to our 15-dev team and figuring out the right architecture before we commit.

Instead of going direct API, we're leaning toward routing through LiteLLM + Google Vertex AI — mainly for token visibility per dev, model flexibility without touching everyone's config, and audit logs for compliance. Anyone running Claude Code through a proxy layer like this? How's the latency in practice, and is the observability actually worth it day to day?
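Concretely, the shape I have in mind is something like this LiteLLM proxy config (all values are placeholders, and the exact field names should be checked against the current LiteLLM docs):

```yaml
# litellm-config.yaml: placeholder sketch, not a tested config
model_list:
  - model_name: claude-sonnet            # the alias devs use client-side
    litellm_params:
      model: vertex_ai/claude-sonnet-4   # placeholder Vertex model id
      vertex_project: our-gcp-project
      vertex_location: us-east5
general_settings:
  master_key: sk-replace-me              # then mint per-dev virtual keys for token attribution
```

Each dev would then point Claude Code at the proxy (e.g. via the ANTHROPIC_BASE_URL environment variable) using their own virtual key, which is where the per-dev token visibility would come from.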

---

Second thing: to standardize how the team uses Claude Code, we're putting together an internal plugin that bundles our own skills, hooks, and workflows so everyone installs the same thing from our repo instead of each dev reinventing their setup. Think code review workflows, testing patterns, commit hooks — stuff that should be consistent across the team.
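For the curious, the rough repo shape I'm picturing (directory names are illustrative, based on my reading of the Claude Code plugin docs; treat it as a sketch, not a spec):

```
claude-team-plugin/
├── .claude-plugin/
│   └── plugin.json     # plugin name + metadata
├── commands/           # shared slash commands, e.g. a code-review workflow
├── skills/             # testing patterns, review checklists
└── hooks/
    └── hooks.json      # commit hooks, pre/post tool-use checks
```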

Has anyone maintained something like this long-term? Curious whether it actually sticks or becomes a ghost repo nobody touches after month 2.

reddit.com
u/Due_Progress_7815 — 1 day ago

How Do I Transition from Software Developer to Engineering Manager?

I have a bachelor’s in CS, about 1.5 years as an SDET and 1.5 years as a software dev. I got promoted to SDE II a few months ago at an average SaaS company (not FAANG/big tech).

I wouldn’t call myself super technical, but I get the job done. My tech leads/managers are happy with my work, and I’m usually one of the stronger devs on my team. I’ve also been told I have good leadership qualities and could make a solid manager in a few years.

Part of why I’m interested in management is that I don’t really see myself becoming a “tech wizard” like most tech leads I see. I’m more interested in the business side of things than constantly chasing the latest tech (though I do try to stay up to date on what’s widely adopted) or deeply understanding the ins and outs of our stack. Also, from what I’ve seen, management roles tend to pay a bit more, which doesn’t hurt.

From LinkedIn and job postings, it looks like the typical path is staying an IC for ~4–8 years before switching. Some roles mention a master’s as preferred, but not as a requirement. I've also seen current managers with MBAs so I’m not sure how much graduate degrees actually matter. I'd like to make the switch ASAP, even if I expect it to take a couple of years or more.

My company will cover part-time studies, so I’m considering either a master’s in CS/software engineering (or another tech field) or an MBA, but I’m not sure which would make more sense for my goal.

TL;DR: Software dev with ~3 years of experience and a BS in CS. I want to move into management eventually. Should I go for a master’s in CS/SE, an MBA, or something else if my employer is paying for it?

reddit.com
u/Noobdle — 2 days ago

L6 SDM -> Sr. IC (outside) with an $80k pay cut to escape babysitting. Am I crazy?

I’m currently an Amazon L6 SDM, and I am completely burnt out from babysitting a team of immature Gen Z / late millennials. I’ve slogged day and night coaching, guiding, and listening to endless whining just to ensure business goals are met. The cost? I’ve completely neglected my own personal development and regressed technically. ALL THIS WHILE BEING ON A VISA!!

To add insult to injury, I just got a bottom band rating (just escaped LE). I don't even blame my manager; I would have done the same in their shoes. The Amazon management process is just fundamentally flawed, and it's a pathetic position to be in right now.

I now have an offer from another company as a Senior IC. It’s a chance to rebuild my technical chops, stop playing agony aunt, and actually become hire-able again. The catch: it comes with a $60k-$80k annual pay cut.

Am I doing the right thing by taking this TC hit to save my sanity, step away from the toddler wrangling, and future-proof my IC skills?

reddit.com
u/Agitated-Web-7955 — 2 days ago
🔥 Hot ▲ 60 r/EngineeringManagers

Retaining random high performer

Hi managers,

Have you ever had an engineer you hired who made you think "why would this genius want to work here?", but you didn't press the matter?

If so, did he/she stay with the company? How were you able to keep your unicorn?

reddit.com
u/Tiredof304s — 4 days ago
▲ 1 r/EngineeringManagers+1 crossposts

Fix Your Planning and Stop Missing Deadlines: Why Story Points Win

Hi everyone, here I share my experience and recommendations on estimating and planning development work. This is what usually works for my teams, but I'd like some feedback: do others experience similar problems, how do you resolve them, and does it line up with what I'm proposing in the article?

bastrich.tech
u/areklanga — 20 hours ago

Clients asking about AI coding platform enterprise deployments and we have no good answers yet

Three of our mid-market clients (300–800 employees each) have asked us in the last month to help evaluate and deploy AI coding platforms. The pattern is striking enough that I'm wondering if other MSPs are seeing the same thing.

Client A is in healthcare. They need HIPAA-compliant AI coding tools, want on-prem deployment, and have 120 developers.

Client B is a defense contractor that needs air-gapped deployment and wants the tool to actually understand their codebase before making suggestions.

Client C is in financial services with around 200 developers. They're currently spending $15k/month on Copilot inference and leadership wants that cut in half.

What's interesting is none of these conversations are saying, should we use AI coding tools. They've already decided yes. The questions are about how to deploy securely, how to manage costs, and how to actually govern usage across teams.

Is there enough consistent demand here to build a formal practice around this? And for those already doing it, what tools are enterprises actually choosing once compliance requirements enter the picture?

reddit.com
u/AccountEngineer — 5 days ago

I stopped asking my engineers for status updates. I just read their agent traces now.

Last September my head of engineering left. I didn't backfill the role and figured we didn't need one. That was a mistake. The engineering process slowly fell apart and I didn't notice until it was bad.

We migrated from Jira to Linear thinking a nicer UI would fix things. It didn't. Tried daily standups and engineers hated them because of timezone spread. Moved to every-other-day. Then async standups on Slack. None of it stuck.

I found myself pinging every dev individually for status updates. These are senior, 10x engineers. When they hit a blocker, they'd rather spend 3 hours solving it themselves than post in Slack.

They're also mostly unaware of what each other is working on. Everyone's burned out on Slack and meetings.

Like many orgs today, all of our devs use Claude Code. One of our strongest engineers told me straight up: "I'd rather collaborate with 6 Claude Code agents than coordinate with teammates."

I laughed, but he wasn't joking.

I thought about it, and a week ago we built an internal tool that logs traces from all our coding agents and creates a shared memory layer for all team members. The result surprised me.

It's not perfect and it's early, but it's the first thing that's actually reduced the communication tax instead of just reshuffling it.

For those of you managing teams that are deep into AI coding tools: how are you handling the coordination problem? Are agents changing how your team communicates, or is it still all Slack and standups?

reddit.com
u/davidbun — 5 days ago

AI has made my job boring

Most things I did as an EM I can now do with an AI. It’s great because I have better 1:1s with my reports and partner teams, don’t need to waste hours understanding what’s going on, and can really focus on what matters. But somehow it feels dull. Progression has halted since the promised AI productivity gains led to hiring freezes, and the only way to get promoted at my company is empire building. Where do you guys find your excitement in the job these days? It was fun trying out all the new AI tools at first, but now I feel a little empty. Considering converting to IC or TLM, or going into sales eng or consulting.

reddit.com
u/pyt1m — 7 days ago

AI adoption success stories - how did you get there?

Like most companies, I am being pushed to "use AI to build something". Each team is on its own, with no central strategy and no success / failure criteria. No shared knowledge, no proper tracking of tokens, nothing. Just: here's Claude Code, look at how other companies did X, Y, Z, and we need to change the world.

Question to those with any degree of successful AI adoption at your workplace - how did that go, and what would you do differently if starting from scratch?

How much was training vs. incentivizing AI-based outcomes? How are you measuring ROI on AI spend? What guardrails have you put in place to avoid costly mistakes? How much upskilling did you rely on vs. bringing in an outside "expert" to pilot the team?

If there are just 3 things that were / could be the most impactful in the AI adoption story, what would those be?

reddit.com
u/Beneficial_Sir_8166 — 6 days ago
🔥 Hot ▲ 59 r/EngineeringManagers

Cognitive load shift from doing work to checking AI work product

I found this article on WSJ from Katherine Blunt to be quite useful.

Gist - AI Is Getting Smarter. Catching Its Mistakes Is Getting Harder.

As chatbots and agents grow more powerful and ubiquitous, recognizing the moments when they go rogue can be tricky.

One of the comments on the article stood out to me -

… AI displaces the cognitive load from the actual doing of work to checking AI generated output …

Does that mean that people are spending more effort/focus on QA or increasing how much testing IC devs do?

wsj.com
u/pvatokahu — 6 days ago
▲ 2 r/EngineeringManagers+1 crossposts

We analyzed 211M lines of code to understand what AI is actually doing to engineering teams. Here's what we found.

We partnered with GitClear and spent the last year talking to hundreds of engineering teams about AI adoption.

The common thread: everyone knows AI is changing things, but nobody is sure if it's helping or hurting. The data surprised us. Code duplication has increased tenfold since 2022.

Refactoring dropped below duplication rates for the first time. And developers feel more productive than ever, even as the codebase gets harder to maintain.

That gap between perceived productivity and actual code health is the thing nobody is measuring, and it's the thing that will bite you in 6 months when velocity tanks for reasons nobody can explain. We put together a framework built around 4 measurement layers: direct AI usage, code health indicators, developer experience signals, and business outcomes. None of them tell the whole story alone. Together they act like a dashboard, because you wouldn't ignore the fuel gauge just because the speedometer looks good.

A few things from the research that stuck with us:

• 79% of code changes now touch code written less than a month ago (up from 70% in 2020). Fast iteration or fast rework? You need to know which.

• Developer experience metrics are leading indicators, typically months ahead of delivery impact. By the time velocity drops, the warning signs were already there.

• One engineering manager put it well: "My developers are flying through tickets, but they can't explain how their code works."

That's a developer experience problem that becomes a velocity problem fast. We wrote up the full frameworks and playbooks in a free guide: "Quantifying the Impact of AI." If your org is trying to justify AI spend or figure out where it's actually creating value (vs. just feeling like it is), it might be worth a read.

youtu.be
u/GitKraken — 5 days ago

I’m wondering what managers across engineering fields and companies, particularly defense, consider experience?

BACKGROUND

I am not a manager. I work at one of the large defense companies as a lead systems engineer. My senior manager is an engineer; other managers I have engaged with are not. Which is fine. So I get asked to be involved in interviews for technical assessment.

It seems my senior manager and I are usually on the same page when assessing technical principles. And I’ve seen engineers get hired into fairly high roles in other areas I slightly interact with who I didn’t believe had the technical expertise for that role.

Listen, it’s NONE OF MY BUSINESS who hires whom in another area. But it led me to think about what “experience” is.

I understand that the experience needed depends on the position: breadth vs. depth. A broader skill set is useful in certain areas, while depth in a more precise technical skill set is better in others.

QUESTION TOPIC

The trend I see is that ‘years of service’ is weighed more heavily in promotions/hirings than contributions to engineering or to the company.

I find ‘years of service’ and ‘contributions to engineering/company’ do not always correlate. I have seen 6-year engineers contribute 10x to the company in real, tangible metrics (e.g. money saved, turnaround time) via process improvement, design changes, and testing efficiency improvements, but get passed up for promotions or hiring because someone else has more years.

Is this a normal trend, or am I looking at this incorrectly because, yes, I am not a manager? And I’ve only worked at one company, because I love my work and my manager has been an incredible mentor.

I find ‘years of service’ is inefficient.

I know everyone needs money, and you should get rewarded for years of work. But I think it can hurt engineers’ growth. Then again, people just job swap to get raises anyway.

Pardon the long explanation. Thanks again.

reddit.com
u/foehammer35 — 6 days ago
▲ 10 r/EngineeringManagers+1 crossposts

Open source desktop app for 1:1 prep and team briefs: no subscription, no cloud

I was solving this partially with Claude Code using custom skills that pull Slack and GitHub data and generate briefs. It worked, but felt disorganized without a visual layer. 

So I ported those Claude Code skills into a proper desktop app. Keepr is a Tauri app that connects to your Slack, GitHub, Jira, or Linear, and produces cited team pulses and 1:1 prep docs.          

A few things that mattered to me:                               

  • No subscription, no cloud. It's as simple as a Claude Code extension. Everything runs on your laptop. There's no backend, no account.
  • Supports direct API keys (Anthropic, OpenAI, OpenRouter) which is more performant than going through Claude Code's proxy. But it still works well with Claude Code too.     
  • Takes a few minutes depending on the volume of data to gather, synthesize, and analyze. Not instant, but thorough.

It's been useful for my own workflow. Feedback is welcome and I'd love contributions from the community. I plan to keep building this and keep it open source.

MIT licensed: https://github.com/keeprhq/keepr

reddit.com
u/mvmcode — 5 days ago

Technical/Non-Technical Engineering Manager - role or candidacy?

Although the terms Technical EM and Non-Technical EM are commonly used in software-field discussions, I've always been reluctant to use them because I'm still confused by them, even today.

Are they referring to a specific type of role, or to a specific person's candidacy/expertise?

Take one of my jobs as an example. In that specific company, EM is a people-manager role: someone who manages people, the team, and the team's operations, but not tech and engineering. Naturally, in hiring, a solid understanding of engineering and good knowledge of tech are a nice-to-have bonus but not must-have criteria; many EMs in the company are not much different from an average junior developer in terms of technicality. My hiring EM was one of the outliers: he used to be an architect at a few companies and "CTO" of a startup, and published books about tech stacks and infrastructure. He's still pretty sharp and stays connected to the technical side, despite having been in a people-focused role for years.

Rephrase:
So... is he a Technical EM (by candidacy/expertise) or a Non-Technical EM (by role)?

Whenever you come across the term "non-technical EM" in conversation, how do you interpret the message?

  1. EMs who're not well versed in tech/engineering? or
  2. EM role that's designated to be people focus (regardless of candidacy/expertise)? or
  3. No standard definition. He/she could mean either #1 or #2.
reddit.com
u/tallgeeseR — 7 days ago