How are startups adapting technical assessments now that candidates use AI anyway? i will not promote
Curious how other startups are dealing with this.
It feels like technical assessments got a lot messier once AI became part of how people actually work.
A lot of coding tests and take-homes were designed for a world where the candidate was basically working on their own. That’s just not really true anymore. Many candidates are using AI in some form, whether companies explicitly allow it or not.
And I’m not even sure the old approaches make sense now.
If you ban AI completely, the assessment can feel kind of artificial, especially if the actual job involves using AI tools all the time.
If you allow it without changing anything, then the signal can get pretty noisy. A polished submission doesn’t necessarily mean the candidate really understood the problem. It could also mean they were good at getting plausible output quickly.
For a startup, that matters a lot because weak screening costs real time. You either pass on good people too early, or spend founder/engineer time interviewing candidates who looked stronger on paper than they really are.
So I’m curious what people are actually doing in practice.
- Are you allowing AI in coding assessments or take-homes?
- Have you changed the format because of it?
- Are you still mostly judging the final output?
- Or are you trying to look at judgment/process somehow too?
Would love to hear from founders or hiring managers who are actually hiring engineers right now. Mostly interested in what’s working in real life, not ideal theory.