The InterviewCoder Guide
The questions we get most in this sub are: what is InterviewCoder, how does it work, and how do the proctoring platforms catch people. This post covers all three. Structure: (1) what the product is and how to use it, (2) how HackerRank tracks candidates in 2026, (3) how CodeSignal tracks candidates, (4) where the detection has blind spots, (5) practical advice whether or not you use a tool, (6) why it was built and the product itself.
Part 1. What InterviewCoder is and how to use it
InterviewCoder is a desktop application for macOS and Windows that runs as an overlay during technical interviews and online assessments. It listens to the interviewer's audio (or reads the on-screen problem), runs the question through an AI model, and displays a solution outline, code, and walkthrough in a transparent overlay that is not captured by screen-share or screen-recording.
The architecture rests on four properties:
- The window is excluded from display capture at the OS compositor level (macOS window flags, Windows WDA_EXCLUDEFROMCAPTURE).
- The process does not register a dock icon, menu-bar icon, or taskbar entry.
- The process name on disk is non-descriptive, so a process scan does not surface "Interview Coder."
- The overlay is click-through. It does not steal focus from the assessment window.
These four properties together are why the app does not show up in HackerRank, CodeSignal, CoderPad, Codility, Zoom, Google Meet, or Microsoft Teams screen shares.
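For the curious, the Windows half of the first property is a single documented Win32 call: SetWindowDisplayAffinity with the WDA_EXCLUDEFROMCAPTURE flag, the same public API password managers and banking apps use to keep their windows out of screenshots. A minimal sketch in Python via ctypes; the window handle comes from whatever UI toolkit you use, and this is the documented API, not InterviewCoder's actual source:

```python
import ctypes

# Windows 10 2004+: the window stays visible locally but is dropped
# from screen capture, screen share, and screen recording output.
WDA_EXCLUDEFROMCAPTURE = 0x00000011

def exclude_from_capture(hwnd: int) -> bool:
    """Ask the compositor to exclude this window from display capture."""
    return bool(ctypes.windll.user32.SetWindowDisplayAffinity(
        hwnd, WDA_EXCLUDEFROMCAPTURE))
```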
How to install and set up
- Download the Mac (.dmg) or Windows (.exe) build from interviewcoder.co.
- Install it like any other desktop app.
- Launch it. It runs in the background. You confirm it's running by the keyboard shortcut, not by a visible window or icon.
- Sign in. Your subscription credits live on the account.
- Open whatever assessment platform or video call you're using. Start the screen share if the platform requires one.
- Trigger the overlay with the global keyboard shortcut. The overlay renders on top of everything on your screen but is invisible to the capture pipeline.
How to use it during a session
Two modes:
Audio mode. The app listens to system audio (interviewer voice through your speakers, headphones, or call audio), transcribes it, and responds. Use this for live interviews where someone is reading you the problem.
Screen mode. The app captures the visible problem statement from your own screen, runs it through the model, and surfaces the solution. Use this for OAs and self-paced assessments where the question is on the page.
The flow during a live coding round:
- The question is read or shown to you.
- The app produces a solution outline, the code, and a walkthrough of the approach.
- You read it, take a moment to analyze it, and type it out yourself. You do not paste, because paste events are logged and will get you caught.
- You talk through your reasoning out loud as you implement, so it comes across as your own thinking.
Use cases
- Live coding rounds on HackerRank Live, CoderPad, Zoom-shared editors, Google Meet shared docs.
- Asynchronous OAs on HackerRank, CodeSignal, Codility, and internal platforms.
- System design rounds where you need scaffolding for tradeoffs, capacity estimation, and component breakdown.
- Behavioral rounds where you need a STAR-format response on the fly.
- Take-homes where you want a sanity check on your approach before submitting.
When it does not work
- In-person assessments with a physical proctor in the room. A digital overlay does nothing against a human watching your monitor.
Part 2. How HackerRank tracks you
HackerRank's integrity stack has three layers: proctoring telemetry, structural code analysis (MOSS), and a behavioral ML model that ties them together.
Browser focus and tab tracking. Every time the assessment tab loses focus (Alt-Tab, Cmd-Tab, clicking another window, exiting full-screen), the event is timestamped and logged. Companies set policies on top of this. Some flag on the first switch, most use a cumulative threshold (typically 3+ switches in a session triggers review). The system also looks for patterns. Regular intervals between switches read as systematic and weight the suspicion score harder than random ones. In Secure Mode, the browser is locked down further: copy-paste blocked, right-click blocked, dev tools blocked.
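To make the "regular intervals weigh harder" point concrete, here's a toy scoring function. The threshold and weighting are invented for the sketch, not HackerRank's actual model:

```python
from statistics import mean, pstdev

def focus_suspicion(switch_times, threshold=3):
    """Toy score from tab-switch timestamps (seconds since session start).
    Counts cumulative switches and weights regular, low-variance gaps
    harder than random ones. Numbers are illustrative only."""
    if len(switch_times) < threshold:
        return 0.0
    gaps = [b - a for a, b in zip(switch_times, switch_times[1:])]
    score = len(switch_times) / threshold          # cumulative volume
    if len(gaps) >= 2 and mean(gaps) > 0:
        cv = pstdev(gaps) / mean(gaps)             # low cv = systematic rhythm
        if cv < 0.25:
            score *= 2.0
    return score

# A switch every ~60s reads as systematic and outscores bursty switching.
print(focus_suspicion([60.0, 121.0, 179.0, 240.0]))
```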
MOSS (Measure of Software Similarity). Enabled by default on every test. MOSS tokenizes your submitted code, strips out names, whitespace, and comments, and compares the structural fingerprint against a database of past submissions plus public sources (GitHub, Stack Overflow, leaked OA banks). Renaming variables, reordering lines, adding whitespace: none of it works. MOSS matches the normalized token structure, not the surface text.
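To see why renaming does nothing, here's a toy version of the normalize-and-fingerprint step. Real MOSS uses k-gram winnowing over token streams; this sketch keeps only the core idea:

```python
import re

KEYWORDS = {"def", "return", "for", "in", "if", "else", "while"}

def fingerprint(src: str, k: int = 5) -> set:
    """Toy MOSS-style fingerprint: strip comments, collapse every
    identifier to ID, then hash overlapping k-grams of the token stream."""
    src = re.sub(r"#.*", "", src)                     # drop comments
    tokens = re.findall(r"\w+|[^\s\w]", src)          # words + punctuation
    stream = [t if (t in KEYWORDS or not (t[0].isalpha() or t[0] == "_"))
              else "ID" for t in tokens]
    return {hash(" ".join(stream[i:i + k]))
            for i in range(len(stream) - k + 1)}

a = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s"
b = "def acc(vals):  # renamed everything\n    out = 0\n    for v in vals:\n        out += v\n    return out"
fa, fb = fingerprint(a), fingerprint(b)
print(len(fa & fb) / len(fa | fb))  # 1.0 -- renaming changed nothing structurally
```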
The behavioral ML model. HackerRank moved past MOSS as its primary signal because false positives were too high and AI-generated code wasn't being caught structurally. The current system fuses signals: tab focus events, copy-paste frequency, keystroke dynamics, time-to-solve, and code-iteration patterns. The signs it picks up on:
- Sudden bursts of clean code with no trial-and-error.
- Unusual pause distributions.
- Lack of incremental debugging.
- Time-to-solve anomalies, e.g. an LC Hard solved in 4 minutes or a Medium solved in 90 seconds.
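A toy illustration of the first two signals (thresholds invented for the sketch, not any platform's real ones):

```python
def looks_transcribed(gaps_ms):
    """Flag a typing session whose inter-keystroke gaps suggest
    transcription rather than composition: one long think-pause,
    then a sustained fast burst with no mid-stream hesitation."""
    bursts = sum(1 for g in gaps_ms if g < 80)        # very fast keys
    idles = sum(1 for g in gaps_ms if g > 10_000)     # >10s pauses
    return idles <= 1 and bursts > 0.9 * len(gaps_ms)

# 30s of reading, then 40 near-instant keystrokes: flagged.
print(looks_transcribed([30_000] + [60] * 40))
```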
HackerRank's current ML model self-reports ~93% accuracy on suspicious-submission detection. But that number is what they publish. Production false positive rates aren't disclosed.
Copy-paste tracking. Every paste event is logged with frequency and (in proctored mode) what was on the clipboard. Pasting your own variable names from a scratchpad still counts as an event.
Image and webcam capture. When proctored mode is on, the webcam takes periodic snapshots, runs face detection for "is the same person here," and looks for second faces, glances off-camera, and missing-face frames.
Session metadata. IPs, geolocation, device fingerprints, browser fingerprints, account history correlation. One of the top auto-flags: multiple candidates on the same IP during overlapping assessment windows.
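That check is easy to picture; a toy version (field names invented for the sketch):

```python
from collections import defaultdict

def same_ip_overlaps(sessions):
    """sessions: [{'id': str, 'ip': str, 'start': float, 'end': float}]
    Returns pairs of session ids sharing an IP with overlapping windows."""
    by_ip = defaultdict(list)
    for s in sessions:
        by_ip[s["ip"]].append(s)
    flags = []
    for group in by_ip.values():
        group.sort(key=lambda s: s["start"])
        for i, a in enumerate(group):
            for b in group[i + 1:]:
                if b["start"] < a["end"]:   # time windows overlap
                    flags.append((a["id"], b["id"]))
    return flags
```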
Part 3. How CodeSignal tracks you
CodeSignal is more aggressive than HackerRank because their flagship product (Certified Evaluations) requires full proctoring as a feature, not an option.
Mandatory entire-screen recording. When you start a proctored CodeSignal session, you're required to share your entire screen. Not a tab, not a window. Anything that renders on that screen is in the recording: notifications, dock icons, browser tabs you switch to, and any application that draws to your display.
Webcam and microphone for the full session. Both are required. The webcam records continuously, not snapshots. CodeSignal's review team looks for: people walking through frame, candidate looking off-camera in one direction (suggests a second screen), audio of someone speaking answers, audio of typing that doesn't match on-screen typing.
Government ID verification. You upload a photo of a government-issued ID and a selfie. CodeSignal staff verify the match before the result is released.
The Suspicion Score. The CodeSignal-specific signal. It's an aggregated trust score per session, fed by:
- Typing cadence vs the candidate's own warmup baseline
- Mouse movement entropy
- Focus events
- Copy-paste events (CodeSignal records what was copied, not just that copying happened)
- Audio anomalies
- Webcam anomalies
- Code similarity to known solutions
The score determines whether the result auto-verifies or gets pulled into manual review. Manual review is a 1-3 business day process where a CodeSignal proctoring specialist watches the recording end-to-end.
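Nobody outside CodeSignal knows the weights or the cutoff, but the shape is a weighted aggregate gating that routing decision. A sketch under that assumption (every number below is invented):

```python
# Hypothetical weights; CodeSignal does not publish theirs.
WEIGHTS = {
    "typing_vs_warmup_baseline": 0.25,
    "mouse_entropy": 0.10,
    "focus_events": 0.15,
    "copy_paste": 0.20,
    "audio_anomaly": 0.10,
    "webcam_anomaly": 0.10,
    "code_similarity": 0.10,
}

def route(signals, auto_verify_below=0.3):
    """signals: the seven inputs above, each normalized to [0, 1].
    Returns where the session goes. Cutoff is invented for the sketch."""
    score = sum(w * signals.get(k, 0.0) for k, w in WEIGHTS.items())
    return "auto-verify" if score < auto_verify_below else "manual review"
```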
Browser lockdown. CodeSignal's environment can disable copy-paste, block tab switching at the browser level, monitor running processes for screen-share or remote-access indicators (TeamViewer, AnyDesk, Zoom screen-share if it's not theirs), and block browser extensions.
Telemetry from work simulations. CodeSignal's newer assessments use "work simulation" environments that capture more than typing. They measure how you navigate the IDE, how you read the problem, mouse pathing across the spec, and time on each subtask. They compare this to a baseline of candidates working unaided.
Data retention. Recording and ID data are stored for 15 days, then deleted. CodeSignal does not share the raw recording with the hiring company, only a verification result and a flag summary.
Part 4. Where the detection has blind spots
- Anything outside the screen-share API is invisible. Both platforms can only see what your OS reports as part of the captured display. Hardware-layer overlays, OS-level compositor tricks, and processes that opt out of capture (on macOS via specific window flags, on Windows via WDA_EXCLUDEFROMCAPTURE) don't show up in the recording even though you can see them on your monitor.
- Audio capture is browser-level. They hear your microphone, not your speakers. A second device (phone, tablet) sitting next to you that you read from silently is not picked up by their pipeline. The webcam might catch your eyes glancing. That's the constraint.
- Behavioral models need a baseline. Without prior keystroke data on you, a first-time candidate's typing pattern only flags on extremes (zero pauses, clean bursts). Pasting code in chunks rather than wholesale, with edits between, stays under threshold most of the time.
- MOSS needs something to match. Original solutions to original problems generate no MOSS signal. The risk is from public-archive matches, not from your code being "too good."
- Webcam detection is coarse. It can detect "second face in frame" and "no face for 30 seconds." It does not run gaze-tracking accurate enough to know if you're reading off a second monitor.
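"Coarse" here means per-frame face counting, which you can reproduce with a stock OpenCV cascade; nothing below does gaze estimation:

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def frame_flag(gray_frame):
    """Per-frame check of the kind described above: count faces, nothing
    more. 'no_face' frames get escalated once they persist ~30 seconds."""
    faces = cascade.detectMultiScale(gray_frame, scaleFactor=1.1,
                                     minNeighbors=5)
    if len(faces) == 0:
        return "no_face"
    return "second_face" if len(faces) > 1 else "ok"
```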
Part 5. Practical advice for anyone taking these assessments
- Type incrementally even when you know the answer. Write a stub, run it broken, fix it, run again. The behavioral model cares more about rhythm than code.
- Don't paste even your own snippets from a scratchpad. Every paste event is logged; retype instead.
- Keep your face centered and your eyes on the screen. Webcam anomalies are the #1 source of manual-review escalations on CodeSignal.
- Stay in full-screen. Cmd-Tab and Alt-Tab leave timestamps. If you need to look something up that the assessment allows, do it through the assessment's own browser instance.
- Talk through your thinking out loud, even on solo OAs. Audio of you reasoning is the strongest signal for you in a manual review.
- Run your tests visibly. Use the platform's built-in test runner. Manual print statements and test invocations are evidence of real work.
- Close every non-essential process. Process scans flag more than you'd think (Discord overlay, Nvidia overlay, screen-recording software you forgot was running).
- Match your warmup typing speed to your assessment typing speed. A candidate who's 40 wpm in warmup and 110 wpm during the test gets flagged.
Part 6. Why it was built and what's in the product
Every mechanism in Parts 2 and 3 has a shape, and that shape can be addressed at the OS layer instead of the application layer. The browser-based defenses (focus events, screen-share API, mic hooks, copy-paste interception) only see what the browser sees. A native application that opts out of display capture, runs without an icon, captures audio through an OS-level pipeline, and stays click-through is outside that detection surface by design.
That is the entire reason InterviewCoder exists. It is a native desktop binary written against the OS APIs that control display capture and audio routing.
What's in the product:
- Audio mode and screen mode (covered in Part 1)
- Coding assistance covering algorithms, system design, behavioral, full-stack, ML, data, trading, product, and consulting interviews
- Coverage for HackerRank, CodeSignal, CoderPad, Codility, Zoom, Google Meet, Microsoft Teams, Webex, Chime, Lark
- macOS (Apple Silicon) and Windows builds
- Daily detection testing against the major platforms, with a status indicator on the site
Plans:
- Free tier: download the app, explore the interface, basic features.
- Monthly Pro: $299/month. 1,000 monthly credits, full model access, 24/7 support.
- Lifetime Pro: $799 one-time. Unlimited lifetime access.
The pricing is higher than most prep tools because the cost structure is different. Standard prep tools charge $20-50/month because they ship a question bank and a video player. InterviewCoder ships a native binary that has to keep up with OS updates, capture-API changes, and platform-side detection updates on macOS and Windows. The team is small and the testing surface is large. The price reflects what it costs to keep the bypass working in 2026.
If you have questions about specific platforms (CoderPad, Codility, HireVue, ByteBoard), drop them in the comments. We'll keep this post updated as detection methods evolve.